Genetic testing in children and adolescents with intellectual disability

Purpose of review: Investigation for genetic causes of intellectual disability has advanced rapidly in recent years. We review the assessment of copy number variants (CNVs) and the use of next-generation sequencing based assays to identify single nucleotide variation in intellectual disability. We discuss the diagnostic yields that can be expected with the different assays. There is high co-morbidity of intellectual disability and psychiatric disorders. We review the relationship between variants which are pathogenic for intellectual disability and the risk of child and adolescent onset psychiatric disorders.

Recent findings: The diagnostic yields from genome-wide CNV analysis and whole exome sequence analysis are high – in the region of 15% and 40%, respectively – but vary according to exact referral criteria. Many variants pathogenic for intellectual disability, notably certain recurrent CNVs, have emerged as strong risk factors for other neurodevelopmental disorders such as autism spectrum disorders, attention deficit hyperactivity disorder, and schizophrenia.

Summary: It is now conceivable that etiological variants could be identified in the majority of children presenting with intellectual disability using next-generation sequencing based assays. However, challenges remain in the assessment of the pathogenicity of variants, the reporting of incidental findings in children, and the determination of prognosis, particularly in relation to psychiatric disorders.

INTRODUCTION

Half of all mental health problems encountered in adulthood have already been established by the age of 14, and up to 75% by age 24 [1]. Ten percent of children aged 5-16 years have a diagnosable problem such as conduct disorder, anxiety disorder, attention deficit hyperactivity disorder (ADHD), or depression [2]. These figures are substantially higher in children with intellectual disability [3].

DSM-5 [4] defines intellectual disability as a disorder with onset during the developmental period that adversely affects both intellectual and adaptive functioning, causing deficits in conceptual, social, and practical domains. Mild, moderate, severe, and profound degrees of disability are nowadays defined on the basis of adaptive functioning, rather than in terms of IQ test results, because the everyday reasoning and judgment of people with intellectual disability is often poorer than formal cognitive assessments imply. In the United Kingdom, the management of people with intellectual disability is led by local specialist learning disability services, if these are available. In most cases, the cause of intellectual disability is unknown, especially in people who have a nonsyndromic condition that lacks physical signs.

Genomic variation is probably the leading cause of mild intellectual impairment in the general population. We each possess around 3 million polymorphic nucleotide variants in our genomes, the great majority of which are 'common' in the sense that we share them with a substantial minority of other healthy individuals. These are often called single nucleotide polymorphisms (SNPs), defined as differences in a single nucleotide at a particular position in the genome where such variation is not rare. The cumulative impact of such variants, which occur in over 90% of all human genes, accounts for individual differences in complex physical and cognitive characteristics such as height and general intelligence.
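To make the common-versus-rare distinction concrete, here is a minimal sketch of the frequency-based triage used when interpreting sequencing results. It assumes a hypothetical list of variant records carrying population minor allele frequencies and uses the conventional ≥1% threshold for calling a variant 'common'; the record structure and the threshold are illustrative assumptions, not details from this review.

```python
from dataclasses import dataclass

# Conventional cut-off: variants with a minor allele frequency of at least 1%
# in a reference population are usually called SNPs ("common"); rarer ones
# are treated as single nucleotide variants (SNVs) needing further assessment.
COMMON_MAF_THRESHOLD = 0.01  # illustrative convention, not from the article

@dataclass
class Variant:
    chrom: str
    pos: int
    ref: str
    alt: str
    population_maf: float  # minor allele frequency in unaffected controls

def triage(variants):
    """Split variants into common polymorphisms and rare candidates."""
    common, rare = [], []
    for v in variants:
        (common if v.population_maf >= COMMON_MAF_THRESHOLD else rare).append(v)
    return common, rare

# Hypothetical example records
calls = [
    Variant("chr7", 117559590, "G", "A", 0.23),    # common polymorphism
    Variant("chrX", 153296777, "C", "T", 0.0001),  # rare variant, follow up
]
common, rare = triage(calls)
print(f"{len(common)} common, {len(rare)} rare candidate(s)")
```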
Most mild intellectual disability is probably attributable to common variation. Polygenic variation is also thought to contribute most genetic risk to the development of autism spectrum disorders (ASDs) [5]. A small proportion of such variants are unique, or almost unique, to us as individuals (although they may also be found in our blood relatives). These are often known as private mutations, or as single nucleotide variants (SNVs). If they occur in a protein-coding or regulatory part of a gene, they may alter gene function if they are nonsynonymous with the typical nucleotide at that position. SNVs that arise in the germline of egg or sperm are termed 'de novo'. Disruptive SNVs are often associated with rare syndromes, and in a substantial proportion of such syndromes there is associated intellectual disability [6,7]. De-novo SNVs are thought to be responsible for most severe and profound intellectual disability [8], because affected individuals are unlikely to reproduce (and therefore do not pass on the mutation to future generations). SNVs in over 700 genes have been identified as contributing to autosomal dominant, autosomal recessive, and X-linked intellectual disability [9]. Whole exome sequencing (WES) can detect de-novo coding mutations if they occur within genes, but many SNVs lie in intergenic regions and hence are not picked up by this technique. Whole genome sequencing (WGS) has the capacity to identify rare variants in the regions between genes too, and is reported to find coding mutations and potentially pathogenic structural changes in up to 60% of severe/profound intellectual disability cases. However, the interpretation of WGS-identified noncoding (intergenic) variants is problematic because we know relatively little about their impact on gene regulation [7].

Another class of genetic anomaly is responsible for many cases of intellectual disability. Copy-number variants (CNVs) are structural changes in the genome that may duplicate or delete a segment of DNA. Such CNVs may be inherited or may arise de novo, and they are usually between 15 kb and 1 Mb in length. They contribute to a range of neurodevelopmental disorders in both childhood [10] and adulthood [11]. Not all CNVs are pathogenic [12], but pathogenic CNVs are found in up to 15% of children referred for genetic investigation of developmental delay [13]. Some CNVs are particularly strongly associated with intellectual disability [14], especially the milder forms, and these are often familial. Recent research has shown, in both European and North American general population cohorts, that medium-sized deletions and large duplications of DNA have a small but measurable detrimental impact on the IQ and educational achievement of the people who possess them. The explanation is thought to be that large CNVs disrupt the action of many genes within the affected region, so the detriment to cognition is polygenic in origin.

GENETIC TESTING IN INTELLECTUAL DISABILITY

Chromosomal microarray analysis (CMA) encompasses all types of array-based genomic analyses, including array-based comparative genomic hybridization (aCGH) and SNP arrays. To identify potentially pathogenic CNVs, DNA testing with aCGH is often the first-line diagnostic test for children with intellectual disability, in both Europe and North America [18,19,20]. The introduction of array-based copy-number analysis has led to the identification of both inherited and de-novo microdeletions and duplications in up to 15% of cases [21].
It is important to be aware that the attribution of pathogenicity to a CNV identified by a microarray is by no means straightforward, and there is no universally agreed standard in the United Kingdom. Many CNVs are excessively rare events, and conclusions regarding the apparent strength of their association with cognition and psychiatric risk are therefore critically dependent on the interpretation of genomic data from unaffected comparison populations. In that regard, some variants previously considered to be pathogenic are being re-evaluated in light of increasing knowledge regarding their lack of association with disease in the general population.

KEY POINTS
- CMA detects pathogenic variants in approximately 15% of children with developmental delay.
- WES may detect pathogenic variants in more than 50% of children with developmental delay if used as a first-tier diagnostic test and where parents are available for genotyping.
- Some variants pathogenic for intellectual disability, particularly CNVs, are strong risk factors for other neurodevelopmental disorders such as schizophrenia, ADHD, and ASD.
- There is a paucity of information about specific associations of childhood-onset psychiatric disorders with more recently identified variants pathogenic for intellectual disability.
- Systematic psychiatric phenotyping in genomic intellectual disability disorders is important to inform prognosis and facilitate early intervention.

WES is not routinely available in the UK's National Health Service, but some Regional Genetics Centres are using specialist panel arrays to detect small nucleotide variants that are known to be associated with intellectual disability and other disorders of neurodevelopment. The proportion of sporadic cases of intellectual disability caused by point mutations (SNVs) is unknown. Exome sequencing has led to an increasing identification of de-novo variants [22], but in clinical practice, because of high locus heterogeneity, we often cannot with confidence attribute pathogenicity to individual mutations [8]. Evidence to guide such decision-making is slowly emerging from two recent UK national research studies that have recruited children with intellectual disability of probable genetic etiology. The Deciphering Developmental Disorders (DDD) project employed genome-wide microarray and WES in a nationwide survey of children with complex developmental disorders of probable genetic origin [23]. Damaging de-novo coding mutations were found in 42% of these previously investigated, yet undiagnosed, children, the vast majority (90%) of whom had associated intellectual disability. The more recent 100,000 Genomes Project has used WGS to investigate a similar cohort, but the results are as yet unpublished [24].

WHO GETS TESTED?

A recent review summarizing the outcome of several years of genetic testing for intellectual disability by a London community paediatric clinic [25] stated that referrals had been made by a wide range of paediatric specialists, general practitioners, therapists, and schools. Practitioners called for genetic testing when they predicted it was likely to be of diagnostic value, based on criteria that included significant developmental delay, an unusual physical phenotype, epilepsy, and parental consanguinity. Rarely, if ever, was genetic testing in intellectual disability prompted by a neurodevelopmental disorder that manifested in terms of behavior.
If a genetic diagnosis is made following CMA investigation, this may lead to counseling about the likely prognosis or potential complications associated with the disorder [26]. A positive genetic finding in association with intellectual disability can provide information about recurrence risk in any future children, following cascade testing of biological relatives. It is important to be aware that CMA testing is not without drawbacks. In addition to the challenges of determining the pathogenicity of many variants that could have contributed to the neurodevelopmental disability, there is the ethical dilemma of reporting incidental findings to the family of the affected child. Such incidental findings could include the discovery of risk variants for serious adult-onset diseases, such as breast cancer.

Genetic testing of children with intellectual disability is nowadays focused on the preschool population, whereas most child psychiatry services do not routinely assess children under 6 years of age. However, a recent survey of child and intellectual disability psychiatrists in the United Kingdom found that just over half had directly ordered genetic investigations at some time. Although the majority of psychiatrists thought genetic diagnosis was helpful for the family, the responses suggested that the diagnosis did not often result in management changes [27].

GENETIC CAUSES OF ID AND RISK OF PSYCHIATRIC DISORDERS

A broad range of childhood-onset psychiatric disorders is found in association with intellectual disability [28,29], but there is rarely any evidence of a specific genetic cause, with the exception of some cases of ASD. In adults with intellectual disability, there is a better understanding of the risks of psychiatric comorbidity in those bearing certain rare genetic anomalies, who are then subject to genetic screening. If damaging variants are found that are excessively rare in controls, there is a tendency to assume specificity. But because of high locus heterogeneity, it is hard to draw firm conclusions about the specific psychiatric pathogenicity of individual mutations [8]; mutations in genes associated with intellectual disability cause a wide range of phenotypes [34]. The alternative approach, undertaking broad psychiatric phenotyping of a representative sample of children with intellectual disability who have pathogenic genetic anomalies, is essential in order to set existing findings in context.

An excess of males is ascertained with neurodevelopmental disorders, including intellectual disability. The reasons for this bias are not known, but it is apparently not attributable to X-linked variants, because 'monogenic' X-linked intellectual disability accounts for no more than 8% or so of male cases [35], and a wide range of epidemiological studies has shown that the excess of males over females is up to 50%. This observation has been linked to the theory that females at genetic risk need to carry a higher mutational burden than males in order to manifest the phenotype of neurodevelopmental dysfunction [36]. The phenomenon has been termed the female protective effect [37]. In populations with ASD, we know that males with normal-range IQ are more likely to be referred for genetic testing than females carrying the same autosomal variant. In the Simons Simplex Collection (SSC) of individuals with ASD, rare truncating SNVs show a slight female excess [38], and there are significantly more females than males with large (more than 400 kb) CNVs.
Where the CNV was familial, maternal transmission was significantly more frequent than paternal transmission for these large deleterious CNVs. This contrasts with evidence that the rate of de-novo point mutations is generally increased among older fathers [39].

GENETIC TESTING IN INTELLECTUAL DISABILITY AND AUTISM SPECTRUM DISORDERS

NICE guidelines in the United Kingdom [40] do not recommend routine genetic tests for children with an autistic disorder, but state that these will be done 'as recommended by your regional genetics center, if there are specific dysmorphic features, congenital anomalies and/or evidence of a learning (intellectual) disability'. It used to be thought that children with autistic disorders were usually developmentally delayed. In a sense that is true, insofar as there is a substantially increased risk of ASD in children with intellectual disability [3], and consequently the apparent population prevalence of ASD is influenced by the prevalence of intellectual disability. In the United States, the 2014 National Health Interview Survey of Autism [41] used a revised question ordering and a new approach that asked about autistic characteristics before developmental disabilities. This change resulted in substantial increases in the apparent prevalence of autistic disorders, because children formerly recorded as having developmental disabilities were designated as having a primary diagnosis of ASD instead. At a population level, most newly diagnosed autism is nowadays not associated with generalized learning disabilities, probably because the clinical ascertainment of autistic features in children of normal-range intelligence has improved in recent years [42].

The heritability of autism is very high [43], implying that shared, familial genetic risk factors increase the likelihood of the diagnosis. Most risk at a population level is because of common variation, but this acts additively with rare variation to enhance risk in those ASD cases who carry a strongly acting de-novo variant [44]. A recent review of genetic risk in autism [45] emphasized that the CNVs associated with a high risk of autism overlap with those known to cause intellectual disability. Not only are they expressed in the brain, but they are especially likely to involve genes that are structurally or functionally engaged in chromatin remodeling and transcription regulation. CNVs that are particularly strongly associated with ASD include duplications of 16p11.2 and deletions of 15q13.3, 2p16.3, and 15q11.2 [46].

The yield of genetic testing in nonsyndromal cases of autism in simplex families (which are less likely than multiplex families to carry heritable private mutations) can be estimated from internationally curated samples. Large structural abnormalities (CNVs), which are detectable by microarray, can be found in up to 10% of cases [32]. These CNVs are usually associated with relatively mild learning disabilities, and they comprise both inherited and de-novo anomalies. The wider use of exome sequencing is likely to increase the proportion of cases with an identifiable point mutation or indel that causes loss of function or is otherwise disruptive, the great majority of which are de novo (and likely to be paternal in origin [47]).

CONCLUSION AND FUTURE DIRECTIONS

In summary, when applied as a first-tier test for broadly defined developmental delay, current widely available arrays (aCGH) detect pathogenic variants in approximately 15% of children.
We know that, in a research context, WES gives a greater diagnostic yield of around 40% in children with severe developmental delay, and it is estimated that the yield could exceed 50% if used as a first-tier diagnostic test [48]. We anticipate that WGS may provide even better identification of pathogenic variants, although there is still debate about the interpretation of intergenic SNVs. However, accurate assignment of pathogenicity to SNVs is improving, prompting some to advocate re-analyzing existing data sets [48,49]. Economic analysis suggests that current use of WES reduces healthcare costs when applied to the investigation of intellectual disability [50]. The falling cost of WGS, and the associated improvement in our ability to detect very small CNVs, makes it likely that the first-tier investigation of childhood intellectual disability will be WGS-based in the future. However, our understanding of the relationship between genotype and phenotype in intellectual disability and related neurodevelopmental disorders is at an early stage for the majority of variants. Epigenetic changes are also likely to have important modifying effects on these neurodevelopmental trajectories, but the measurement of epigenetic changes, and the integration of this information with polygenic risk for prognostic prediction, represents a significant research challenge.
Spontaneous Hemothorax by Pulmonary Arteriovenous Malformation during Pregnancy

Abstract

Background: Pulmonary arteriovenous malformation (PAVM) is a rare vascular malformation that may cause hemothorax, especially during pregnancy.

Case Description: A 25-year-old woman presented with sudden-onset left chest pain, dizziness, and dyspnea in the 27th week of gestation. Computed tomography angiography showed left pleural effusion with complete hemithorax opacification and an aneurysmal PAVM. She exhibited hemorrhagic shock and underwent emergency exploratory video-assisted thoracic surgery. A ruptured PAVM was identified, and the bleeding was stopped by wedge resection in the upper lobe of the left lung. The patient's postoperative clinical course was uncomplicated. She subsequently delivered a healthy live baby vaginally at 41 weeks of gestation.

Conclusion: PAVM should be considered in pregnant women with hemothorax. Emergency thoracoscopic surgery is the best treatment option.

Introduction

Pulmonary arteriovenous malformation (PAVM) is a rare vascular malformation resulting from direct communication between the pulmonary artery and vein without interposition of the capillary bed [1]. Most PAVMs are asymptomatic; they are usually found by accident and, occasionally, via dyspnea due to a right-to-left shunt [2]. Symptomatic PAVM is usually related to hereditary hemorrhagic telangiectasia, an autosomal dominant vascular disorder also known as Osler-Weber-Rendu syndrome [3]. The remaining causes are related to malignancy, trauma, hepatopulmonary syndrome, and heart surgery [4]. Pregnancy is considered a risk factor for PAVM, mainly due to increased blood volume and cardiac output resulting in increased cardiac load, as well as relaxation of arterial smooth muscle due to increased levels of progesterone. Although the vast majority of pregnant women with PAVM are asymptomatic, when they present with symptoms such as hemoptysis, rupture, or hemothorax, the condition can be lethal [5][8][9]. As early diagnosis and appropriate treatment are crucial for PAVMs, we herein present the case of a pregnant patient with life-threatening spontaneous tension hemothorax caused by a ruptured PAVM who was successfully treated with emergency thoracoscopic surgery.

Case Description

A 25-year-old woman in the 27th week of gestation presented to the emergency department with the chief complaint of sudden-onset left chest pain, dizziness, and dyspnea without trauma. Physical examination revealed decreased breath sounds on the left side, dull percussion notes, and decreased vocal tactile fremitus, but no evidence of cyanosis. Obstetric ultrasound revealed a single live intrauterine fetus. Computed tomography angiography (CTA) showed left pleural effusion that caused complete hemithorax opacification and an aneurysmal PAVM with a feeding branch of the upper right pulmonary artery and a dilated draining vein (Fig. 1).
Systolic blood pressure decreased to 70 mm Hg despite continuous intravenous infusion. Initial laboratory test results revealed normal platelets, a normal coagulation panel, and hemoglobin of 7.8 g/dL. Thoracentesis revealed blood collection in the left chest, and the patient was diagnosed with hemothorax with persistent bleeding. After multidisciplinary discussions with anesthesiologists and obstetricians, we decided to treat the primary cause first and extend the gestational age to the extent possible. We decided to perform emergency video-assisted thoracic surgery (VATS) because the patient's condition was considered life-threatening. After removal of the retained thrombus and blood inside the pleural space, measuring approximately 3,000 mL, a ruptured PAVM was identified in the upper lobe of the left lung (Fig. 2). Wedge resection was performed using an endostapler (Fig. 3). No other obvious lesions were found in the lung parenchyma or thoracic wall. As a result, the bleeding was successfully stopped, and the patient's vital signs recovered. Red blood cells (800 mL) and fresh-frozen plasma (800 mL) were transfused during the surgery. Histological examination of the resected lung specimen confirmed a diagnosis of PAVM (Fig. 4). The postoperative clinical course was uncomplicated, and the patient was discharged on the sixth postoperative day. At the time of publication, the patient had vaginally delivered a live baby. She and the baby are currently healthy.

Discussion

Our case report describes a pregnant patient affected by a ruptured PAVM that caused a massive hemothorax compressing the lung, which was successfully treated with emergency thoracoscopic surgery. Moreover, the report summarizes the diagnosis and appropriate treatment of PAVMs in pregnant women to avoid life-threatening complications and reduce maternal mortality from the disease.

The prevalence of hemorrhage associated with PAVM is extremely low; however, the risk increases during pregnancy [10], which is related to the increase in cardiac work and blood volume, as well as the effect of the estrogen-progesterone imbalance on vessels [2]. Once a PAVM ruptures, bleeding into the pleural cavity results in hemothorax and may lead to progressive dyspnea, pleuritic pain, hypoxia, and hypovolemic shock.

CTA is useful for the diagnosis of PAVM rupture, but it is not recommended in pregnancy because of fetal radiation exposure. However, the fetus is exposed to a maximum radiation dose of 0.66 mGy when chest CT is performed, whereas a dose of ≥1 Gy is lethal to the fetus. Therefore, chest CT is safe and can be performed as needed [7].

Two therapeutic options are recommended for treating patients with PAVM: transcatheter arterial embolization (TAE) and surgical resection (ligature, wedge resection, segmentectomy, lobectomy, and pneumonectomy). Surgical resection of the PAVM is recommended for all patients who can receive general anesthesia, especially patients with neurological complications, newborns, or central localization of the PAVM [11]. In cases of massive hemothorax, surgery is the only choice [11]. TAE is also an effective treatment and should be performed at the time of diagnosis or when the following criteria are satisfied: progressive enlargement of a detected PAVM, paradoxical embolization, symptoms of hypoxemia, and feeding vessels of 3 mm or larger [6].
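The TAE criteria above amount to a simple decision rule, and the following minimal Python sketch encodes them for illustration only. The function name, the input fields, and the any-one-criterion-suffices reading of the list are assumptions of the sketch rather than statements from the report; such decisions are of course made clinically, not by code.

```python
from dataclasses import dataclass

@dataclass
class PAVMFindings:
    progressive_enlargement: bool  # detected PAVM growing on follow-up imaging
    paradoxical_embolization: bool
    hypoxemia_symptoms: bool
    feeding_vessel_mm: float       # diameter of the largest feeding vessel

def tae_indicated(f: PAVMFindings) -> bool:
    """Return True if any criterion listed in the text is met.

    Reading the list as "any one criterion suffices" is an assumption
    made for this sketch.
    """
    return (
        f.progressive_enlargement
        or f.paradoxical_embolization
        or f.hypoxemia_symptoms
        or f.feeding_vessel_mm >= 3.0  # "feeding vessels of 3 mm or larger"
    )

# Hypothetical example: asymptomatic PAVM with a 3.5 mm feeding vessel
print(tae_indicated(PAVMFindings(False, False, False, 3.5)))  # True
```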
In our case, the patient presented with rapid hemorrhagic shock without an obvious cause. After consulting with the obstetrician and the radiologist, we immediately performed enhanced CT. A PAVM was found in the upper lobe of the right lung, although the lesion on the left side was indistinct. Considering the possibility of hemothorax caused by rupture of the PAVM, we performed an emergency VATS first, prepared to convert to thoracotomy if necessary, through which we clearly identified the ruptured and bleeding PAVM and successfully achieved hemostasis by removing the PAVM via wedge resection of the lung. Collaboration with anesthesiologists and obstetricians before, during, and after surgery is crucial for saving both the mother and the fetus.

Previous studies have reported that hemothorax caused by rupture of PAVMs during pregnancy usually occurs in the third trimester [7,12]. Those reports therefore all chose to perform a cesarean section to terminate the pregnancy first and then perform thoracoscopic surgery. However, in our case, the patient was in the second trimester, and we chose to perform emergency thoracoscopic surgery immediately to determine the cause and treat the disease, rather than directly terminate the pregnancy. At the time of publication, the mother and baby were healthy. We have fully informed the mother that the lung condition should be fully assessed as soon as possible after the end of lactation, and that interventional therapy should be performed if necessary.

Several key factors were involved in the success of this patient's outcome. First, timely and rapid thoracoscopic surgery was performed to determine the cause and to remove the lesion. Second, multidisciplinary cooperation, including obstetricians, anesthesiologists, and other departments, contributed to quick and accurate judgment and action. Finally, systematic treatment and nursing were provided during the perioperative period.

Fig. 1: CT showing an aneurysmal pulmonary arteriovenous malformation located in the upper lobe of the right lung and a massive pleural effusion on the left side.
Fig. 2: A ruptured pulmonary arteriovenous malformation in the upper lobe of the left lung.
Atrial fibrosis: an obligatory component of arrhythmia mechanisms in atrial fibrillation?

Atrial fibrosis is common in atrial fibrillation (AF). Experimental studies have provided convincing evidence that fibrotic transformation of atrial myocardium results in deterioration of atrial conduction, increasing anisotropy of impulse propagation and building boundaries that promote re-entry in the atrial walls, which may be directly relevant to the mechanisms responsible for maintaining AF. Whether fibrosis is a result of structural remodelling caused by persistent AF or a manifestation of an occult myocardial process that leads to development of the arrhythmia is less clear. Human data indicate the presence of an association between the persistency of AF and the extent of structural changes in atrial myocardium. The role atrial fibrosis plays in the mechanisms of AF, however, may differ between patients with structurally normal hearts, such as those with lone AF, and those with advanced cardiovascular comorbidities.

Introduction

Advances in clinical and fundamental research over the last decades have led to a well-established understanding of atrial fibrillation (AF) as an epiphenomenon that, despite similar manifestations, may have different underlying mechanisms and thus require individualized treatments [1]. With the rare exceptions of AF caused by mutations in genes coding for ion channels in patients with structurally normal atria, fibrotic replacement of atrial myocardium remains the cornerstone of atrial pathology in patients with AF. However, we are still struggling to understand the mechanisms underlying the structural abnormalities observed in the atrial walls of patients with the arrhythmia and their relationship to the arrhythmia mechanisms. The common perception of AF as a result of the interplay between structural changes in the atrial myocardium induced by well-described cardiovascular risk factors and structural remodelling induced by the arrhythmia itself has recently been challenged by observations of progressive structural abnormalities in the atrial walls that occur independently of cardiovascular comorbidities and the persistency of AF [2]. It is also well known that lone AF is not an uncommon clinical entity and may manifest early in life without any apparent risk factors that would explain the development of atrial fibrosis in patients with structurally normal hearts [3]. To what extent this fibrotic atrial cardiomyopathy represents a 'common cause' of AF, or a mechanism responsible for arrhythmia development in a subgroup of patients with the AF phenotype, remains however uncertain.

Atrial fibrosis and arrhythmogenesis

Fibroblasts represent an integral part of functioning myocardium: they provide a cellular scaffold, maintain the proper three-dimensional network required for normal mechanical function, and contribute to the uniformity of the excitable substrate and to the uninterrupted and rapid propagation of electrical activation through the myocardium [4]. In addition, fibroblasts play a role in the regulation of cardiomyocyte function by slowing down conduction in response to mechanical stretch [5], and in pathological conditions they may proliferate, differentiate into myofibroblasts, and increase production of extracellular matrix.
In the pathological conditions associated with the development of fibrosis in patients with AF, fibroblasts may proliferate, increase production of extracellular matrix, and differentiate into myofibroblasts that may directly slow down conduction [6]. As a result of fibrosis development, the architecture of fibrotic myocardial tissue becomes heterogeneous, thereby affecting intercellular conduction [7,8], increasing its anisotropy and leading to conduction slowing and the development of functional and structural block, thus creating the arrhythmic substrate. Whether fibrosis itself may serve as an arrhythmia-initiating mechanism in the setting of clinical AF is not clear. The impact of the structural changes associated with fibrosis development, however, does not seem to be limited to arrhythmia maintenance mechanisms related exclusively to conduction slowing and the creation of boundaries needed for the perpetuation of AF. Computational modelling studies suggest that fibrosis-induced disorganisation of electrotonic coupling [9] may also lead to increased automaticity and atrial ectopy, thus potentiating arrhythmia triggering mechanisms [10]. Despite the tremendous progress in the field that has increased our understanding of the fine mechanisms involved in the development of fibrosis in atrial walls, our knowledge of the role fibrosis plays in the development of AF remains limited. While fibrosis is beyond doubt the most consistently reported structural abnormality of the atrial walls in patients with AF, a large number of patients with an accumulation of fibrosis-causing clinical risk factors, such as advanced congestive heart failure (CHF), hypertension or diabetes, remain arrhythmia free, as shown in studies utilizing implantable rhythm monitoring technology [11].

Atrial fibrosis in experimental models of AF

A number of experimental studies have addressed the issue of structural remodelling of atrial myocardium in animals with AF induced either by prolonged rapid atrial pacing without directly affecting the ventricles [12][13][14] or by induction of congestive heart failure through pacing the right ventricle at a high rate [12,15,16,17]. The principal difference between these two experimental setups lies in their respective propensity to induce structural abnormalities of atrial myocardium, and development of atrial fibrosis in particular. While CHF models consistently reported rapid development of atrial damage [17] and irreversible atrial fibrosis associated with conduction heterogeneities and AF stability, rapid atrial pacing models demonstrated a wider spectrum of structural alterations with less pronounced fibrosis development. Some studies indicated that the likelihood of fibrosis development in the rapid atrial pacing model of AF strongly depends on the resulting ventricular rate [18], so that animals with prolonged rapid atrial pacing and a ventricular rate controlled by means of atrio-ventricular (AV) node ablation develop atrial fibrosis to a significantly lesser extent than those with preserved AV conduction [19,20]. Importantly, animals with a controlled ventricular rate and minimal structural alterations were also less likely to develop persistent AF, thus supporting the link between atrial fibrosis and AF mechanisms [19]. Even though it may be problematic to prove this in clinical studies, these observations made in experimental models of AF are of clear relevance to human AF in patients with and without adequate rate control.
Histological evidence of atrial fibrosis in patients with AF

In humans, an indirect indication of the link between cardiovascular comorbidities and AF comes from epidemiological studies, in which potentially fibrosis-causing conditions such as hypertension, ischemic heart disease, and diabetes were highly predictive of incident AF [21]. The age-related increase in the prevalence of AF has also been well documented [22] and explained by the growing cardiovascular disease burden in the elderly as well as an age-related increase in the extent of atrial fibrosis [23]. However, attempts to provide a quantitative assessment of atrial structural abnormalities associated with AF have shown a more complex picture. Even though catheter-based techniques of endocardial voltage mapping and emerging non-invasive MRI have shown their value in the visualization of atrial structural abnormalities, histological evaluation of atrial tissue samples remains the gold standard of tissue characterization. This approach, however, is often limited to the small volume of tissue that can be collected in patients undergoing atrial biopsy, or confined to the right or left atrial appendages in patients undergoing open-chest heart surgery, thus imposing a significant selection bias and leaving large portions of the atrial walls, in which AF perpetuates, out of reach.

One of the first observations of the structural substrate of AF in patients without apparent structural heart disease came from the studies by Frustaci, et al. [3,24], who collected biopsies from the atrial septum as well as from the ventricles in patients with lone AF and reported a consistent finding of myocardial inflammation and fibrosis confined to the atrial myocardium and not present in the ventricular walls. These studies were the first to suggest the presence of an occult myocardial disease that might have a direct causal relationship with the development of AF. Our group has further expanded this concept by studying histology specimens from multiple sampling locations in the right and left atrium, collected post mortem from deceased patients with common cardiovascular comorbidities with previous paroxysmal or permanent AF and from those without an AF history, enrolled in a 1:1:1 fashion according to pre-specified inclusion criteria [25]. The extent of fibrosis and fatty tissue in the atrial myocardium showed a strong and significant correlation with the presence of AF at all tissue sampling locations in the left and right atria. Notably, patients with and without AF did not differ with regard to cardiovascular comorbidities, and no age-related increase in the extent of atrial fibrosis was observed. Similar observations were made in patients with persistent or long-standing AF referred for surgical ablation [26], thus suggesting that the development of structural abnormalities in the atria is not a result of concomitant diseases but rather a phenomenon associated with AF. Indirect assessment of atrial fibrosis using an MRI technique in a large cohort has further supported this theory by not finding any significant differences in the estimated fibrosis extent between AF patients with and without comorbidities [27]. To the best of our knowledge, however, there are no histology data that specifically address the question of causal relationships between the burden of concomitant cardiovascular diseases and atrial fibrosis in patients with AF.
Contrary to the findings in lone AF [24], we did observe a similar extent of fibrotic replacement and inflammatory infiltration in the free walls of the right and left ventricles in patients with common cardiovascular comorbidities. In our controlled study, ventricular fibrosis demonstrated a strong correlation with AF history and with the extent of fibrosis in the major atrial conduction routes such as Bachmann's bundle and the terminal crest [28]. These findings may be interpreted as indicating an underlying occult cardiomyopathy with a significant inflammatory component in patients with AF, a viewpoint that has recently received support from a study of left ventricular function and energetics in patients with lone AF [29]. As demonstrated by Wijesurendra, et al. [29], not only did patients with lone AF have a reduced phosphocreatine-to-ATP ratio as a measure of ventricular energetics, but this function remained lower than in control subjects regardless of both the recovery of sinus rhythm and freedom from recurrent AF during long-term follow-up.

Atrial fibrosis in AF: cause or consequence?

Whether the structural abnormalities observed in the atria are the cause or the consequence of AF remains, however, an open question. The presence of a relationship between the extent of fibrosis and AF burden does not give us a definitive answer and could be explained both ways: an expansive fibrotic process in the atria may promote persistent AF, or it may be a consequence of the long-standing fibrillatory process. The lack of such a relationship, however, would favour the concept of a primary occult cardiomyopathy underlying AF development. Available data suggest that the extent of fibrosis tends to be higher in patients with permanent compared with paroxysmal AF [25], but the relationship between the extent of structural abnormalities and the duration of AF seems to disappear in patients with persistent AF [26]. In another study that quantified the expression of extracellular matrix proteins in atrial tissue samples collected during heart surgery, no systematic difference between patients with paroxysmal and permanent AF was documented [30]. Even though this does not resolve the causality issue, one can speculate that the fibrosis extent in the atrial walls may be linked to AF burden and clinical manifestations of the arrhythmia at the early stages of the disease but, upon reaching a certain level, no longer affects the AF phenotype in patients who develop persistent AF.

If fibrosis is directly related to the mechanisms governing development of the AF substrate rather than being a consequence of the arrhythmia itself, then interventions attenuating fibrosis expansion would be expected to slow down or abolish AF progression. Indeed, there is considerable experimental evidence that a number of compounds, such as angiotensin-converting enzyme inhibitors, AT1-receptor blockers, or statins, may delay the structural remodelling process and reduce AF stability [31][32][33][34]. The clinical validity of these findings is, however, uncertain. The experimental findings have been supported by data presented in a recent meta-analysis (at least with regard to ACE inhibitors and AT1-receptor blockers) [35], and upstream therapy with these compounds has been advocated by the European Society of Cardiology guidelines for primary and secondary prevention of AF [36].
However, it is important to observe that the evidence is strongest in patients with heart failure and in those who otherwise have indications for the upstream therapy drugs, while studies that used AF as a pre-specified endpoint demonstrate less convincing and conflicting results [37][38][39]. The efficacy of pre-treatment with candesartan [40] or pravastatin [41] for the reduction of AF relapse after cardioversion was not proved in two randomized placebo-controlled studies. It is likely, though it remains unproven, that the efficacy of fibrosis-reducing drugs for AF prevention may depend on the degree of structural remodelling and the irreversibility of the fibrotic transformation of atrial myocardium achieved by the time drug therapy is initiated, so that early administration of the drug would affect the course of the disease, while initiation of therapy late in the course of the disease would not affect the outcome.

Conclusion

Long-term observational studies would be able to resolve this controversy if they showed that successful rhythm-control interventions or upstream fibrosis-reducing therapies can slow down or abolish the progression of the atrial structural changes; however, direct histological evidence of this cause-effect relationship is lacking at this point. On the contrary, evidence of progressive cardiac remodelling that continues after successful catheter ablation [42], or is not reversed by the achieved freedom from arrhythmia [29], is emerging, thus highlighting the remaining gaps in our understanding of the mechanisms of this common rhythm disorder.
Phage predation accelerates the spread of plasmid-encoded antibiotic resistance

Phage predation is generally assumed to reduce microbial proliferation while not contributing to the spread of antibiotic resistance. However, this assumption does not consider the effect of phage predation on the spatial organization of different microbial populations. Here, we show that phage predation can increase the spread of plasmid-encoded antibiotic resistance during surface-associated microbial growth by reshaping spatial organization. Using two strains of the bacterium Escherichia coli, we demonstrate that phage predation slows the spatial segregation of the strains during growth. This increases the number of cell-cell contacts and the extent of conjugation-mediated plasmid transfer between them. The underlying mechanism is that phage predation shifts the location of fastest growth from the biomass periphery to the interior, where cells are densely packed and aligned closer to parallel with each other. This creates straighter interfaces between the strains that are less likely to merge together during growth, consequently slowing the spatial segregation of the strains and enhancing plasmid transfer between them. Our results have implications for the design and application of phage therapy and reveal a mechanism by which microbial functions that are deleterious to human and environmental health can proliferate in the absence of positive selection.

Phage are integral components of microbial ecosystems that can direct ecosystem functioning and dynamics, with consequences for human health, biotechnology, and elemental cycling [1][2][3][4][5]. The enormous influence of phage stems from them being the most abundant biological entity on Earth while also being effective predators [6][7][8]. They have highly selective host ranges, which can cause specific changes to microbial abundances, diversity, and interactions that can modify ecosystem functioning and stability [9][10][11]. They also impose strong selection pressures on their hosts that can drive ecosystem dynamics over ecological and evolutionary timescales [12][13][14].
The effectiveness of phage predation depends on whether the host is in a surface-associated state (e.g., biofilms, colonies, aggregates, etc.) [15,16]. This is typical, as a large proportion of microbial life is associated with surfaces [17][18][19]. Surface association can increase phage predation by increasing local phage concentrations and the duration of physical contacts with host cells [20]. However, surface association can also repress phage predation by causing changes to host physiology and local environmental conditions [21,22]. This includes forming an extracellular matrix that can slow phage transport to host cells [22,23], creating regions of low metabolic activity that are less susceptible to phage predation [24][25][26], and inducing the secretion of molecules that inhibit phage [24,27]. Surface association also enables the process of microbial spatial self-organization, whereby different microbial populations arrange themselves across space as a consequence of their traits, local environmental conditions, and interactions with neighboring cells [28][29][30]. This can result in spatial patterns that physically protect or expose host cells to phage [15,21,22,31]. Phage predation can also feed back on spatial self-organization and, in turn, modify local environmental conditions and interactions [32,33]. Thus, there is a complex interplay between phage predation and spatial self-organization that can determine the dynamics and functioning of surface-associated microbial ecosystems.

Here, we hypothesize that phage predation can drive the spread of plasmid-encoded antibiotic resistance by modifying microbial spatial self-organization. More precisely, we hypothesize that phage predation increases the spatial intermixing of different microbial populations during surface-associated growth, consequently increasing the number of cell-cell contacts and promoting conjugation-mediated plasmid transfer between them. Our hypothesis is based on fundamental principles of surface-associated microbial growth. Only those cells located at the biomass periphery typically have access to resources replenished from the environment, and only those cells therefore grow and contribute to new biomass (Fig. 1a; referred to as the active layer) [28,34,35]. Because their population sizes are small, they are subject to stochastic fluctuations that cause different microbial populations to spatially segregate along the biomass periphery (Fig. 1a), which is referred to as spatial demixing [28,35,36,37]. Briefly, the interfaces between microbial populations stochastically meander during surface-associated growth, which can cause neighboring interfaces to merge together (Fig. 1a). This reduces the number of cell-cell contacts between different microbial populations and the probability that plasmid transfer will occur between them [38,39]. However, phage have more ready access to, and are therefore more likely to predate on, cells located at the biomass periphery, which are the cells undergoing the most rapid spatial demixing (Fig. 1a). We therefore expect phage predation to slow spatial demixing, preserve more cell-cell contacts between different microbial populations, and promote plasmid transfer between them. Stated alternatively, we expect phage predation to hinder any one microbial population from dominating the biomass periphery and consequently to increase spatial intermixing and plasmid transfer (Fig. 1a).
Fig. 1 | Hypothesis and experimental system. a We hypothesize that phage predation slows the spatial demixing of different microbial populations during surface-associated growth, consequently increasing plasmid transfer between them. Phage will preferentially predate on those cells located at the biomass periphery, which are the cells undergoing the most rapid spatial demixing, thus maintaining more spatial intermixing and enhancing plasmid transfer. b The biological components of our experimental system. The plasmid donor (designated the R388 donor) expresses GFP from the chromosome and CFP from R388 and appears cyan. The potential recipient expresses RFP from the chromosome. If R388 successfully transfers from the R388 donor to the potential recipient, the potential recipient will express RFP from the chromosome and CFP from R388 and appear magenta. c Our experimental approach is to grow the R388 donor and potential recipient together on a nutrient-rich surface in the presence or absence of the lytic phage T6 in anoxic conditions and quantify the effects of phage predation on the emergence of transconjugants.

To test our hypothesis, we performed surface-associated growth experiments with pairs of competing strains of the bacterium Escherichia coli in the presence or absence of phage. The strains can engage in the conjugation-mediated transfer of plasmid R388, which is self-transmissible and encodes cyan fluorescent protein (CFP) and resistance to chloramphenicol (Fig. 1b) [40]. We refer to one strain as the R388 donor and the other as the potential recipient (Fig. 1b). We mix the strains together, grow them across nutrient-rich surfaces, infect them with the T6 lytic phage [41], and track the extent of R388 transfer using confocal laser-scanning microscopy (CLSM) (Fig. 1c). We complement our experiments with individual-based computational simulations that test how the peripheral killing caused by phage predation can reshape microbial spatial organization and increase plasmid transfer during surface-associated growth.

Phage predation increases R388 spread during surface-associated microbial growth

We first quantified the effect of phage predation on the spread of R388 as the R388 donor and potential recipient grow together across a nutrient-amended agar surface. We find that phage predation increases the spread of R388 even in the absence of positive selection for R388 in the form of added chloramphenicol (Fig. 2a, b). This is supported by four lines of evidence. First, the number of spatially discrete transconjugant regions (i.e., sectors composed of cells expressing CFP and RFP and appearing magenta) is larger when phage are present (two-sample two-sided Welch test; P = 5.2 × 10⁻⁸, n = 5) (Fig. 2c). Second, transconjugants comprise a greater proportion of the total biomass when phage are present (two-sample two-sided Welch test; P = 6.0 × 10⁻⁸, n = 5) (Fig. 2d). Third, the total number of transconjugants is larger when phage are present (two-sample two-sided Welch test; P = 7.6 × 10⁻⁶, n = 5) (Fig. 2e). This is true even though phage predation reduces the total biomass size (two-sample two-sided Welch test; P = 1.8 × 10⁻⁶, n = 5) (Fig. 2f). Finally, our outcomes remain valid when we calculate the number of transconjugants at a fixed radial distance across all of our samples (Supplementary Fig. 1).
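As an aside for readers reproducing this kind of analysis: the two-sample two-sided Welch test used throughout is the unequal-variance t-test available in standard statistics libraries. A minimal sketch follows, with made-up replicate values (n = 5 per group, as in the experiments); the numbers are placeholders, not the study's data.

```python
from scipy import stats

# Hypothetical replicate measurements (e.g., transconjugant counts per colony);
# placeholders only, not values from the study.
with_phage = [412, 389, 455, 430, 401]
without_phage = [120, 98, 134, 110, 125]

# Welch's t-test: two-sample, two-sided, not assuming equal variances.
result = stats.ttest_ind(with_phage, without_phage, equal_var=False)
print(f"t = {result.statistic:.2f}, P = {result.pvalue:.2e}")
```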
Overall, nearly all of the potential recipients receive R388 when phage are present (Fig. 2a), despite the fact that transconjugants grow slower than their R388-free counterparts (R388 reduces the growth rate by ~5%) [42]. In contrast, only those potential recipients lying at the interfaces with the R388 donor typically receive R388 when phage are absent (Fig. 2b). Thus, when phage are present, the production of transconjugants exceeds their removal via phage predation and outcompetition by R388-free counterparts. These outcomes remain valid when we reduce the initial relative abundance of the R388 donor by up to eightfold (Supplementary Fig. 2), demonstrating that only a few R388 donors are needed for the positive effect of phage predation on R388 transfer to manifest. They also remain valid in oxic conditions (Supplementary Fig. 3a, b) and when we reduce the nutrient concentration by 90% (Supplementary Fig. 3c, d).

Natural transformation and transduction are not important R388 transfer mechanisms

We next tested whether conjugation-independent mechanisms of horizontal gene transfer (i.e., natural transformation and transduction) can explain our results, as E. coli can acquire plasmid-encoded genes via conjugation-independent mechanisms [43]. To test this, we prepared a phage-induced lysate of the R388 donor and applied it to the potential recipient during surface-associated growth. We concurrently applied heat-inactivated phage to test for natural transformation, or viable phage to test for transduction. For both treatments, CFP is undetectable throughout the entire biomass (Supplementary Fig. 4a), which is expected if natural transformation and transduction are negligible. We then suspended the biomass that received the lysate and streaked it onto chloramphenicol-amended agar plates. We do not observe any growth regardless of whether we apply heat-inactivated or viable phage, which is again expected if natural transformation and transduction are negligible. Finally, we repeated the experiments with DNase I to degrade free DNA. We do not find statistically significant evidence that DNase I affects the number of transconjugants (two-sample two-sided Welch test; P = 0.38, n = 5) (Supplementary Fig. 4b), which is expected if natural transformation is negligible. Taken together, our data establish that phage predation does not promote the transformation or transduction of R388 or its associated genes, and that conjugation-independent mechanisms therefore cannot explain our results.

Phage predation increases R388 transfer by slowing spatial demixing

Our hypothesis posits that phage predation slows the spatial demixing of different microbial populations along the biomass periphery (Fig. 1a); demixing in turn decreases the number of cell-cell contacts and the extent of R388 transfer between them. To test this, we quantified the effect of phage on the magnitude of spatial intermixing between the R388 donor and the potential recipient. We use an intermixing index that quantifies the number of transitions between the two strains (colors), adjusted for the anticipated number of transitions for a random spatial arrangement of the two strains. We first position a circle with its center located at the centroid of the biomass. We then calculate the intermixing index along the circumference of the circle as I_r = 2N_r/(πr), where N_r is the number of transitions between the strains along the circumference and r is the radius of the circle. This allows us to quantify the intermixing index as a function of the radial extent of growth.
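To make the intermixing calculation concrete, here is a minimal sketch of how I_r could be computed from a segmented image in which each pixel carries a strain label. The image representation, the label values, and the angular sampling of the circle are assumptions made for illustration; the published analysis pipeline may differ.

```python
import numpy as np

def intermixing_index(labels, center, r, n_samples=3600):
    """Estimate I_r = 2*N_r / (pi*r) along a circle of radius r.

    labels: 2D integer array, one strain label per pixel (e.g., 1 = donor,
            2 = recipient); center: (row, col) centroid of the biomass.
    N_r is the number of strain-to-strain transitions encountered while
    walking once around the circle. All conventions here are illustrative.
    """
    angles = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    rows = np.round(center[0] + r * np.sin(angles)).astype(int)
    cols = np.round(center[1] + r * np.cos(angles)).astype(int)
    ring = labels[rows % labels.shape[0], cols % labels.shape[1]]
    # Count transitions between consecutive samples, including the wrap-around.
    n_transitions = np.count_nonzero(ring != np.roll(ring, 1))
    return 2.0 * n_transitions / (np.pi * r)

# Hypothetical example: a mock colony divided into 8 alternating sectors
yy, xx = np.mgrid[0:200, 0:200]
theta = np.arctan2(yy - 100, xx - 100)
mock = (np.floor((theta + np.pi) / (2 * np.pi / 8)) % 2 + 1).astype(int)
print(round(intermixing_index(mock, (100, 100), r=50.0), 3))
```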
To provide further evidence that phage predation slows spatial demixing, we performed an experiment where we inoculate the phage at a distance of 1 mm from where we inoculate the mixture of the R388 donor and potential recipient on the nutrient-amended agar surface. The phage must therefore diffuse across the surface to predate on their host. We expect that the side of the biomass facing towards the phage inoculation area is exposed to more phage, and thus has higher spatial intermixing and more extensive R388 transfer, than the side facing away. Indeed, we observe higher spatial intermixing on the side facing towards the phage inoculation area (two-sample two-sided Welch test; P = 0.019, n = 3) (Fig. 3a, c). We also observe significantly more transconjugant regions on the side facing towards the phage inoculation area (two-sample two-sided Welch test; P = 7.3 × 10⁻⁵, n = 3) (Fig. 3a, d), which is expected if the extent of spatial intermixing determines the extent of R388 transfer.

Finally, we performed a third experiment where we reduce the dosage of phage applied to the biomass of the R388 donor and potential recipient (Supplementary Fig. 6). We expect that at a sufficiently low phage dosage, phage predation will be patchy across the biomass. This should generate correlated variance in the magnitude of spatial intermixing and transconjugant proliferation along the biomass periphery, where local regions with higher phage predation have higher spatial intermixing and more transconjugants. This is indeed what we observe (Supplementary Fig. 6).

Phage predation slows spatial demixing by reshaping spatial organization

When analyzing the surface-associated growth experiments, it is evident that the interfaces between the R388 donor and potential recipient are straighter when phage are present (two-sample two-sided Welch test; P = 0.005, n = 5) (Fig. 2h). This is particularly evident when we inoculate the phage at a distal location, where the interfaces become significantly straighter immediately after contact with phage (Fig. 3b) and are straighter on the side of the biomass facing towards the phage inoculation area (two-sample two-sided Welch test; P = 0.0008, n = 5) (Fig. 3e). Straighter interfaces are less likely to merge together during surface-associated growth28, thus slowing the spatial demixing of different microbial populations, maintaining more cell-cell contacts, and promoting plasmid transfer between them.

Why does phage predation cause the formation of straighter interfaces? We hypothesize that phage predation shifts the location of fastest growth from the biomass periphery to the interior. In the absence of phage, cells at the biomass periphery grow the fastest due to their preferential access to resources replenished from the environment (Fig. 4a). In the presence of phage, cells at the biomass periphery are preferentially predated on, thus shifting the location of fastest growth to the interior (Fig. 4a). Importantly, the biomass periphery has lower cell packing densities and cells are aligned further from parallel (Fig. 4b), and we therefore expect peripheral growth to result in more meandering interfaces. In contrast, the interior has higher cell packing densities and cells are aligned closer to parallel (Fig. 4b), and we therefore expect interior growth to result in straighter interfaces.
To test this, we modified and employed an individual-based computational model where we assume that phage predation results in peripheral killing of the bacterial biomass. We implemented this approach because a prior study that simulated individual phage particles found that phage predation results in peripheral killing, where the depth of killing is related to population sizes and phage properties31. Using this approach, we find that peripheral killing accurately reproduces the effects that we observe in our experiments. When peripheral killing is absent in our simulations, cells at the biomass periphery grow the fastest. These cells are aligned further from parallel and form meandering interfaces that frequently merge together during surface-associated growth, resulting in rapid spatial demixing (Fig. 4c and Supplementary Video 1). In contrast, when peripheral killing is present in our simulations, cells at the biomass periphery are continuously removed, which causes cells in the interior to grow the fastest (Fig. 4d and Supplementary Video 1). These cells are aligned closer to parallel, which results in straighter interfaces (two-sample two-sided Welch test; P = 2.3 × 10⁻⁴, n = 5) (Fig. 4f) and higher spatial intermixing (two-sample two-sided Welch test; P = 2.2 × 10⁻¹⁶, n = 5) (Fig. 4g). Indeed, we find that cells are aligned close to parallel during phage predation in our experiments (Supplementary Fig. 7).

To further test this mechanism, we performed additional simulations without peripheral killing but where we prevent the two layers of cells at the biomass periphery from growing; we therefore directly set the location of fastest growth to the interior (Fig. 4e). This results in interfaces with straightness comparable to those observed with phage (mean interface straightness for interior growth = 0.993, SD = 0.0009; mean interface straightness with phage = 0.995, SD = 0.0003) (Fig. 4f and Supplementary Video 1). The magnitude of spatial intermixing is also comparable to that observed with phage (mean spatial intermixing for interior growth = −0.47, SD = 0.04; mean spatial intermixing with phage = −0.42, SD = 0.03) (Fig. 4g). Taken together, our simulations demonstrate that the key ingredient for the formation of straighter interfaces is to shift the location of fastest growth from the biomass periphery to the interior, where cells have higher packing densities and are aligned closer to parallel, which reduces the probability that neighboring interfaces will merge together during surface-associated growth and slows spatial demixing.

Peripheral killing lowers the conjugation rate needed to compensate for plasmid cost

Because conjugation-mediated plasmid transfer requires direct contact between a plasmid donor and a potential recipient cell, we expect transconjugants to emerge along the interfaces between those microbial populations. To test this, we integrated a plasmid transfer module into our individual-based computational model that sets a defined probability of plasmid transfer when a plasmid donor and a potential recipient cell come into physical contact. When peripheral killing is absent in our simulations, transconjugants exclusively localize along the interfaces between plasmid donor and potential recipient regions and do not substantially proliferate (Fig. 5a and Supplementary Video 2), which is consistent with our experimental results (Fig. 2b). When peripheral killing is present in our simulations, transconjugants are not confined to the interfaces but instead proliferate and eventually displace potential recipients (Fig. 5b and Supplementary Video 3), which is again consistent with our experimental results (Fig. 2a).
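The transfer module reduces to a simple per-contact stochastic rule. A minimal sketch of that rule, with `cells` and `neighbours_of` as hypothetical stand-ins for the simulator's state and contact detection (the actual implementation uses CellModeller's CompNeighbours function, described in the Methods):

```python
import random

TRANSFER_PROB = 0.001   # probability of transfer per contact per time step
PLASMID_COST = 0.05     # the plasmid reduces the growth rate by ~5%

def apply_conjugation(cells, neighbours_of):
    """Contact-based plasmid transfer: every plasmid-bearing cell may pass
    the plasmid to each plasmid-free neighbour it physically touches."""
    for cell in cells:
        if not cell.has_plasmid:
            continue
        for nb in neighbours_of(cell):
            if not nb.has_plasmid and random.random() < TRANSFER_PROB:
                nb.has_plasmid = True                      # now a transconjugant
                nb.growth_rate *= (1.0 - PLASMID_COST)     # inherits the cost
```

Sweeping TRANSFER_PROB over the range used in Fig. 5 (0.0001 to 0.001) reproduces the qualitative behavior discussed next: with peripheral killing, even low transfer probabilities let transconjugants proliferate.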
Why do transconjugants proliferate when phage are present even though the plasmid reduces the growth rate? We hypothesize that the plasmid transfer probability and the large number of cell-cell contacts created by phage predation are sufficiently high to counteract the effects of out-competition by faster growing plasmid-free counterparts. To test this, we varied the plasmid transfer probability in our model between 0.0001 and 0.001. When peripheral killing is absent in our simulations, the number of transconjugants consistently increases as the plasmid transfer probability increases (Fig. 5c, d and Supplementary Fig. 8b). When peripheral killing is present in our simulations, there are always significantly more transconjugants regardless of the plasmid transfer probability (two-sample two-sided Welch tests; P < 2.2 × 10⁻¹⁶, n = 5) (Fig. 5c, d and Supplementary Fig. 8). Moreover, once the plasmid transfer probability exceeds 0.0003, nearly all of the potential recipients receive the plasmid and the frequency of transconjugants approaches 0.5 within the entire biomass (Fig. 5c, d and Supplementary Fig. 8). Thus, when peripheral killing is present, a relatively low plasmid transfer probability can result in significant proliferation of transconjugants even though they grow slower than their plasmid-free counterparts.

Discussion

Our findings identify a new consequence of phage-microbe interactions: phage predation can promote the conjugation-mediated transfer and proliferation of plasmids during surface-associated microbial growth by slowing the spatial demixing of different microbial populations. Unexpectedly, while phage predation decreases the total microbial biomass size, it can simultaneously increase the total number of transconjugants even though carrying the plasmid slows growth (Fig. 2).
Our results therefore challenge the idea that predatory interactions inhibit plasmid transfer by reducing population sizes44-46. For example, phage predation can slow plasmid spread by creating an additional death rate44, by specifically targeting cells that express plasmid-encoded traits45, and by modifying selection pressures that limit plasmid spread46. Instead, we show that phage predation can increase the functional repertoires of microbial ecosystems even in the absence of positive selection for those functions. Such insights are important for understanding how functions that are deleterious for human and environmental health, such as antibiotic resistance and virulence, can persist within microbial ecosystems.

The underlying mechanism is that phage predation creates straighter interfaces between different microbial populations during surface-associated growth. This slows spatial demixing, increases cell-cell contacts, and promotes conjugation-mediated plasmid transfer. Straighter interfaces form because phage predation shifts the location of fastest microbial growth from the biomass periphery to the interior, where cells have higher packing densities and are aligned closer to parallel. In contrast, cells at the biomass periphery have lower packing densities and are aligned further from parallel, which causes meandering interfaces to form that are more likely to merge together during growth and reduce spatial intermixing28,47. We believe this mechanism is not restricted to phage predation; rather, it should be applicable to any type of predation or inhibition that targets cells at the biomass periphery and shifts the location of fastest microbial growth to the interior.
We expect the straighter interfaces formed by phage predation to have consequences that extend well beyond plasmid transfer. Cell-cell contacts are important for the manifestation and operation of many microbial functions and traits (e.g., metabolic cross-feeding, contact-dependent killing, etc.). For example, in marine particle-degrading communities, different bacterial populations coexist on particle surfaces and cross-feed metabolites resulting from the degradation of the particles48. The efficiency of such cross-feeding should improve with the closer spatial positioning of cross-feeding microorganisms, as this reduces the loss of cross-fed metabolites into the environment. A similar phenomenon occurs in soil communities that decompose organic matter49. We argue here that phage predation can result in closer spatial positioning of cross-feeding microorganisms and enable more efficient and prolonged ecosystem functioning, thus providing a new perspective on how phage indirectly contribute to elemental cycling.

One limitation of our study is that we only investigate a scenario where the competing strains are equally susceptible to phage predation. If one of the strains were resistant or less sensitive to phage predation, we expect the effect size to reduce. During surface-associated growth, the phage-susceptible strain will be predated on, which will cause the phage-resistant strain to increase in frequency and eventually displace the phage-susceptible strain along the biomass periphery where resources are readily available. This will eliminate intermixing between the competing strains and consequently also eliminate plasmid transfer between them. If positive interactions and/or obligate dependencies were to occur between the strains, however, then the dynamics could be far more complex and lead to non-trivial outcomes.

We believe that our results have immediate implications for phage therapy. Phage therapy holds promise as a strategy to combat microbial infections without using chemical antibiotics, and is therefore typically assumed not to contribute to the spread of antibiotic resistance50-52. However, our results raise the concern that the use of predatory phage can inadvertently increase the spatial intermixing of microbial populations and facilitate contact-dependent mechanisms for the horizontal transfer of antibiotic resistance determinants. As phage therapy gains traction, we need to recognize that phage predation can have unexpected consequences on microbial community dynamics, functioning, and evolution through its effects on microbial spatial self-organization.

In conclusion, our study reveals the intricate interplay between phage predation, microbial spatial self-organization, and plasmid transfer. This underscores the multifaceted nature of microbial ecosystems and necessitates a rethinking of the consequences of phage predation on microbial evolution. The pronounced effect of phage predation on plasmid transfer might be pivotal for understanding microbial adaptability in diverse environments, especially against challenges such as the spread of antibiotic resistance. Such insights are likely instrumental in shaping microbial management strategies, biotechnological endeavors, and therapeutic interventions.

Methods

Strains and culture conditions

We performed all experiments with E. coli strains TB204 and TB205 (ref. 53), which are isogenic mutants derived from E. coli strain MG1655.
Strain TB204 (MG1655 attP21::PR-sfgfp) expresses GFP while strain TB205 (MG1655 attP21::PR-mcherry) expresses mCherry from the lambda promoter (PR)54 located on the chromosome. We introduced the self-transmissible conjugative plasmid R388 (R388 parS1-Cm), which encodes for CFP and chloramphenicol resistance40,55,56, into strain TB204 via conjugation from E. coli strain DH5α using conventional filter mating on agar plates. We refer to strain TB204 carrying R388 as the R388 donor, strain TB205 as the potential recipient, and strain TB205 that receives R388 from the R388 donor as a transconjugant. We routinely cultured all strains in liquid lysogeny broth (LB) medium at 37 °C with shaking at 150 rpm. To avoid the proliferation of R388 segregants of the R388 donor, we supplemented the LB medium with 25 μg/mL chloramphenicol. For long-term preservation, we archived all strains in 15% (v/v) glycerol stocks at −80 °C. Prior to each experiment, we cultured each strain individually overnight by streaking the corresponding −80 °C stock onto an LB agar plate and used a single colony for inoculation of liquid cultures.

We used the lytic phage T6 (ref. 41) for all experiments. To culture the phage, we used strain TB204 that is free of R388 as the host and incubated the culture at 37 °C for 4 h with shaking at 150 rpm. We then obtained purified phage by filtering the culture through a 0.22-µm membrane, collecting the supernatant, and storing the phage-containing supernatant at 4 °C prior to further use. For long-term storage, we mixed equal volumes of strain TB204 and phage suspensions, incubated them with shaking for 10 min at 150 rpm, and archived them in 15% (v/v) glycerol stocks at −80 °C.

Surface-associated growth experiments

We used LB agar plates for all of our surface-associated growth experiments. We prepared agar plates by combining 25 g/L LB and 10 g/L bacteriology-grade agar powder (AppliChem, Darmstadt, Germany) with 1000 mL of distilled water and added an additional 5 mM sodium nitrate to improve growth in anoxic conditions. We then autoclaved the medium at 121 °C for 20 min, let the medium cool to 70 °C, dispensed 10 mL aliquots of the medium into sterile petri dishes with a diameter of 3.5 cm, and allowed the medium to solidify at room temperature for 2 h. Once solidified, we transferred the agar plates to a sterile hood, dried them with the lids open for 10 min, covered them with their respective lids, sealed them with Parafilm (Amcor, Zürich, Switzerland), and stored them at 4 °C until further use.

To prepare the bacterial cultures for the experiments, we first cultivated the R388 donor and potential recipient individually overnight in LB medium. We then diluted the overnight cultures by 100-fold (vol:vol) in fresh LB medium and incubated the dilution at 37 °C with shaking at 150 rpm for 4 h to ensure the cells were in the logarithmic growth phase. Thereafter, we adjusted the optical density at 600 nm (OD600) of each culture to one in a volume of 1 mL, corresponding to ~10⁸ colony forming units (CFU)/mL. We then washed the cells three times by centrifugation at 3600×g and 4 °C for 10 min and resuspended the washed cells in 1 mL of phosphate-buffered saline (PBS).
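The OD adjustment is a simple proportional dilution. A small sketch, assuming OD600 scales linearly with cell density over this range (a standard approximation; the measured value below is illustrative):

```python
def culture_volume_for_target_od(od_measured, od_target=1.0, v_final_ml=1.0):
    """Volume of culture (mL) to bring v_final_ml of suspension to od_target,
    assuming OD600 is proportional to cell density."""
    return v_final_ml * od_target / od_measured

v_culture = culture_volume_for_target_od(od_measured=2.5)  # hypothetical reading
print(f"mix {v_culture:.2f} mL culture with {1.0 - v_culture:.2f} mL diluent")
```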
To prepare the phage for the experiments, we mixed the refrigerated phage stock with TB204 and incubated the mixture in a shaker at 37 °C and 150 rpm for 4 h. After incubation, we removed the TB204 cells from the solution by filtration through a 0.22-µm membrane to obtain a cell-free and active phage solution. We determined the concentration of the phage using the double-layer plate method57 and diluted the phage solution to a concentration of 10⁸ plaque-forming units (PFU)/mL in PBS.

For the surface-associated growth experiments, we mixed the R388 donor and potential recipient at a 1:1 ratio (cell number:cell number) and deposited single 1 µl droplets onto the centers of separate LB agar plates. After allowing the bacteria to grow for 6 h, we added a 1 µl droplet of the phage solution or PBS (control) to the bacteria and incubated the plates in anoxic conditions for 10 days at 22 °C. To impose anoxic conditions, we incubated the plates in a glove box (Coy Laboratory Products, Grass Lake, MI, USA) containing a nitrogen:hydrogen (97:3) atmosphere. We performed ten replicates for each treatment and randomly selected five replicates that did not contain any phage-resistant mutants for subsequent imaging and quantitative analysis. Thus, we excluded the evolution of phage resistance during the time-course of the experiment, which typically occurred in one to three of the ten replicates per treatment under our experimental conditions. We provide an example of a replicate that we excluded due to the emergence of phage-resistant mutants in Supplementary Fig. 9.

Confocal laser-scanning microscopy and image analysis

We imaged the biomass at the end of the surface-associated growth experiments using a Leica TCS SP5 II CLSM (Leica Microsystems, Wetzlar, Germany) equipped with a 5× HCX FL objective, a numerical aperture of 0.12, and a frame size of 1024 × 1024 (resulting in a pixel size of 3.027 µm). We set the laser emission to 458 nm for the excitation of CFP, to 488 nm for the excitation of GFP, and to 514 nm for the excitation of RFP. We set the emission filter to 469-489 nm for CFP, to 519-551 nm for GFP, and to 601-650 nm for RFP.

We quantified spatial intermixing using a well-established intermixing index35,37,58,59. We drew a circle at a specific radial distance from the inoculation droplet periphery, quantified the number of color transitions (each strain expresses a different fluorescent protein) along the circumference of the circle, and normalized the number of color transitions by the circle's circumference. We repeated this process for various radii to obtain spatial intermixing as a function of radial growth. We used Fiji (https://fiji.sc) to perform these analyses. We first used the "Mean" algorithm to threshold and binarize each image and the "remove outliers" algorithm (radius = 2, threshold = 50, bright) to eliminate noise. We next used the Sholl plugin60 of ImageJ to apply a concentric windowing method from the inoculation droplet periphery to the final biomass periphery at a radial increment of 5 µm. We then calculated the intermixing index (I_r) for each concentric window as the number of color transitions (N_r) divided by the expected number of color transitions for a random spatial distribution of two strains at a specific radius (r), using Eq. (1):

I_r = 2N_r/(πr)    (1)
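The concentric windowing can be sketched by evaluating the `intermixing_index` helper shown earlier at 5 µm radial increments; the pixel size of 3.027 µm is the one stated for the CLSM images, while the start and stop radii are whatever the droplet and final biomass peripheries are in a given image:

```python
import numpy as np

PIXEL_SIZE_UM = 3.027   # µm per pixel at the stated CLSM settings

def radial_intermixing_profile(labels, center, r_start_um, r_stop_um, step_um=5.0):
    """I_r evaluated on concentric circles from the inoculation droplet
    periphery (r_start_um) to the final biomass periphery (r_stop_um)."""
    profile = []
    for r_um in np.arange(r_start_um, r_stop_um, step_um):
        r_px = r_um / PIXEL_SIZE_UM   # the helper works in pixel units
        profile.append((r_um, intermixing_index(labels, center, r_px)))
    return profile
```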
Scanning electron microscopy

To perform SEM, we first exposed the biomass on the agar plates to 5% glutaraldehyde vapors and then to 2% osmium tetroxide vapors. After fixation, we cut the biomass out of the agar plate and immersed it in 30% ethanol followed by further incubations in 50%, 70%, 90%, and 100% ethanol. After several incubations in absolute-grade ethanol, we placed the biomass samples in metal capsules and exchanged the ethanol with liquid CO₂ using a CPD 931 critical point dryer (Tousimis Research, Rockville, MD, USA). We next raised the temperature and pressure above the critical point of CO₂, released the pressure, and mounted the biomass samples onto SEM stubs with conductive carbon cement. Finally, we sputter coated the biomass samples with 4 nm Pt/Pd with planetary rotation using a CCU-010 vacuum coating system (Safematic, Zizers, Switzerland) and imaged the samples with a TFS Magellan 400 SEM (Thermo Fisher Scientific, Waltham, United States) at 2 kV and secondary electron detection.

Individual-based computational modeling

In our simulations, we represent bacterial cells as three-dimensional capsules of length L that grow uniaxially. After reaching a specified target length, each cell divides into two daughter cells that inherit the properties of their parent. We set the biophysical parameter "gamma", which adjusts the ratio of the drag force on cell translation relative to cell growth, to 20. This parameter affects the rate at which cells stop growing due to constraints from physical forces. During each time step, each cell expends energy to grow and displace neighboring cells. If surrounding cells significantly impede this process, the cell stops growing.

To simulate E. coli cells, we set the initial cell shape to a radius of 0.5 µm and a length of 2 µm. Each cell divides into two when its length reaches a random value between 3.5 and 4.0 µm. We set the cell growth rate to 1, which means that each cell would ideally grow 1 micron per unit time step in the absence of physical constraints. We began the simulations with 600 cells of each type (1200 cells in total). We restricted the growth region such that cells only grow in the space from −100 units to 100 units along the x-axis and, along the y-axis, from 0 units upwards, which means that cells only grow in the positive direction along the y-axis once they have filled the initial space. We assigned the initial position of each cell randomly within the specified 2D space, with its x-coordinate in the range [−100, 100] and y-coordinate in the range [0, 30]. We assigned the initial rotation of each cell randomly, with the x and y components of the direction vector within the range [−1, 1].

To simulate conjugation-mediated plasmid transfer, we used the CompNeighbours function in CellModeller to detect cell contacts and calculate the probability of successful plasmid transfer after contact, using a constant probability of 0.001 to 0.01 per simulation timestep. If successful, the potential recipient cell becomes a transconjugant cell and changes color to blue. The new transconjugant cell can then conjugate with a new potential recipient cell. We set the specific growth rate of plasmid donor and transconjugant cells to 0.95 to account for the cost of carrying the plasmid42. As we did not use antibiotics or apply any positive selection for the plasmid, there was never an advantage to carrying the plasmid.
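A stripped-down sketch of the growth-division rule described above, ignoring the capsule mechanics, physical constraints, and the drag parameter gamma, which the full CellModeller simulations handle:

```python
import random

GROWTH_RATE = 1.0            # ideal elongation in µm per time step
DIV_MIN, DIV_MAX = 3.5, 4.0  # division length drawn uniformly from this range (µm)

class Cell:
    def __init__(self, length=2.0, cell_type=0, growth_rate=GROWTH_RATE):
        self.length = length
        self.cell_type = cell_type
        self.growth_rate = growth_rate
        self.target_length = random.uniform(DIV_MIN, DIV_MAX)

def grow_and_divide(cells):
    """Elongate every cell; cells that reach their target length split into
    two daughters that inherit the parent's type and growth rate."""
    next_generation = []
    for cell in cells:
        cell.length += cell.growth_rate
        if cell.length >= cell.target_length:
            for _ in range(2):
                next_generation.append(Cell(length=cell.length / 2.0,
                                            cell_type=cell.cell_type,
                                            growth_rate=cell.growth_rate))
        else:
            next_generation.append(cell)
    return next_generation
```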
To investigate the phenomenological effect of phage predation on surface-associated growth and plasmid transfer, we used the killflag feature in CellModeller. This feature allows the removal of cells from the model and subsequently updates the list of cell states, keeping only live cells. Rather than directly simulating the complex biophysical process of phage replication and predation, we represent the phage as a signal that initiates cell removal. Hence, we do not explicitly simulate phage particles; rather, we simulate the expected phage-induced killing of cells located at the biomass periphery, which is based on prior work that simulates individual phage particles31. During the simulation, as the biomass grows along the y-axis, we extracted the y-axis values of all cell locations. We then identified the cell with the largest y-axis value as the peripheral cell and defined the phage-infection region as those cells within 4 µm of the peripheral cell. Finally, we eliminated cells within this region at a specific frequency to simulate the effect of phage predation.

We performed all simulations using a 2021 Windows system ThinkPad laptop computer with concurrent simulations distributed between a Platform Intel(R) 2.80 GHz Core(TM) i7-1165G7 OpenCL HD Graphics and device Intel(R) Iris(R) Xe Graphics. We stored the status and spatial location of each cell every 20 time-steps, and we performed simulations until reaching 18,000 cells. We launched the Graphical User Interface (GUI) by running a written script and saved the Pickle data from the GUI. We performed each simulation five independent times. All parameters are listed in Supplementary Table 1.

Intermixing index of computational modeling simulations

We calculated the intermixing index in the simulations (denoted as I_simulation) using Eq. (2):

I_simulation = (1/N) Σᵢ I(i)    (2)

where N is the total number of cells. For an individual cell i, we iterated across all neighboring cells j that are in physical contact with it, using Eqs. (3) and (4):

I(i) = (1/n(i)) Σⱼ I(i,j)    (3)

I(i,j) = +1 if cells i and j have distinct cell types, and −1 if they have identical cell types    (4)

where n(i) is the number of cells neighboring cell i. This methodology aligns with the spatial assortment parameter and segregation index63. Regions dominated by isogenic cell clusters, where the majority of cells are adjacent to those of the same type, have a negative intermixing index. Conversely, regions with frequent intermixing between distinct types have a positive intermixing index (I_simulation).
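A minimal sketch of the two simulation-side computations just described: the peripheral-killing rule (remove every cell within a fixed depth of the front-most cell, at some interval) and I_simulation from Eqs. (2)-(4). The kill interval is a free parameter (the Methods only state that cells are eliminated "at a specific frequency"), and `cell.y`, `cell.cell_type`, and `neighbours_of` are hypothetical stand-ins for the simulator's state and contact query (CompNeighbours in CellModeller):

```python
KILL_DEPTH_UM = 4.0   # cells within 4 µm of the front-most cell are exposed
KILL_INTERVAL = 10    # apply the rule every N time steps (hypothetical value)

def apply_peripheral_killing(cells, step):
    """Remove cells in the phage-infection region at the biomass periphery.

    The biomass expands along +y, so the cell with the largest y defines the
    periphery; every cell within KILL_DEPTH_UM of it is eliminated."""
    if step % KILL_INTERVAL != 0 or not cells:
        return cells
    y_front = max(cell.y for cell in cells)
    return [cell for cell in cells if cell.y < y_front - KILL_DEPTH_UM]

def intermixing_index_simulation(cells, neighbours_of):
    """I_simulation per Eqs. (2)-(4): +1 for each touching neighbour of a
    different type, -1 for each of the same type, averaged per cell and then
    over all cells. Cells without contacts are skipped (a modeling choice)."""
    scores = []
    for cell in cells:
        contacts = neighbours_of(cell)
        if not contacts:
            continue
        score = sum(1 if nb.cell_type != cell.cell_type else -1
                    for nb in contacts) / len(contacts)
        scores.append(score)
    return sum(scores) / len(scores) if scores else 0.0
```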
Quantification of interface straightness

We calculated the interface straightness for both the experiments and simulations using Python 3.11. We first used the Canny edge detection method64 to identify the interfaces, using a lower threshold of 100 and an upper threshold of 200. To focus the analysis on specific regions of the images, we defined an annular (ring-shaped) region on each image, where we positioned the center of the annulus at the geometric center of each image. We then extracted the interfaces within the annular region using the findContours function from the OpenCV library and only considered those that touched both the interior and peripheral boundaries of the annular region. We used a tolerance of five pixels to account for minor variations. We then quantified the straightness of the interfaces as follows: for each interface, we drew a straight line connecting the beginning and end points. We then calculated the average perpendicular distance of all points on the interface to the straight line connecting the beginning and end points using Eq. (5). In Eq. (5), S is the straightness, d_max is the distance between the beginning and end points on the line, and d_avg is the average perpendicular distance of all points on the interface to the straight line connecting the beginning and end points. This returns a value between 0 and 1, where a value closer to 1 indicates that the interface is straighter.

Statistics and reproducibility

We performed all statistics in R (v4.1.2) (https://cran.r-project.org). We used two-sample two-sided Welch tests to test for differences between means; we therefore did not make any assumptions regarding the homoscedasticity of our datasets. We used the Holm-Bonferroni method to adjust P values for multiple comparisons. We used the Shapiro-Wilk test (significance level of 0.05) to test whether our datasets deviate from normality, and we did not observe any significant deviations from normality for any of our datasets. We reported the statistical test, P value, and sample size (n) for each test in "Results".

Reporting summary

Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
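The body of Eq. (5) did not survive extraction, so the sketch below assumes S = d_max / (d_max + d_avg), one form consistent with the stated properties (bounded by 0 and 1, and equal to 1 for a perfectly straight interface); the published expression may differ. Interfaces are assumed to arrive as (N, 2) point arrays, e.g., contours produced by cv2.Canny and cv2.findContours as in the Methods:

```python
import numpy as np

def interface_straightness(points):
    """Straightness of an interface given as an (N, 2) array of (x, y) points.

    d_max : distance between the interface's beginning and end points
    d_avg : mean perpendicular distance of all points to the chord joining them
    Assumes S = d_max / (d_max + d_avg); S -> 1 for a straight interface and
    decreases toward 0 as the interface meanders.
    """
    points = np.asarray(points, dtype=float)
    p0, p1 = points[0], points[-1]
    chord = p1 - p0
    d_max = np.linalg.norm(chord)
    normal = np.array([-chord[1], chord[0]]) / d_max   # unit normal to the chord
    d_avg = np.mean(np.abs((points - p0) @ normal))
    return d_max / (d_max + d_avg)
```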
Fig. 2 | Surface-associated growth experiments with the direct amendment of phage. a, b Representative CLSM images (n = 5) of the R388 donor and potential recipient after ten days of growth in anoxic conditions in the (a) presence or (b) absence of phage, where we added the phage directly to the growing biomass. The R388 donor expresses GFP and CFP and appears cyan, the potential recipient expresses RFP and appears red, and transconjugants express RFP and CFP and appear magenta. The images on the left are magnifications of the regions in the white dashed boxes in the images on the right. c The number of discrete transconjugant regions (magenta sectors). d The total transconjugant area divided by the total biomass area. e The total number of transconjugant cells quantified by colony counting on chloramphenicol-amended agar plates. f The biomass diameter. g The intermixing index quantified across a 30 µm thick band positioned at the biomass periphery. h The mean interface straightness. c-h Each data point is a measurement for an independent experimental replicate (n = 5) and the P values are for two-sample two-sided Welch tests. Gray symbols are in the absence of phage and black symbols are in the presence of phage. Source data are provided as a Source Data file.

Fig. 3 | Surface-associated growth experiments with the distal amendment of phage. a Representative CLSM image (n = 3) of the R388 donor and potential recipient after ten days of growth in anoxic conditions, where we added the phage at a distance of 1 mm in the positive y-direction from the centroid of the bacterial inoculation droplet. The R388 donor expresses GFP and CFP and appears cyan, the potential recipient expresses RFP and appears red, and transconjugants express RFP and CFP and appear magenta. The images on the left are magnifications of the regions in the white dashed boxes in the image on the right indicated by matching numbers. b Enlarged view of box 3 in (a) showing the change in interface straightness after phage encounter. The yellow line indicates the point in time when the phage made contact with the biomass. c The intermixing index at the biomass periphery. d The number of discrete transconjugant regions (magenta regions). e The mean interface straightness. c-e The quantities are for regions facing towards or away from the phage inoculation area. Each data point is a measurement for an independent experimental replicate (n = 3), and the P values are for two-sample two-sided Welch tests. Gray symbols are for the regions facing away from the phage inoculation area while black symbols are for the regions facing towards the phage inoculation area. Source data are provided as a Source Data file.

Fig. 4 | Surface-associated growth simulations in the absence of plasmids. a Illustration of how phage predation shifts the location of fastest growth from the biomass periphery to the interior. When phage are absent, cells at the biomass periphery have preferential access to resources supplied from the environment, and they therefore grow the fastest. When phage are present, the cells at the periphery are removed by phage, thus causing cells located in the interior to grow the fastest. b Representative scanning electron microscopy image (n = 5) of the periphery of a colony growing in the absence of phage. The green box identifies cells located at the periphery, where they have low packing density and are aligned far from parallel. The blue box identifies cells located in the transition zone between the periphery and interior, where they have intermediate packing density and are aligned closer to parallel. The orange box identifies cells located in the interior, where they have high packing density and are aligned closest to parallel. c-e Representative simulations (n = 5) of two competing bacterial strains (green and red cells) that have the same growth rate. We simulated biomass growth until reaching a population size of 18,000 cells in the (c) absence or (d) presence of peripheral killing. e There is no peripheral killing but we set the growth rate of cells located in the outer two layers of the periphery to zero; this effectively sets the location of fastest growth to be in the interior. f The mean interface straightness. g The intermixing index. f, g Each data point is a measurement for an independent simulation (n = 5) and the P values are for two-sample two-sided Welch tests. Gray symbols are in the absence of phage, black symbols are in the presence of phage, and yellow symbols are for internal growth. Source data are provided as a Source Data file.
Fig. 5 | Surface-associated growth simulations in the presence of plasmids. a, b Representative simulations (n = 5) of two competing bacterial strains where the red cells are potential recipients and the green cells carry a plasmid that reduces their growth rate by 5%. If a red cell receives the plasmid from a green cell, its growth rate is reduced accordingly and it appears blue. We simulated biomass growth until reaching a population size of 18,000 cells in the (a) absence or (b) presence of peripheral killing. c, d Proportion of transconjugants within the total biomass as a function of the plasmid transfer probability. c Each data point is a measurement for an independent simulation (n = 5) and the P values are for two-sample two-sided Welch tests. Gray symbols are in the absence of phage, black symbols are in the presence of phage. d Each data point is a measurement for an independent simulation (n = 5), the lines are the running means, and the shaded regions are the standard error. Source data are provided as a Source Data file.
Analysis of Stress and Deformation Characteristics of Deep-Buried Phyllite Tunnel Structure under Different Cross-Section Forms and Initial Support Parameters

Deep-buried soft rock tunnels exhibit low strength and easy deformation under the influence of high ground stress. The surrounding rock of the soft rock tunnel may undergo large deformation during the construction process, thereby causing engineering problems such as the collapse of the vault, bottom heave, and damage to the supporting structure. The Chengwu Expressway Tunnel II, considered in this study, is a phyllite tunnel with weak surrounding rock and poor water stability. Under the original design conditions, the supporting structure exhibits stress concentration and large deformation. To address these issues, three schemes were proposed: using a double-layer steel arch for support, weakening the steel arch close to the excavation surface, and weakening the steel arch away from the excavation surface. Using these schemes, the inverted arch radius was varied to explore its influence on the different support schemes. For simulation, the values of the inverted arch radius selected were 1300 cm, 1000 cm, and 700 cm. The proposed support plans were simulated using FLAC3D, and the changes in the pressure between the initial support and surrounding rock, the settlement of the vault, and the surrounding convergence were investigated. The numerical simulation results of monitoring the surrounding rock deformation show that the double-layer steel arch can effectively reduce the large deformation of the soft rock. When the stiffness of one of the steel arches was weakened, the support's ability to control the deformation was weakened; however, it still showed reliable performance in controlling deformation. However, changing the radius of the invert had an insignificant effect on the deformation and force of the supporting structure.

Introduction

In the construction of mountain tunnels, the original rock is often weak and broken and buried deep, and the geological conditions are relatively complicated. After tunnel excavation, the deformation of the surrounding rock is large and may last for a long time, thereby continuously increasing the stress on the support structure. The stress often exceeds the carrying capacity of the surrounding rock and initial support and further causes splitting of the primary lining, intrusion of the deformed surrounding rock into the tunnel clearance, and even collapse. These issues pose major challenges to the design and construction of soft surrounding rock tunnels. In view of the problems in the construction of soft rock tunnels, many studies based on on-site monitoring have been carried out [1-5]. The stability of the soft rock tunnel is affected by the structure, strength, and groundwater conditions of the surrounding rock [6-8]. Adachi et al. [9] analysed the stability of rectangular tunnels under soft rock conditions. Yang et al. [10] proposed that the rheology of chlorite schist under low stress is not obvious, but under high stress, obvious deformation may occur, and a viscoelastoplastic rheological model was used to simulate the influence of rock mass rheological characteristics on the surrounding rock stress and deformation during tunnel excavation. Luo et al.
[11] found that using the Singh-Mitchell model for deformation simulation can yield reliable results with regard to the deformation monitoring of carbonaceous rock tunnels, and the model can also be used to predict the deformation of carbonaceous rock tunnels. Asghar et al. [12] discovered through field observation that, under the condition of high ground stress, the phyllite tunnel is prone to extrusion deformation, tunnel cross-sectional area reduction, bottom heave, and other problems, and they used the Burgers creep model to simulate these phenomena in FLAC3D. Layered soft rock is prone to asymmetric deformation, which may adversely affect the supporting structure. Chen et al. [13] used Universal Distinct Element Code (UDEC) numerical simulation software to simulate the asymmetric deformation of the surrounding rock in a carbonaceous phyllite tunnel and found that the asymmetric deformation was caused by the coupling effect of the layered soft rock and shearing action along the foliation; under the action of structural shear stress, the layered soft rock may exhibit asymmetric deformation and cause cracking along the secondary lining. Soft rock may produce large deformation under high ground stress and damage the supporting structure, which poses a big challenge for the construction of soft rock tunnels. Because a large deformation of the soft rock is harmful to the project being undertaken, several scholars have further explored the mechanism of large deformation of soft rock tunnels [14-16]. To explore the deformation mechanism of a deep-buried soft rock roadway, Sun et al. [17] adopted a state-of-the-art infrared thermometer and full-field strain measurement system in a model test. It was pointed out that the change in the surrounding rock temperature can reflect the change in the surrounding rock behaviour: a decrease in temperature indicates a decrease in stress and the generation of microcracks, whereas an increase in temperature indicates the occurrence of friction during excavation. Furthermore, to better explore the internal causes of large deformation of soft rocks, several scholars have investigated the physical and mechanical properties of rock samples. Hu et al. [18] studied the mechanical properties of phyllite under different humidity conditions. They found that, in uniaxial and triaxial compression tests, the strain of dry phyllite increased linearly with increasing stress. When the confining pressure was 10 MPa, as the immersion time increased, Poisson's ratio of the sample increased and the compressive strength and elastic modulus decreased. Furthermore, the longer the water immersion time, the more obvious were the characteristics of the soft rock. Xu et al. [19] used triaxial compression tests to study the anisotropy of phyllite under different water content conditions. The microscopic damage mechanism on the fracture surface of the specimen was obtained through SEM fractographic studies and 3D laser profilometry. Additionally, they used the temporal and spatial distributions of the AE counts to analyse the initiation and propagation of microcracks. To cope with the damage caused by the large deformation of the soft rock, several coping methods have been proposed in engineering practice and theoretical research [20-24]. Zhang et al.
[25] believed that setting a reserved deformation in the tunnel support structure can effectively reduce the damage caused by the large deformation of the soft rock tunnel, and the reserved deformation should be set between the initial support and secondary lining. Moreover, it is important to select the size of the reserved deformation appropriately: if the reserved deformation is too large, the cost is high; if it is too small, the construction process becomes complicated. Through on-site monitoring and numerical simulation analysis, Gao et al. [26] found that the support structure of soft rock tunnels can be optimized by weakening the anchors while enhancing the initial support stiffness and strength, thereby saving costs. Sun et al. [27] studied a support system of negative Poisson's ratio (NPR) constant-resistance, large-deformation anchor cables. They compared the numerical simulation with the field test data and concluded that the NPR anchor support significantly reduced the asymmetric deformation of the tunnel surrounding rock, and the deformation of the surrounding rock was smaller than that with steel arch support. Tao et al. [28] proposed an improved "NPR cable + steel arch frame + concrete" support mode. Yang et al. [29] proposed a support scheme for deep-buried soft rock roadways, namely, a new "bolt-cable-mesh-shotcrete + shell" combined support. They used the UDEC software to simulate the support scheme and obtained reliable results.

Taking the Chengwu Expressway Tunnel II as the research object, a double-layer steel arch method for the initial support was proposed. Based on this method, the deformation and stress of the proposed support schemes were investigated by varying the inverted arch radius. The FLAC3D numerical simulation software was used to further simulate the proposed schemes. Additionally, the locations of the measured points and data in the actual engineering were referred to, to ensure the accuracy of the research.

Project Description. The Chengwu Expressway Tunnel II is a key project connecting Chengxian to Wudu (as illustrated in Figure 1). It is a separate two-way, four-lane highway tunnel located at a maximum depth of 1040 m, and the length of the tunnel is 3700 m. Furthermore, the mountain terrain is significantly steep: the natural slope of the inlet is roughly 40°, while that of the exit is roughly 50°. On-site direct shear and deformation tests were performed to obtain the basic parameters of the surrounding rock. The internal friction angle of the tunnel surrounding rock is 32°, the cohesive force is 0.27 MPa, and the elastic modulus is 1.3 GPa. The rock stratum is obliquely bedded, as depicted in Figure 2. Under the influence of the oblique bedding, the surrounding rock exhibits a bias stress that is generally consistent with the bedding direction. The inner contour of the tunnel has a semicircle of R = 540 cm, the sidewall is a large-radius arc with R = 840 cm, the transition arc between the sidewall and inverted arch has R = 100 cm, and the radius of the inverted arch is R = 1300 cm. During the construction process, the bench method was used, and the height of the upper step was 5.4 m. The lining structure was a composite lining, including the initial support, waterproof board, and secondary lining. The initial support adopted anchor-net shotcrete and steel arches. The secondary lining was cast-in-place concrete. C30 concrete, R27 hollow grouting anchors, and I18 steel arches were used for the initial support, and C30 concrete was used for the secondary lining.
During the excavation process, owing to the greater burial depth, the strength of the surrounding rock mass was low and the joints were significantly developed. Additionally, the tunnel supporting structure was greatly deformed after excavation. Owing to the abovementioned reasons, failures such as initial lining concrete cracking, steel arch deformation, secondary lining concrete failure, and inverted arch cracking and bulging often occurred at the work face [30]. The initial support cracking is illustrated in Figure 3.

Characteristics of the Surrounding Rock. The rock samples taken from the field site were made into thin sections, which were then placed under a polarizing microscope to observe the mineral composition of the rock samples [31-34]. The compositions of the rock samples are detailed in Table 1. Notably, the rocks are mainly composed of sericite, chlorite, epidote, quartz, and albite. Among them, sericite and quartz have the highest proportion, accounting for 64% of the total; except for the mineral components listed in Table 1, the remaining minerals account for 4% of the total. Figure 4 shows the results of the scanning electron microscope test. From the microstructure of the rock sample, it can be concluded that the schistosity plane has a strong silky lustre. From Figure 4(a), it can be seen that the squama sericites are aligned. Furthermore, from Figure 4(b), we can observe that the clumped clusters of the chlorites are obvious. The distribution of quartz and albite in the rock is parallel to the rock's lineation, and the oriented elongated particles in the rock are formed by metamorphic recrystallization. As can be seen, the deformation of the quartz particles is obvious, and the sericite intercalates between the quartz particles. Figure 4(c) shows the lineation formed by sericite and chlorite in the rock sample. It can be seen that the argillaceous components in the original rock also have different micro-bedding distributions. The original rock was a graphite sericite phyllite, which has undergone complex deformation and polymetamorphism. The parallel lineation foliation arrangement of graphite is depicted in Figure 4(d). Based on the microstructure, and owing to the loose and directional arrangement of the rock sample structure, it can be said that its properties are extremely unstable. Macroscopically, the sample shows low rock strength, small cohesion, and high Poisson's ratio. The rock is weak with regard to shear and tensile strength and is susceptible to external conditions.

Field Monitoring Plan. Considering the large deformation of the surrounding rock that occurs during tunnel excavation, a field test plan based on the tunnel was proposed. The scheme mainly included the monitoring of the stress and deformation of the tunnel support structures. In the field monitoring test, vibrating-wire pressure cells were used to test the surrounding rock pressure, and a total station was used to measure the vault crown settlement and the peripheral convergence displacement. The detailed arrangement of the measuring points is depicted in Figure 5. Pressure cells were arranged at the top of the arch (P1), the excavation positions of the upper (P2 and P3) and lower steps (P4 and P5), and the junction of the sidewall and invert (P6 and P7). The points for measuring the peripheral convergence were arranged at positions P4 and P5. Vibrating-wire pressure cells were used on-site to ensure the accuracy of the measurement data.
To avoid stress concentration when burying the pressure cell between the initial support and surrounding rock, the pressure cell should be closely attached to the back of the arch (as illustrated in Figure 6). The bottom of the cell should be flat, and there must be no gaps or unevenness between the steel arches and the cell. When spraying, a space of about 20 cm between the steel arch and surrounding rock must be densely sprayed; otherwise, the surrounding rock pressure cannot be normally transmitted to the pressure cell, thereby resulting in inaccurate test data. When burying the pressure cell between the secondary lining and initial support, the pressure cell must be fixed using steel bars and attached to the waterproof board of the initial support.

Field Monitoring Results. The stress and deformation of the supporting structure are the crucial factors affecting the stability of the surrounding rock. The ZK86 + 120 section of the left hole was selected as the monitoring section. The results for the pressure between the surrounding rock and initial support (PSRIS) are shown in Figure 7. The PSRIS suddenly increases around the 70th day of construction. This increase in PSRIS was caused by the excavation of the lower step, and the application of locking bolts further reduced the stress. With the excavation of the lower heading of the tunnel, the cross-section showed stress concentration at the bottom of the right wall and the top of the left arch, and a maximum pressure of 2.7 MPa was reached. Combined with the surrounding rock conditions of the tunnel, the rock layer was obliquely bedded. Under the influence of the oblique bedding, a bias stress was generated between the surrounding rock and initial support structure, which is generally consistent with the bedding direction. This bias stress was the direct cause of the abovementioned stress concentration. In addition, the stress at the vault and the bottom of the left wall was relatively large, i.e., close to 1 MPa. After the support was established, the stress at each measuring point gradually stabilized.

The results for the peripheral cumulative convergence (PCC) and vault cumulative subsidence (VCS) are shown in Figure 8. It can be seen that, during the construction process, the VCS and PCC gradually stabilized, and the rates of the VCS and PCC are similar. As the tunnel face advanced, the deformation of the supporting structure exhibited a three-stage growth characteristic that is closely related to the excavation process: (a) rapid increase after the lower-heading excavation, (b) slow growth after the lower-heading support, and (c) even slower growth after closing the invert. This was caused by the excavation of the upper heading, following which the stress of the surrounding rock was readjusted after being disturbed. The upper-heading arch foot was directly located on the loose soft rock layer, and the overall support strength was low. After the lower-heading support, the lower steel arch was located on the precast concrete structure block and the strength of the integral support structure was strengthened; with the inverted arch closed, the initial support formed a loop, and its load-bearing capacity was large. The rate of the VCS increased rapidly after excavation, and the growth rate slightly slowed down after the lower heading was supported. This trend shows that the support of the lower heading has a limited effect on slowing the VCS.
When the inverted arch was closed, the rate of the VCS tended to slow down, indicating that the closed inverted arch has a significant effect on slowing the rate of the VCS. The rate of the PCC significantly reduced after support.

Establishment of the Numerical Simulation Model. To control the supporting structure damage caused by the large deformation of the soft surrounding rock, an initial support method using a double-layer steel arch was proposed. FLAC3D was used to simulate the soft rock tunnel under different working conditions (WCs). The numerical simulation was based on the deformation and surrounding rock parameters of the Chengwu Expressway Tunnel II. A 100 m tunnel section was selected for simulation. Additionally, to eliminate the influence of the boundary effect, the horizontal extent of the model was taken as five times the hole diameter, and the vertical extent as three times the hole diameter. The excavation method was the core soil method, and a pregrouting advanced support method was adopted. Considering the amount of calculation required and the effectiveness of the method adopted, the rock mass within a certain range of the tunnel section was set as a layered rock mass, and contact surfaces were used to simulate the joint distribution of the rock layers around the tunnel. The cable unit was used to simulate the bolts and steel tubes. The initial support, secondary lining, and invert were all simulated using solid elements and their corresponding parameters. The model is depicted in Figure 9; (a) is a cross-sectional view of the tunnel and surrounding rock and (b) is the structure of the tunnel model, which shows the invert, lining, and upper and lower steps of the tunnel. The position of the centroid of the tunnel was used as the origin of the coordinates. The X-axis was along the horizontal direction of the tunnel, the Y-axis was along the axis of the tunnel, and the Z-axis was along the vertical direction of the tunnel. To monitor the vault settlement of the tunnel model and the surrounding convergence, three monitoring points (MPs) were set on the model section. The layout of the MPs referred to the on-site arrangement, as shown in Figure 5. The displacement of MP 1 represents the vault settlement, and the relative displacement of the remaining MPs represents the peripheral convergence.

Four WCs were simulated: the original support situation (WC I), the double-layer steel arch support situation (WC II), weakening the steel arch close to the excavation surface (WC III), and weakening the steel arch away from the excavation surface (WC IV). Furthermore, the side near the centre of the tunnel was considered as the inside, and the side near the excavation surface as the outside. The basic parameters of the surrounding rock are listed in Table 2; the data came from field tests.

Simulation of the Original Support Situation. The support method used in the actual project was a composite lining, which used a single-layer steel arch and was prereinforced by a leading conduit during the excavation process. The equivalent elastic modulus of the selected steel arch was 32 GPa. The steel arch under the initial support condition was a single-layer steel arch corresponding to the actual project. The deformation cloud picture is depicted in Figure 10. It can be seen that, under the original supporting conditions, the deformation of the tunnel was large: the maximum settlement of the vault reached 182 mm, and the maximum floor heave of the tunnel reached 326 mm.
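The 32 GPa value quoted above folds the steel arch stiffness into the shotcrete section. Such equivalent moduli are commonly estimated with the smeared-stiffness formula E = E_c + E_s·A_s/A_c; the sketch below uses this formula with illustrative numbers (the shotcrete modulus, rib spacing, and layer thickness are assumptions, not values stated in the paper, and the paper does not say how its 32 GPa figure was obtained):

```python
def equivalent_modulus(e_concrete, e_steel, area_steel, area_concrete):
    """Smeared equivalent modulus of a shotcrete layer stiffened by steel ribs.

    e_concrete    : elastic modulus of the shotcrete (GPa)
    e_steel       : elastic modulus of the steel (GPa)
    area_steel    : cross-sectional area of one steel rib (cm^2)
    area_concrete : shotcrete area the rib is smeared into (cm^2)
    """
    return e_concrete + e_steel * area_steel / area_concrete

# Illustrative numbers: I18 rib (~30.6 cm^2) in a 25 cm thick shotcrete
# layer at 60 cm rib spacing, with the shotcrete modulus taken as 28 GPa
print(equivalent_modulus(e_concrete=28.0, e_steel=206.0,
                         area_steel=30.6, area_concrete=25.0 * 60.0))
# ~32.2 GPa, of the same order as the 32 GPa used in the simulations
```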
A monitoring point was set in the middle of the model to monitor the vault subsidence and peripheral convergence of the tunnel during the entire excavation process, for comparison with the actual measured on-site data. The number of steps in the simulation process was converted into the number of monitoring days based on the construction process, which was then compared with the on-site monitoring results, as depicted in Figure 11. Because the complex geological conditions of the surrounding rock at the site and the construction speed and quality cannot be as uniform as those set in the numerical simulation, the site situation cannot be fully simulated. However, the numerical values and change trends of the VCS and PCC obtained from the simulation are roughly the same as those from on-site monitoring. The maximum difference between the measured and simulated values of the vault settlement was approximately 18.6%, and that of the peripheral convergence was approximately 18.2%; this proves that the simulation effect was relatively good. The deformation of the surrounding rock was large at the beginning of excavation since support measures had not yet been applied. As can be seen from Figure 11(a), the deformation in both curves increased rapidly after excavation and levelled off after the 6th day. After the 40th day, the excavation almost had no effect on the selected section, and the settlement rate of the vault further slowed down. As the numerical simulation became relatively stable, the changes in the deformation curve also became stable. As depicted in Figure 11(b), there is an obvious inflection point on the deformation curve around the 18th day after excavation, thereby indicating that the support was beginning to function, in line with the law of large deformation of the soft rock. The deformation amount in the numerical simulation began to converge gradually after 20 days, and the deformation rate of the field monitoring results was also gradually decreasing. The final vault settlement and peripheral convergence of the numerical simulation were 132 mm and 161 mm, respectively; the final vault settlement and peripheral convergence of the on-site monitoring were 149 mm and 163 mm, respectively.

Figure 10: Deformation of the tunnel structure under the original support.

Based on these findings and observations, three other support schemes were simulated.

Simulation of the Double-Layer Equal Rigidity Steel Arch Support. A method involving the use of double-layer steel arches for the initial support was proposed to control the large deformation of the soft surrounding rock. A steel arch with an equivalent stiffness of 42 GPa was used for simulation. The remaining parameters and construction methods were consistent with WC I. From Figure 12, it can be seen that the deformation trend of the tunnel model is consistent under the two WCs. The rate of vault settlement gradually decreased with time, and after the 36th day, the VCS tended to be stable. The rate of peripheral convergence continued to increase in the first 18 days and then gradually became stable. The construction of the support of the inverted arches was initiated on the 16th day. The supporting inverted arches could significantly reduce the PCC but had a negligible effect on the VCS. After the invert support was constructed, the rate of the PCC significantly decreased.
Compared with WC I, WC II could effectively control the rapid growth of the PCC produced by the single-layer steel arch support from the 18th day to the 25th day. Notably, the initial support is a flexible retaining structure, which allows the surrounding rock to deform and gives full play to the self-supporting capacity of the surrounding rock. Moreover, it can be seen from the figure that the deformation of the surrounding rock can be significantly reduced using the double-layer steel arch support. From the data, it can be concluded that, after replacing the single-layer steel arch support with the double-layer steel arch support, the final VCS decreased by 64.77% and the PCC decreased by 63.93%. The deformation of the soft rock tunnel could therefore be effectively reduced using the double-layer steel arch structure. Simulation of the Variable Stiffness Double-Layer Steel Arch Support. Although the use of double-layer steel arch support reduces the large deformation of the soft rock tunnel, the cost involved also increases. Therefore, to reduce cost by weakening one layer of the double-layer steel arch, two schemes were proposed for comparison: (a) weakening of the steel arch on the side close to the excavation face and (b) weakening of the steel arch on the side far from the excavation face. The weakening consisted of reducing the equivalent elastic modulus of one layer of the double-layer steel arch to 32 GPa. From Figure 13, it can be seen that, since only the support parameters of the steel arch were varied, the trend of the deformation curve under the three WCs is the same. The VCS curves of WC III and WC IV almost coincide, indicating that there is no obvious difference between the two WCs in terms of vault settlement. However, the PCC curves under these two WCs do not coincide, which indicates that when two layers of steel arches with different rigidities are used for support, it is more reasonable to place the steel arch with a lower rigidity on the outside. When one layer of the steel arch is weakened, the ability to control deformation may be reduced. Compared with the double-layer steel arch with equal rigidity, the PCC and VCS of WC III increased by 37.12% and 35.97%, respectively, and the PCC and VCS of WC IV increased by 37.78% and 37.98%, respectively. Weakening one layer of the double-layer steel arch thus reduces its ability to limit the deformation of the tunnel. However, from the viewpoint of the data, the control of tunnel deformation does not weaken significantly: compared with the double-layer steel arch strong support, the maximum increase in deformation does not exceed 30 mm, which has a relatively insignificant effect in actual engineering practice. As can be seen from Figure 13(b), the PCC of WC III is 2.01% smaller than in the case of WC IV. Notably, when two layers of steel arches with different rigidities are used for support, it is more reasonable to place the steel arch with a lower rigidity on the outside. Numerical Simulation with a Varying Arch Radius. This section further discusses the effect of changing the arch radius on the structural deformation. Based on the discussion presented in the previous section, it can be concluded that the support effect of WC III is slightly better than that of WC IV; therefore, WC IV is not discussed in the present section.
To obtain more obvious results, models with 1300 cm, 1000 cm, and 700 cm inverted arch radii were simulated (as illustrated in Figure 14). Conditions of Deformation. Considering that the stress and deformation can change with the radius of the inverted arch, a technique of strengthening the rigidity of the single-layer steel arch (WC V) was proposed. Using this method, the performance of the reinforced single-layer steel arch under different arch radii can be analysed. WC V increases the equivalent rigidity of the single-layer steel arch of WC I from 32 GPa to 42 GPa. Figure 15 shows the deformation after changing the radius of the invert under WC V. Around the 16th day, the inverted arch began to provide the required support, and the deformation curves of the three invert-radius conditions began to diverge. Merely increasing the rigidity of the single-layer steel arch under the original conditions already played a significant role in reducing the structural deformation: the VCS of WC V decreased from 149 mm to 85.79 mm, and the PCC of WC V decreased from 163 mm to 104.73 mm, i.e., the VCS and PCC decreased by 42.42% and 35.75%, respectively. In the case of a single-layer steel arch, changing the radius of the inverted arch had an insignificant effect on the deformation of the structure. The final settlement of the vault decreases with the decrease in the arch radius; the settlements were 85.79 mm, 84.44 mm, and 83.14 mm for the 1300 cm, 1000 cm, and 700 cm radii, respectively. The change law of the PCC is consistent with that of the VCS, and the deformation under the three radii reached 104.73 mm, 102.57 mm, and 95.46 mm, respectively. Thus, when the reinforced single-layer steel arch support was used, the structural deformation decreased with the decrease of the arch radius, but the effect was not obvious. Regarding WC II and WC III, it can be seen from Figures 16 and 17 that varying the radius of the inverted arch had a negligible effect on the deformation tendency of the supporting structure. The reduction of the PCC caused by reducing the arch radius from 1300 cm to 1000 cm is more obvious than that caused by reducing the arch radius from 1000 cm to 700 cm. This shows that the change in the radius of the inverted arch has a greater effect on the peripheral convergence, and that after reducing this radius to a certain extent, the gain becomes smaller. From the numerical point of view, reducing the arch radius on the basis of the original design has an insignificant effect on the deformation of the supporting structure: under the same WC, the variation in VCS caused by different arch radii does not exceed 3.5%, and that in peripheral convergence does not exceed 6.5%. Changing the radius of the inverted arch, therefore, has a negligible effect on reducing the deformation of the supporting structure. Conditions of Stress. As shown in Figure 5, seven MPs were selected in the numerical simulation to monitor the stress. Based on the results, the stress of the supporting structure was analysed. Because the tunnel supporting structure was mainly subjected to compressive stress, the minimum principal stress was statistically analysed; the minimum principal stress after the structure stabilized is presented in Table 3. From Table 3, it can be seen that the internal stress of the reinforced single-layer steel arch support (WC V) is exceptionally large, roughly double that of the double-layer steel arch support. Additionally, the stress conditions of WC II and WC III are similar.
In WC III, only the stresses at the left and right arch feet are greater than those in WC II; the stress at the other positions is lower than that of WC II. This finding confirms the rationality of weakening the rigidity of the outer steel arch relative to WC II. Furthermore, by varying the radius of the inverted arch, the following observations were made: (a) for WC V, there is no obvious rule for the change in stress, although the stress at the left and right arch feet increases as the radius of the inverted arch decreases; (b) for WC II, the change in the radius of the inverted arch has almost no effect on the supporting structure stress; (c) for WC III, the change in the radius of the inverted arch has almost no effect on the MP 1-MP 4 positions. However, when the arch radius changed from 1300 cm to 1000 cm, the stress at the left and right arch feet was reduced by 25.6% and 24.9%, respectively, a more significant effect. In this process, the stress of the WC III support structure was only slightly different from that of WC II. This shows that the stress of the WC III support structure can be optimized by reducing the arch radius from 1300 cm to 1000 cm. Conclusions. In this paper, on-site monitoring data were collected from the Chengwu Expressway Tunnel II, and laboratory tests were performed for this tunnel. Furthermore, simulation was performed using FLAC3D. The present study puts forward a field-monitoring plan using a double-layer steel arch for the initial support to address the problem of large deformation of soft rock tunnels. In the numerical simulation, the VCS and PCC of the double-layer steel arch were monitored using the proposed model, and the influence of different invert radii on different support schemes was compared and analysed. The major conclusions of this study are as follows: (1) The results show that the deformation of the soft rock tunnel can be effectively controlled using the double-layer steel arch support. When the equal-rigidity steel arch support was used, the double-layer steel arch support reduced the VCS by 64.77% and the PCC by 63.93%. Additionally, the use of a double-layer steel arch support can greatly reduce the rapid growth of surrounding rock deformation before the inverted arch is closed. (2) When two layers of steel arches with different stiffnesses are used, the ability to control deformation is weaker than that of an equal-rigidity double-layer steel arch support. However, from the numerical point of view, two layers of steel arches with different stiffnesses also play a significant role in reducing the tunnel deformation. Compared with the double-layer steel arch with equal rigidity, the PCC and VCS of WC III increased by 37.12% and 35.97%, respectively, and the PCC and VCS of WC IV increased by 37.78% and 37.98%, respectively. When using double-layer steel arches with different rigidities, it is recommended that the steel arch with the lower rigidity be placed on the outside. (3) When the equivalent stiffness of the single-layer steel arch of the original support scheme was increased from 32 GPa to 42 GPa, the VCS and PCC decreased by 42.42% and 35.75%, respectively.
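As a quick sanity check, the percentages in conclusion (3) follow directly from the deformation values reported earlier (on-site final values of 149 mm VCS and 163 mm PCC under the original support, versus 85.79 mm and 104.73 mm under WC V). A minimal Python sketch of the arithmetic:

# Verify the WC V deformation reductions quoted in the text.
baseline = {"VCS": 149.0, "PCC": 163.0}  # mm, final on-site values (original support)
wc5 = {"VCS": 85.79, "PCC": 104.73}      # mm, reinforced single-layer arch (WC V)

for key in baseline:
    reduction = (baseline[key] - wc5[key]) / baseline[key] * 100.0
    print(f"{key}: {baseline[key]:.2f} mm -> {wc5[key]:.2f} mm "
          f"({reduction:.2f}% reduction)")
# Prints 42.42% for VCS and 35.75% for PCC, matching the reported values.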
2021-05-11T00:04:37.932Z
2021-01-07T00:00:00.000
{ "year": 2021, "sha1": "a42e81341da1e7d6744a8d1b1444fcd5a59e46dd", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1155/2021/8824793", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "1f66ecdcf14ffbe9e2d3dbd9ca77380239ed3774", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Geology" ] }
195382780
pes2o/s2orc
v3-fos-license
Ecological islands: conserving biodiversity hotspots in a changing climate

Jennifer Cartwright

For decades, botanists have recognized that rare plants are clustered into ecological "islands": small and isolated habitat patches produced by landscape features such as sinkholes and bedrock outcrops. Insular ecosystems often provide unusually stressful microhabitats for plant growth (due, for example, to their characteristically thin soils, high temperatures, extreme pH, or limited nutrients) to which rare species are specially adapted. Climate-driven changes to these stressors may undermine the competitive advantage of stress-adapted species, allowing them to be displaced by competitors, or may overwhelm their coping strategies altogether. Special features of insular ecosystems -such as extreme habitat fragmentation and association with unusual landscape features -could also affect their climate sensitivity and adaptive capacity. To help predict and manage climate-change impacts, I present a simple conceptual framework based on a synthesis of over 300 site-level studies. Using this framework, conservation efforts can leverage existing ecological knowledge to anticipate habitat changes and design targeted strategies for conserving rare species.

US Geological Survey, Lower Mississippi-Gulf Water Science Center, Nashville, TN (jmcart@usgs.gov)

In a nutshell:
• Insular ecosystems are produced by distinctive landscape features such as rock outcrops, sinkholes, cliffs, and springs
• These ecosystems are small, naturally fragmented, and geographically "anchored" within the landscape
• Many insular ecosystems are biodiversity hotspots for rare plants but the impacts of climate change on these systems are still poorly understood
• Habitat suitability for rare, specially adapted plants is maintained by physical stressors that inhibit competitors from the surrounding landscape
• Climate-driven shifts to physical stress regimes are therefore a promising framework in which to anticipate climate-change effects on biodiversity in insular ecosystems, including identification of potential climate microrefugia

Climate change poses major challenges to the conservation of global biodiversity, including elevated risks of species' extinctions (Bellard et al. 2012). Biodiversity risk assessments have commonly focused on geographically large ecosystems (Sala et al. 2000; Garcia et al. 2014). However, a wealth of plant biodiversity is contained in insular ecosystems, defined as small, isolated patches of unique habitat that support disproportionately large numbers of rare species (Collins et al. 2001; Kelso et al. 2001; Loehle 2006). Examples of such systems include rock outcrops, sinkhole wetlands, high-elevation balds (forest openings), cliffs, springs, bogs, glades, and barrens (Figure 1; Noss 2013). From an ecological perspective, an "island" is a "patch of suitable habitat surrounded by unfavorable environment that limits the dispersal of individuals" (Brown 1978).
In this review, I define insular ecosystems (Figure 2) as having (1) individual patches that are spatially isolated from one another ("islands") and embedded within a matrix of a contrasting ecosystem (the "sea"); (2) total spatial areas that are very small (typically <5%) compared to the areas of their surrounding regions; and (3) boundaries marked by steep environmental gradients. Although many insular ecosystems have received attention for their rare species and unusual habitats, climate-change effects on biodiversity in these ecosystems are still poorly understood (Cartwright and Wolfe 2016). The literature on insular ecosystems is fragmented, with many small site-level studies (see list of references in WebPanel 1) but few synthetic reviews that are limited to a single insular ecosystem (eg limestone glades) or a set of related types (eg various kinds of depression wetlands). This review provides a holistic qualitative synthesis of insular ecosystems to derive common themes across diverse ecosystems and to explore the special features of insular ecosystems that require consideration in climate-change vulnerability assessments. A simple conceptual model is presented that can help to anticipate climate-change effects on plant communities in insular ecosystems. This approach also incorporates non-climate threats to assess overall vulnerability (exposure, sensitivity, and adaptive capacity; Stein et al. 2013). Although insular ecosystems are a global phenomenon, this review uses the southeastern US as a case study, a region rich in rare plant species that are endemic to insular ecosystems (ie confined to a particular ecosystem and requiring that ecosystem for habitat) (Estill and Cruzan 2001; Loehle 2006; Noss 2013). Concepts presented here are based on a synthesis of more than 300 localized botanical and ecological studies conducted in a variety of ecosystems over several decades (WebPanel 1).
Insular ecosystems as biodiversity hotspots. Biodiversity hotspots -localized concentrations of rare and endemic species -are promising conservation targets to mitigate biodiversity losses resulting from climate change (Myers et al. 2000; Noss et al. 2015). Some insular ecosystems arguably qualify as biodiversity hotspots, as evidenced by the richness of endemic and globally rare plant species (Figure 3; WebTable 1; references marked with "B" in WebPanel 1) in ecosystems that encompass very small geographic areas (Collins et al. 2001; Cartwright and Wolfe 2016). Although quantitative analyses of rare plant richness relative to island size are often difficult to calculate (WebPanel 2), botanical accounts commonly emphasize the extraordinary diversity of rare plants in certain insular ecosystems (WebPanel 1). In the eastern US, for example, 11 species are endemic or nearly endemic to sandstone rockhouses (recesses under cliff overhangs; Walck et al. 1996). The Ketona dolomite glades of Bibb County, Alabama, have been described as a "botanical lost world" with at least 60 plant species of conservation concern and nine species found nowhere else on Earth (Allison and Stevens 2001). Although the focus of this review is on plant communities, several insular ecosystems also provide habitat to animals of conservation concern, especially rare invertebrates (Braunschweig et al. 1999; Wiser and White 1999; Cartwright and Wolfe 2016). Many insular ecosystems also contain disjunct populations of plants (that is, isolated populations that are geographically distant from the primary range of the species), and therefore contribute to regional biodiversity by supporting species that are globally common but regionally rare (Braunschweig et al. 1999; Hill 1999; Wolfe et al. 2004). For instance, the fern Wright's cliffbrake (Pellaea wrightiana) grows primarily in the US Desert Southwest but has disjunct populations more than 1000 km to the east in granite outcrops of North Carolina (Wyatt and Fowler 1977). Similarly, the population of mountain alder (Alnus viridis crispa) in grassy balds on Roan Mountain in Tennessee and North Carolina is more than 1000 km from its primary range in boreal and Arctic regions to the north. Some insular ecosystems support globally rare associations (groupings) of plants.
For example, an association of overcup oak (Quercus lyrata), river birch (Betula nigra), and resurrection fern (Pleopeltis polypodioides) is known only from a single sinkhole wetland in Tennessee (Wolfe et al. 2004). Other unique plant associations are restricted to small clusters of sinkhole wetlands or to flood-scoured riparian outcrops (Fleming et al. 2012). Insular ecosystems are not only biodiversity hotspots but also key contributors to geodiversity (the diversity of geology, soil, and topography across a landscape). Many insular ecosystems are associated with distinctive geologic and topographic features, such as sinkholes, river gorges, rock outcrops, and springs (Collins et al. 2001; Kelso et al. 2001). Insular ecosystems exemplify the critical link between geodiversity and biodiversity, a concept long recognized by botanists (Kruckeberg 1986) and increasingly emphasized in conservation planning (Anderson and Ferree 2010; Comer et al. 2015). Because unusual landscape features may produce regionally rare microenvironments, their inclusion in conservation networks may be particularly important for conserving diversity not only of species but also of ecological and evolutionary processes (Anderson and Ferree 2010; Comer et al. 2015). Special features of insular ecosystems. Although a diverse set of upland and wetland habitats qualify as insular ecosystems, they all share special features that set them apart from more geographically widespread ecosystems, with important implications for climate-change vulnerability and conservation strategies. Naturally fragmented habitats. By their very nature, insular ecosystems represent highly fragmented habitats (ie many small and isolated habitat patches as opposed to large, geographically continuous habitats). Because climate change is driving species' migrations (Loarie et al. 2009; Corlett and Westcott 2013), relative habitat connectivity along climate gradients is a component of extinction risk (Klausmeyer et al. 2011). Habitat fragmentation is generally regarded as a conservation problem and a by-product of human activities (Haddad et al. 2015). Insular ecosystems complicate these assumptions because their fragmentation is both natural (habitat islands were small and isolated prior to human interference) and human-caused (many islands have been degraded or destroyed; Noss et al. 1995). It is unclear whether maintaining "large, intact landscapes" (Watson et al. 2011) is useful for conserving biodiversity in insular ecosystems. On the one hand, the vast majority of those landscapes do not represent suitable habitat for rare, endemic species; on the other hand, conservation of the surrounding "sea" might deter future destruction of habitat islands and reduce problems associated with anthropogenic fragmentation (eg invasive species' introductions). Naturally fragmented ecosystems may not be adequately described by the current science of habitat connectivity, and may require specialized conservation strategies (Cartwright and Wolfe 2016) that account for the role of genetic isolation in the evolution and maintenance of biodiversity in these ecosystems (Kruckeberg 1986; Collins et al. 2001). Geologic and topographic anchoring. Predictions of biodiversity loss have emphasized species' abilities to "keep pace" with changing climate (Loarie et al. 2009; Corlett and Westcott 2013), but approaches assuming steady migration across homogeneous landscapes are clearly inappropriate for insular ecosystems (Figure 4; Ibáñez et al. 2006).
Indeed, species confined to montane ecosystems have been identified as having "nowhere to go" (Wiser and White 1999; Loarie et al. 2009) because they occupy microhabitats at extreme ends of local environmental gradients that are "anchored" in place by geologic and topographic features that cannot move across the landscape. A similar situation confronts ecosystems that have received less attention, such as cliffs, rockhouses, springs, sinkhole wetlands, certain bogs and fens, and barrens defined by particular geologic substrates (eg serpentine and shale). For rare plant communities restricted to such landscape features, migration may require "island hopping", or long-distance dispersal between isolated and geographically static habitat patches. As such, dispersal modes of species endemic to insular ecosystems may be an important control on adaptive capacity in response to climate change (Beever et al. 2016). Characteristic stress regimes. Insular ecosystems are generally characterized by high levels of physical stress (including temperature and other factors listed below) relative to their surrounding landscapes (references marked with "S" in WebPanel 1). Stress regimes -characteristic combinations of natural physical (as opposed to biological) stress factors, such as temperature, pH, water availability, soil depth, and disturbance -appear to be vitally important in maintaining habitat for rare and endemic plants by impeding competition. Five general stress-regime categories characterize the environmental conditions across a variety of insular ecosystems (WebTable 2). For example, although granite outcrops, sandstone outcrops, and limestone glades within the southeastern US have different geographies and geologic substrates, they share a common set of stressful conditions: extremely thin soil, scarce shade, and seasonally hot and dry soil conditions. This stress regime has been characterized by direct measurement (eg soil temperatures as high as 50°C and soil moisture below the permanent wilting point; Baskin and Baskin 1999; Braunschweig et al. 1999; Shure 1999). This stress regime is also reflected in specialized adaptations of endemic plants (WebTable 2), several of which resemble desert plants in their physiology even though the southeastern US generally receives abundant rainfall (Shure 1999). Intriguingly, several glade and outcrop ecosystems also include seasonally wet microenvironments in depressions or seepage areas, which support aquatic endemic plants with their own specialized adaptations (WebTable 2; Shure 1999). Numerous studies of plants endemic to insular ecosystems have demonstrated two common themes (WebPanel 1). First, endemic plants typically have specialized adaptations to tolerate environmentally stressful conditions specific to the ecosystems they occupy. Second, endemic plants are generally poor competitors (ie slow-growing and shade intolerant; Baskin and Baskin 1988). Endemic plants therefore find competitive advantage in insular ecosystems, where physically stressful conditions inhibit their competitors from the surrounding landscape (Braunschweig et al. 1999; Shure 1999; Cofer et al. 2008). For instance, plants endemic to isolated flood-scoured riparian outcrops have a range of coping strategies, including special anchoring structures, "mechanical fuses" to protect roots, and the ability to use floods to facilitate reproduction (Bailey and Coe 2001).
Competitors from the surrounding forest lack these adaptations and are unable to withstand the destructive floods, which maintain the open, sunny habitat islands required by river scour endemics (Vanderhorst et al. 2007; Wolfe et al. 2007). Climate change in insular ecosystems: gaps in understanding. A coherent picture has emerged regarding the biodiversity, biogeography, and characteristic stress regimes of insular ecosystems based on hundreds of botanical and ecological studies. However, few studies have explicitly addressed climate change (WebPanel 1; Damschen et al. 2012). For models of future climate to be useful in predicting endemic species' distributions or extinctions in insular ecosystems, they must be appropriate in scale and mechanistically linked to the stress regimes that maintain habitat for endemics. The importance of microscale (<<1 km2) differences in climate exposure based on topography is increasingly recognized in conservation biology (Hannah et al. 2014) and may be critical to certain insular ecosystems (eg cliffs, rockhouses, high-elevation outcrops). Insular ecosystems can pose challenges of scale and resolution for modeling approaches because while the overall distribution of islands may encompass hundreds of thousands of square kilometers, individual islands harboring rare plant populations may be smaller than an individual 25-m × 25-m grid cell (Shure 1999; Cartwright and Wolfe 2016). Downscaled climate models rarely reach resolutions finer than 25 m (Potter et al. 2013), and as such they may obscure ecologically important gradients even within larger islands (ie up to a few hectares), necessitating site-specific microclimatic models. For example, approaches combining light detection and ranging (lidar)-derived topography with dense networks of climate loggers (George et al. 2015) can achieve sub-meter resolution (Lenoir et al. 2017). Although recent advances in microclimatic modeling have focused primarily on temperature and humidity (Lenoir et al. 2017), other important microclimatic variables include soil moisture, wind speed, snow depth, frost exposure, fog formation, and solar radiation (Potter et al. 2013; Hannah et al. 2014). Microscale differences in substrate geology, geochemistry, soil characteristics, and hydrologic and fire regimes may also play fundamental roles in translating shifts in regional climate into changing microenvironmental conditions shaping species' distributions. After all, favorable climate (even microclimate) alone cannot predict future habitat locations for species restricted to spring-fed wetlands if the new locations lack springs. Moreover, springs must continue to provide the proper physical stress regimes, requiring analysis of climate controls on spring hydrology and geochemistry. Therefore, scientists must ask not only "how fast can species move?" but also "how will regional climate change affect the island microhabitats and stress regimes that species experience as they move?" Anticipating climate-change effects in insular ecosystems. To manage anticipated changes in insular ecosystems, we should first examine how climate change will alter ecologically important stress factors, and how changing stress regimes will affect competitive dynamics of plant communities. A conceptual model of stress-regime alteration. A fundamental ecological concept is that "along key environmental gradients, species appear to find one direction to be physically stressful and the other to be biologically stressful" (Guisan and Thuiller 2005).
The horizontal axes in Figure 5 depict this trade-off between biological stress (ie competition, on the left) and physical stress (on the right). Grime's (1977) conceptualization of plant strategies ("Competitive", "Ruderal", and "Stress-tolerant") can help generate and test hypotheses about ecological implications of changing stress regimes. Endemic plants in insular ecosystems are typically "S" strategists: that is, poor competitors that possess specialized adaptations to cope with physical stress. Insular ecosystems typically contain regionally rare microenvironments of high stress and low competition, with habitat suitability for endemic plants maintained by physical stress levels between certain upper and lower bounds (shaded area in Figure 5a). This baseline stress regime results from interactions between regional climate and local microhabitat (eg soil thickness, degree of shading, local hydrology and geochemistry). What if climate change reduces physical stress in an insular ecosystem? For example, warmer growing seasons with more frequent droughts could lower water tables and accelerate decomposition in mountain bogs, possibly reducing such stress factors as acidity, anoxia, and nutrient limitation (Schultheis et al. 2010). This would constitute a leftward shift (Figure 5b), reducing physical stress and increasing competition. Rare, stress-adapted bog plants could be displaced by woody encroachment from stress-intolerant competitors ("C" strategists; Grime 1977) from the surrounding landscape. Conversely, what if climate change intensifies physical stress? For instance, the same regional warming and drought intensification could make seasonally hot and dry conditions in rock outcrops even more extreme. Greater physical stress would shift the stress regime to the right (Figure 5c), which could overwhelm the coping strategies of even specially adapted endemic species. Notably, this conceptual model suggests that a large enough shift in either direction (ie rapidly increased or decreased physical stress) could potentially degrade habitat for rare endemic plants. Toward a holistic assessment of ecosystem vulnerability. This conceptual model of stress-regime alteration (Figure 5) can be integrated into a holistic assessment of ecosystem vulnerability (WebFigure 1), incorporating non-climate threats and the biological and physical characteristics of each ecosystem (Klausmeyer et al. 2011; Watson et al. 2011; Pearson et al. 2014). Management efforts can be informed by anticipating stress-regime alteration and gleaning data from diverse sources (WebTable 3), including site-specific studies from the past several decades (examples in WebPanel 1). Regional climate change is mediated by microhabitat characteristics to produce localized changes in the physical environment that organisms experience as climate-change exposure (Storlie et al. 2014). These microhabitat changes produce shifts along the spectrum of competition and physical stress (Figure 5), which are mediated by stress-tolerance physiology (relative abilities of endemic plants and their competitors to cope with changing physical stressors) to influence climate-change sensitivity in insular ecosystems. Baseline stress regimes in some insular ecosystems are well characterized (references marked with "S" in WebPanel 1) and sensitivity to stress-regime alteration can be inferred from a variety of sources. For example, thermal tolerance of endemic plant seeds (Platt 1951), demographic simulations (Bernardo et al. 2016), and observed community shifts from human alterations and droughts (Pechmann et al. 1991) all provide clues about potential sensitivity to stress-regime change.
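The logic of the stress-bounds model in Figure 5 can be made concrete with a minimal illustrative sketch in Python. The single stress value and its upper and lower bounds below are purely hypothetical stand-ins for the multivariate stress regimes of WebTable 2:

# Toy illustration of the Figure 5 stress-regime logic (hypothetical values).
# Physical stress is summarized as one number; real regimes are multivariate
# (temperature, pH, water availability, soil depth, disturbance, etc.).

LOWER, UPPER = 4.0, 8.0  # hypothetical bounds of the baseline stress regime

def habitat_outcome(stress: float) -> str:
    """Classify habitat suitability for stress-adapted endemic plants."""
    if stress < LOWER:
        # Leftward shift (Figure 5b): competition increases.
        return "endemics displaced by competitors from the surrounding landscape"
    if stress > UPPER:
        # Rightward shift (Figure 5c): coping strategies overwhelmed.
        return "physical stress overwhelms even stress-adapted endemics"
    # Minimal shift (Figure 5a): island may remain a stable refugium.
    return "habitat remains suitable for stress-adapted endemics"

for s in (3.0, 6.0, 9.5):
    print(f"stress={s}: {habitat_outcome(s)}")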
In addition, dozens of studies have described non-climate threats (references marked with "T" in WebPanel 1) that may alter species' sensitivity to microclimatic change or limit the range of adaptive responses (Bellard et al. 2012; Damschen et al. 2012; Beever et al. 2016). Mitigating these non-climate threats may be important for enhancing adaptive capacity and maintaining suitable microhabitats for rare plants. Application of the framework depicted in WebTable 3 and WebFigure 1 can be illustrated with a hypothetical set of insular wetlands. Existing knowledge of the baseline stress regime for endemic plants in insular ecosystems, combined with downscaled climate models indicating warmer, drier growing seasons, could suggest shorter flooding duration and localized changes in soil and water chemistry.

Figure 5 (caption): ...based on alteration of ecologically important stress regimes. In (a), high habitat suitability for stress-adapted endemic plants has historically been maintained -and competitors inhibited -by relatively high levels of physical stress (gray shading). (b) If climate change reduces physical stress, then stress-adapted endemics could lose their competitive advantage and be displaced by competitors from the surrounding landscape. (c) Conversely, if climate change increases physical stress, then even the coping strategies of stress-adapted endemics could be overwhelmed.

Sensitivity to this changing stress regime, and the resulting changes in competitive dynamics, would be mediated by stress tolerance abilities and thresholds of inundation-adapted plants relative to their upland competitors (Sharitz 2003), and might also be influenced by non-climate threats such as water pollution or invasive species. Some wetland specialists may adapt in place through physiological changes (eg improved efficiency of stress adaptations or changing seasonal timing of life cycles). These capacities will be mediated by species' traits, as well as by population genetics and demographics. Some species might adapt in place by shifting toward deeper and more frequently flooded zones, depending on the availability and diversity of newly suitable microhabitats. Adaptive capacity via long-distance dispersal to distant wetlands with more suitable hydrology may be constrained by species' traits (eg dispersal mode) or interspecies interactions (eg availability of pollinators or presence of pathogens). Non-climate threats can also constrain adaptive capacity, such as declining regional wetland connectivity as wetlands are degraded by peat harvesting, logging, and fire suppression (Buhlmann et al. 1999; Roberts et al. 2004). Such considerations can help tailor adaptive management plans to the rare plant populations of each insular ecosystem. Will insular ecosystems provide climate-change refugia? Population genetics and paleoecological evidence suggest that certain insular ecosystems -such as karst depression wetlands and high-elevation rock outcrops -sheltered relict plant populations during previous climatic shifts (Braun 1955; Wiser 1994). Concentrations of endemic species and relict lineages have been noted across a variety of insular ecosystems (WebTable 1) and are hallmark characteristics of paleoclimate refugia (Harrison and Noss 2017).
However, refugia from current anthropogenic climate change may not be spatially or ecologically equivalent to past climate refugia (Keppel et al. 2015). Furthermore, a detailed comparison of the defining characteristics of insular ecosystems and climate refugia reveals important conceptual differences (WebTable 4). The functioning of climate-change refugia depends on species-specific habitat requirements and life-history traits (Keppel et al. 2015; McLaughlin et al. 2017; Stralberg et al. 2018). Figure 5 provides a framework to anticipate which types of insular systems -and perhaps even which individual islands -might provide climate refugia for which groups of species. Microclimate stability is a defining feature of climate refugia (Ashcroft et al. 2012; Morelli et al. 2016), specifically "stable" refugia (McLaughlin et al. 2017). Hence, an island will only function as a stable refugium for stress-adapted endemics -including many rare and threatened plant species -if the island's stress regime is relatively steady through time (minimal horizontal shift in Figure 5a). However, if physical stress is reduced due to climate change (Figure 5b), the island may function as a "relative" refugium (McLaughlin et al. 2017). Specifically, the island may serve as a refugium for competitors from the surrounding landscape (eg upland woody species that encroach into bogs as water levels decline and soil geochemistry becomes more similar to that of uplands) but not for stress-adapted endemics (eg displaced rare bog plants that may be vulnerable to local extirpation). Conversely, sharp increases in physical stress (Figure 5c) may prevent islands from providing refugia for any vascular plant species. For example, rock outcrops that are already warmer and drier than their surroundings are unlikely to provide refugia from a warming, drying regional climate, and may in fact become increasingly inhospitable to all vascular plants. Importantly, individual islands within an insular ecosystem type may vary widely in their degree of stress-regime change (and therefore their refugial capacity), given similar exposure to regional climate change. For instance, some springs may provide stable microrefugia in a drying climate whereas others nearby may cease flowing entirely (Cartwright and Johnson 2018). Conclusions and management implications. Although insular ecosystems are diverse in their natural histories, geomorphologies, and the species they support, they share common ecological features that require special consideration in conservation planning. First, insular ecosystems commonly represent concentrations of large numbers of rare species in small areas (ie biodiversity hotspots; Myers et al. 2000; Allison and Stevens 2001; Kelso et al. 2001). Second, species endemic to insular ecosystems typically share risk factors for extinction, including small and isolated populations, rare microclimates, and highly specialized habitat requirements (Estill and Cruzan 2001; Bellard et al. 2012; Pearson et al. 2014). Third, because the landscape features that create insular ecosystems (eg cliffs, sinkholes, rock outcrops) contribute to regional geodiversity and microclimate diversity, conservation of these features may be part of a coarse-filter approach to biodiversity conservation that is relatively robust to climate change (Anderson and Ferree 2010; Comer et al. 2015; Lawler et al. 2015).
The special characteristics of insular ecosystems also suggest that they may require management approaches that differ from those for large landscapes. For example, managers may need to consider geographic patterns and spatial arrangement of islands within an archipelago to promote microhabitat diversity. Although small islands might seem expendable, their conservation may still be important, as they may provide distinct microenvironments and serve as "stepping stones" to facilitate movement between larger islands (Sharitz 2003; Cofer et al. 2008; Hannah et al. 2014). In some cases, management decisions may be informed by the potential for certain insular ecosystems to provide climate-change microrefugia (WebTable 4; Morelli et al. 2016), bearing in mind that refugial capacity may vary among islands within an ecosystem type. Management options in the face of climate change (eg land acquisitions, habitat restoration, species' reintroductions, assisted colonization of new sites) commonly require managers to predict which landscape locations will provide suitable habitat for species in the future. However, even when evidence-based guidance exists on microhabitat requirements for rare plant conservation and reintroduction, it may not incorporate climate change (eg Thompson et al. 2006). To be useful in predicting ecological responses in insular ecosystems, downscaled climate forecasts or site-level microclimate models must be linked to the proximal drivers of ecosystem structure and function, such as characteristic stress regimes that maintain habitat for rare species. Thanks to decades of research on individual sites and species (WebPanel 1), physical stress regimes and competitive dynamics in many insular ecosystems are now well understood (Cartwright and Wolfe 2016). Synthesis across studies reveals that a few general categories of stress regimes are common to many insular ecosystems (WebTable 2). Using a simple conceptual model of stress-regime alteration (Figure 5) within a holistic framework for ecosystem vulnerability assessment, this existing knowledge can be leveraged into ecological predictions to inform management strategies (WebFigure 1; WebTable 3). For example, if regional climate forecasts and microclimate characteristics indicate that physical stress regimes will weaken (Figure 5b), then managers may employ strategies to maintain habitat for rare plants by targeting their competitors (eg removal of woody shrubs that encroach into bogs; Schultheis et al. 2010). Alternatively, if stress regimes are expected to intensify (Figure 5c), then managers might assess the capacity of vulnerable species to move to other microhabitats -or other islands -with less extreme stress levels. In either case, long-term monitoring of rare plant populations will continue to be important (Collins et al. 2001) in order to validate climate-change hypotheses and assess the effectiveness of conservation strategies. Similarly, continuing the long legacy of botanical and ecological investigations in these fascinating ecosystems (WebPanel 1) will become increasingly important in the context of accelerating global change.
2019-06-26T13:59:49.109Z
2019-06-03T00:00:00.000
{ "year": 2019, "sha1": "411ca3d1b5518b69ee5227d14835f8c941f1bcf9", "oa_license": "CCBY", "oa_url": "https://esajournals.onlinelibrary.wiley.com/doi/pdfdirect/10.1002/fee.2058", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "b1a3fdacb45a2328006273ad0e9285d70c2f6a75", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Geography" ] }
233261225
pes2o/s2orc
v3-fos-license
Computational Lithography Using Machine Learning Models: Machine learning models have been applied to a wide range of computational lithography applications since around 2010. They provide higher modeling capability, so their application allows modeling of higher accuracy. Many applications which are computationally expensive can take advantage of machine learning models, since a well trained model provides a quick estimation of outcome. This tutorial reviews a number of such computational lithography applications that have been using machine learning models. They include mask optimization with OPC (optical proximity correction) and EPC (etch proximity correction), assist features insertion and their printability check, lithography modeling with optical model and resist model, test patterns, and hotspot detection and correction. Introduction. Figure 1 illustrates the basic elements of an optical lithography system. Illumination is modeled by the partial coherence factor σ. In partially coherent imaging, which improves the minimum resolvable pitch [1] and is a preferred imaging method, the mask is illuminated by light traveling in various directions. The smaller σ is, the higher the degree of illumination coherence. Projection, or exposure, is represented by the numerical aperture NA = n sin θ. A critical dimension (CD), usually half of the minimum pitch, corresponds to the minimum feature size attainable by a particular technology: CD = k1 · λ/NA, (1) where λ is the wavelength of the light source, and k1 (often called the k1 factor) is a measure of illumination complexity and is given by k1 = CD · NA/λ. (2) For smaller CD, one obvious option is to advance to a light source of smaller wavelength, i.e., from KrF with 248 nm to ArF with 193 nm, which is most popular now, to the even more advanced EUV with 13.5 nm. The second option is to reduce k1, which implies that a larger σ is adopted. This is achieved through off-axis illumination (OAI), which involves light sources of annular, cross-pole, or quasar shapes. The last option is higher NA: this is achieved through a medium with a higher refractive index, e.g., water (n = 1.33) instead of air (n = 1.0), or a projection lens with a larger sin θ value. The k1 factor generally decreases with technology nodes [2]. The 500 nm and 350 nm nodes are imaged with k1 > 0.65 with standard lithography. The 250 nm and 180 nm nodes approach a k1 of 0.5, and require the introduction of resolution enhancement techniques (RET). In the 130 nm and 90 nm nodes, which are well below k1 = 0.5, RETs are in widespread use. The theoretical limit of k1 is 0.27; nodes with k1 smaller than that can only be imaged through multiple patterning technology. Machine Learning for Computational Lithography. Computational lithography involves the use of computers to improve the resolution achievable through optical lithography. A key is lithography simulation, which is based on lithography models. A number of RET techniques have been introduced and used together with lithography simulation. They include optical proximity correction (OPC) adopted since 130 nm, subresolution assist feature (SRAF) since 90 nm, phase shift mask (PSM) and off-axis illumination (OAI) since 65 nm, double patterning since 20 nm, and directed self-assembly and triple patterning since 10 nm. Computational lithography has long relied on compact modeling. A resist model, for instance, captures the development process, and is given by a weighted sum of convolutions between light intensity and Gaussians.
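Returning to the resolution formula of Eq. (1), a quick worked example makes the numbers concrete. The NA value below is an assumption chosen to represent ArF immersion lithography; λ = 193 nm and the k1 limit of 0.27 come from the text:

# Worked example of Eq. (1): CD = k1 * lambda / NA.
wavelength_nm = 193.0   # ArF light source
na = 1.35               # assumed immersion NA (water, n = 1.33); not from the text
k1 = 0.27               # theoretical lower limit of the k1 factor
cd_nm = k1 * wavelength_nm / na
print(f"CD = {cd_nm:.1f} nm")  # about 38.6 nm half-pitch

With the same λ and k1 but a dry lens (say NA = 0.93), the attainable CD would be roughly 56 nm, which illustrates why higher NA and multiple patterning became necessary at advanced nodes.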
A machine learning model offers higher modeling capability than a simple polynomial, so its application provides compact modeling with higher accuracy. Many lithography applications are computationally expensive, because they are iterative and often involve lithography simulations. A well trained machine learning model provides a quick estimation of outcome, and benefits these applications. A number of lithography applications, which have been using machine learning models, are reviewed in this tutorial: mask optimization including OPC and EPC in Section 2, SRAF insertion and printability check in Section 3, lithography modeling in Section 4, test patterns in Section 5, and hotspot detection and correction in Section 6. Figure 2(a) shows a patterning process in optical lithography. A photomask pattern goes through a lithography process to form a pattern on the photoresist, which is called a resist image or aerial image. Resist development and etch follow to finally form a wafer pattern. Mask Optimization. Mask synthesis and optimization steps are illustrated in Fig. 2(b), which can be considered as a reverse process of Fig. 2(a). Ideally, the final wafer pattern should be the same as the designers' layout. The goal of etch proximity correction (EPC), also called retargeting, is to synthesize an aerial image which will yield the target wafer pattern even under a non-ideal development and etch process. The goal of optical proximity correction (OPC) then is to synthesize a mask pattern which produces the target aerial image (set by EPC) under light interference. OPC. The most popular OPC method is model-based OPC (MB-OPC). It relies on iterative mask correction and lithography simulation. Each edge of the initial mask pattern is divided into a number of segments (Fig. 3(a)), through a step called fragmentation. Each segment is individually moved by an amount called the mask bias in a correction step (Fig. 3(b)). Lithography simulation follows to estimate the contour of the aerial image, which is then compared to the target aerial image at each fragmentation point (Fig. 3(c)). The result is a number of EPE (edge placement error) values. MB-OPC is run for a given number of iterations of correction and simulation, or until the target EPE (either maximum or average) is achieved. MB-OPC is computationally expensive. With smaller feature sizes, it requires a larger runtime due to more iterations to meet a smaller EPE target and increased simulation time to model more complex light interference. The number of critical layers that should go through OPC also increases. The OPC runtime at 5 nm with 66 critical layers is 5.6× the runtime at 28 nm with 18 critical layers [3]. Fast OPC with ML Models. A number of OPC methods using machine learning models (ML-OPC) have been proposed to provide a quick OPC solution. The idea is illustrated in Fig. 4. A few features are extracted from the target segment to be corrected and its surroundings in the range of optical influence. The features are provided to the input layer of a machine learning model, which has been trained beforehand. The values propagate through the hidden layers in the network until they reach the output layer. In a classification model, the output node with value 1 yields the predicted mask bias; in a regression model, a single output node returns the mask bias value directly. This approach is very fast, e.g., more than 10 times faster than MB-OPC [4], because correction is done just once and no lithography simulations are performed.
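A minimal sketch of this inference step follows. The two-layer MLP regressor stands in for a pre-trained model; its weights and the 16 "features" are random placeholders for trained parameters and PFT/layout-density signals:

import numpy as np

rng = np.random.default_rng(0)

# Placeholder for features extracted around one target segment
# (e.g., PFT signals or local layout densities; here just random numbers).
features = rng.normal(size=16)

# Placeholder weights of a pre-trained MLP regressor (16 -> 8 -> 1).
W1, b1 = rng.normal(size=(8, 16)), np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)

def predict_mask_bias(x):
    """One forward pass: features in, predicted mask bias (nm) out."""
    h = np.maximum(W1 @ x + b1, 0.0)        # hidden layer with ReLU
    return float((W2 @ h + b2)[0])          # single regression output

# Correction is done once per segment, with no lithography simulation,
# which is why ML-OPC is fast relative to iterative MB-OPC.
bias_nm = predict_mask_bias(features)
print(f"predicted mask bias: {bias_nm:.2f} nm")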
Accuracy, however, is limited even when the model is trained well; e.g., the maximum EPE is about 4 times larger than that of MB-OPC. For practical application, ML-OPC may be considered as a generator of an initial OPC solution, which is provided to MB-OPC to deliver the final OPC result with just a few iterations. This hybrid approach is still faster than MB-OPC alone, e.g., about 3 times [4], and is being considered as an approach for commercial use [3]. Implementation Details: A key in ML-OPC implementation is the choice of features that should be extracted from a target segment. Discrete cosine transform signals have been used [5] as inputs of a simple linear regression model. Local layout densities are popular features to represent a layout, and have been used with a hierarchical Bayesian model [6]. It has been shown that using polar Fourier transform (PFT) signals can substantially reduce the number of features [7]. As will be shown in Section 4, a PFT signal, which is a convolution of a PFT basis function (or optical kernel function) with the local layout centered at the target segment, is a component in light intensity calculation and thus well represents light interference around the segment. A simple MLP, illustrated in Fig. 4, has been used as a mask bias prediction model. More recently, a number of MLP instances connected through recurrent hidden layers, which is equivalent to a recurrent neural network (RNN), have been shown to provide higher accuracy [8]. This is because a target segment is corrected while its neighbor segments are corrected together in the RNN, which better reflects the actual correction step (as in MB-OPC), as opposed to a simple MLP model where only the target segment is corrected. Another issue in efficient implementation of ML-OPC is model training. Given a set of training segments with their reference mask bias values (obtained through MB-OPC runs, for example), the goal is to train the model, i.e., to determine the network structure and network parameters such as edge weights and node biases, such that the prediction of mask bias is as accurate as possible. A key in this process is sampling training segments, because using all segments from sample layouts is a waste of time and may cause overfitting of the model toward the segments which occur more frequently. Since sample layouts may not contain all segments that may arise in the actual OPC process, generation of synthetic patterns, discussed in Section 5, may help extend the coverage of the machine learning model. EPC. During the etch process, as shown in Fig. 5, some patterns experience over-etch due to photoresist erosion, which causes a negative etch bias; some others are affected by under-etch due to excessive deposition, which causes a positive etch bias. The goal of EPC is to modify a known target wafer pattern to compensate for etch biases, i.e., to synthesize an aerial image that can yield the designers' layout on the wafer even under over- and under-etch phenomena (see Fig. 2). A key in EPC is the etch bias model. A number of test patterns are created on a wafer, and the etch bias is recorded for each pattern through CD measurement. The results may be summarized as a set of rules (in RB-EPC), e.g., an etch bias table with line width and line spacing as parameters.
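A rule table of this kind can be illustrated with a minimal lookup sketch; every bin boundary and bias value below is hypothetical, chosen only to show the mechanism:

# Hypothetical rule-based etch bias table: (line width bin, line spacing bin)
# -> etch bias in nm. Negative = over-etch, positive = under-etch.
BIAS_TABLE = {
    ("narrow", "dense"):    -3.0,
    ("narrow", "isolated"):  2.0,
    ("wide",   "dense"):    -1.5,
    ("wide",   "isolated"):  1.0,
}

def bin_width(width_nm):
    return "narrow" if width_nm < 60.0 else "wide"      # hypothetical cutoff

def bin_space(space_nm):
    return "dense" if space_nm < 100.0 else "isolated"  # hypothetical cutoff

def rb_epc_bias(width_nm, space_nm):
    """Look up the etch bias for a pattern from the rule table."""
    return BIAS_TABLE[(bin_width(width_nm), bin_space(space_nm))]

print(rb_epc_bias(45.0, 80.0))   # -> -3.0 (narrow, dense: over-etch)
print(rb_epc_bias(90.0, 150.0))  # ->  1.0 (wide, isolated: under-etch)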
The results may instead be fitted to a function of a few empirical parameters (in MB-EPC): bias(x, y) = C0 + C1 · Den + C2 · Vis + C3 · Blo + · · · , (3) where Den is the density of the layout within a density kernel region; Vis is the area of the open space that is not hidden by the edges that are neighbors of a point of interest (i.e., space beyond the nearest edge is ignored); and Blo is the area of the nearest polygon that overlaps with the blocked kernel, as shown in Fig. 6 [10]. The coefficients Ci and the number of terms in this function are determined empirically through regression. MB-EPC can deal with a greater range of patterns than RB-EPC does, but it still fails to achieve a satisfactory on-chip variation (OCV). It is estimated that OCV in 20 nm DRAM devices is still 15% of gate size after MB-EPC has been applied [10]. EPC Using ML Models. Instead of Eq. (3), machine learning may be introduced to build an etch bias model. A simple MLP has been shown to offer much smaller RMS error in etch bias prediction [9]: 9.1 nm for rule-based and 2.9 nm for model-based, but 1.9 nm with an MLP. A key is the choice of input parameters. Figure 6 indicates that local layout densities are important; in fact, they affect the quantity of etching particles and their incident angle and direction; they can be measured as illustrated in Fig. 7. Optical kernel signals are also important, since they affect the photoresist sidewall angle. Experiments indicate that using only layout densities for the MLP causes 3.5 nm error in etch bias prediction, but including additional optical kernel signals brings the error down to 1.9 nm [9]. It is also noted that a regression network (an MLP with a single output node reporting the etch bias) is the better choice for a smaller etch bias range, when the etch process is weak; a classification network (an MLP with multiple decision output nodes, each associated with a small range of etch bias) is well suited for a larger etch bias with a strong etch process. Once the etch bias model is set up, actual correction can be performed through iteration as illustrated in Fig. 8: an initial aerial image is assumed (L1), the etch bias model is applied (L3) to estimate the wafer image (L4), which is compared to the target image (L5), and the process is repeated with some (systematic) modification applied to the aerial image (L7); a sketch of this loop is given after the following overview of assist features. Assist Features. An assist feature, also called a sub-resolution assist feature (SRAF) or scattering bar, is a key resolution enhancement technique (RET) in low-k1 lithography. SRAFs are extra patterns added to the mask, not intended to be printed on the wafer, which help nearby main patterns to be printed with higher fidelity and improved process window. Intuitively, sparse lines exhibit broader linewidth variations than dense lines, so adding assist features to both sides of sparse lines creates a dense environment [1]. SRAFs are usually inserted before OPC and are refined while main patterns are corrected through OPC. Rule-based methods (RB-SRAF) have been used since the 90 nm node. A few empirical rules are established (e.g., the number of assist wires for each distance between adjacent metal wires) and are applied, either manually or automatically. They require a long development time and are limited to simple assist features. Model-based methods (MB-SRAF) have been widely used since the 20 nm node. They rely on repeated SRAF insertion and its evaluation through lithography simulations; they are accurate but very time consuming.
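As promised above, here is a minimal runnable sketch of the Fig. 8 correction loop. The one-dimensional CD values, the linear stand-in bias model, and the 0.5 damping factor are hypothetical simplifications of a real 2D contour-based flow:

# Minimal sketch of the EPC iteration of Fig. 8 (all models are stand-ins).
def etch_bias_model(aerial):             # L3: stand-in etch bias model (nm)
    return 0.3 * (aerial - 50.0)         # hypothetical linear response

target = 45.0                            # L5 reference: target CD (nm)
aerial = 50.0                            # L1: initial aerial-image CD (nm)

for step in range(20):
    wafer = aerial + etch_bias_model(aerial)  # L3-L4: estimated wafer CD
    error = wafer - target                    # L5: compare to target
    if abs(error) < 0.01:
        break
    aerial -= 0.5 * error                     # L7: adjust the aerial image

print(f"converged aerial CD = {aerial:.2f} nm after {step + 1} steps")

The loop retargets the aerial image until the modeled wafer pattern matches the designers' intent; in practice the adjustment is applied per edge segment rather than to a single scalar CD.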
Inverse lithography technology (ILT) is a more advanced method that explicitly solves the mask optimization problem; it is even more computationally expensive.

SRAF Insertion Using Guidance Map

A faster MB-SRAF has been proposed [11]. A key concept is the light interference map (LIM), in which a layout is overlaid with an array of values indicating the potential amount of light interference. The LIM of a single contact c is illustrated in Fig. 9(a). Rc is a region with larger values; if some patterns are introduced in that region, the light intensity at c increases due to constructive interference. The opposite holds in the Rd region, where values are smaller due to destructive interference. The LIM in Fig. 9(a) is obtained through repeated lithography simulations while a small pattern is moved around c. The LIM of a number of contacts is simply a superposition of their LIMs, as shown in Fig. 9(b). The LIM is convenient for SRAF insertion, as illustrated in Fig. 10:
(1) Binary versions of the LIM are obtained by using different threshold values, as shown in (b): a value in the binary LIM is 1 if the value in the original LIM is larger than the threshold, and 0 otherwise.
(2) Each binary LIM is discretized for easier SRAF insertion, as shown in (c).
(3) One binary LIM (e.g., the 5th one) is picked for the initial SRAF insertion.
(4) A lithography simulation is performed on the initial SRAF insertion, shown in (d), in which the red contours are the lithography images of the contacts as well as of the SRAFs.
(5) If an SRAF is associated with a lithography image (its light intensity exceeds 100% of the image threshold), it will be patterned; it is therefore replaced by its smaller version in the binary LIM one level above. If the intensity of an SRAF is below 80% of the image threshold (so that it is not associated with a lithography image), it is replaced by its bigger version in the binary LIM one level below.
Steps (4) and (5) are repeated until the intensity of every SRAF lies between 80% and 100% of the image threshold. At most five lithography simulations are required, which is usually much smaller than the number associated with standard MB-SRAF.

An SRAF guidance map (SGM) has also been proposed for fast MB-SRAF [12]. Its concept is similar to the LIM: the SGM value indicates the sensitivity of process window improvement for the desired pattern.
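A minimal sketch of the refinement loop in steps (4) and (5) follows; `simulate_intensity` is a hypothetical stand-in for a lithography simulator, and the level index encodes which binary LIM (and hence which SRAF size) is in use.

```python
def refine_srafs(srafs, simulate_intensity, image_threshold, max_iter=5):
    """Resize SRAFs until every peak intensity is in [0.8, 1.0) of the
    image threshold; a larger 'level' index means a smaller SRAF here."""
    for _ in range(max_iter):                   # at most five simulations
        stable = True
        for sraf in srafs:
            peak = simulate_intensity(sraf)     # hypothetical simulator call
            if peak >= image_threshold:         # SRAF would print: shrink it
                sraf["level"] += 1              # binary LIM one level above
                stable = False
            elif peak < 0.8 * image_threshold:  # too weak: enlarge it
                sraf["level"] -= 1              # binary LIM one level below
                stable = False
        if stable:
            break
    return srafs

# Toy usage: peak intensity falls as the level index (smaller SRAF) grows.
srafs = [{"level": 2}, {"level": 5}]
print(refine_srafs(srafs, lambda s: 1.2 / (1 + s["level"]), 0.3))
```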
SRAF Insertion Using ML Models

A deep CNN (convolutional neural network) has been applied for quick prediction of the SRAF guidance map [13]. The runtime of SRAF insertion is reported to drop to 1/7 of that of a standard insertion method using a CTM (continuous transmission mask), which is conceptually similar to the SGM. The method has also been applied to ILT: standard ILT involves 100 iterations of CTM refinement followed by 50 iterations of the actual ILT process, and a deep CNN allows a quick estimation of the CTM without iterations, followed by the 50 ILT iterations, reducing the runtime by about 34%. Data augmentation is important for extending the coverage of the training data; flipping, rotation, and translation are applied to the initial training data for this purpose. Synthesis of test patterns, covered in Section 5.2, is another possibility.

SRAF Printability Check

SRAFs are not intended to be printed on the wafer; patterned SRAFs are considered defects and become yield detractors. A printability check is thus an essential component of SRAF insertion. Lithography simulation is usually performed assuming the bottom of the resist height, as when main patterns are simulated. A more pessimistic approach is necessary when simulating SRAF patterns, because a miss (an SRAF predicted as non-printing that actually prints) should be avoided more strictly than a false alarm (an SRAF predicted as printing that actually does not print). One approach is to assume an over-exposure condition while keeping the same bottom-of-resist height for SRAFs; another is to assume the top (or near the top) of the resist height with the nominal exposure condition [14]. A machine learning approach has also been studied [15]: for each candidate pixel of an SRAF, a machine learning model predicts its printability. A simple MLP network has been tried with PFT signals and local layout densities (see Section 2) as network inputs, and the printability of an SRAF is determined from the printability of its member pixels. Since SRAFs do not print most of the time, balancing the numbers of printed and non-printed reference SRAFs for model training is important.

Lithography Modeling

Lithography simulation is the foundation of computational lithography. It is based on a lithography model, illustrated in Fig. 11, which describes the response of the photoresist to exposure and development. Exposure is captured by the optical model, in which the image intensity is the weighted sum of convolutions between the mask image M(x, y) and optical kernel functions φi:

I(x, y) = Σi λi |(M ⊗ φi)(x, y)|²,   (4)

where λi is a weight value. This is called the sum of coherent systems (SOCS) approximation [16]. Development is captured by the resist model, in which the image intensity is modulated by a resist process formula and the result is compared to a threshold value. Specifically, photoresist processing involves post-exposure bake (PEB) and resist development. The solution to the differential equations for PEB modeling (reaction and diffusion, i.e., the quenching and diffusion of acid) is given by a sum of convolutions of I(x, y) with Gaussians Gi [17]:

R(x, y) = Σi Ci (I ⊗ Gi)(x, y).   (5)

Development is modeled by comparing Eq. (5) to a threshold value, either constant or variable. In the popular variable threshold model [18], the threshold is a function of the maximum intensity, minimum intensity, and intensity slope in the local region of interest:

T(x, y) = C0 + C1 Imax + C2 Imin + C3 Islp + C4 I²max + · · · .   (6)

The weights in Eqs. (5) and (6) are determined through a calibration process: test patterns are prepared, and calibration minimizes the error of the resist model, where the error may be measured as the difference in CD values between simulated resist patterns and the corresponding patterns from actual measurement or from rigorous simulation. The choice of test patterns is thus very important; it is covered in Section 5.

ML for Resist Model

A number of options can be considered [19] for applying machine learning techniques to the lithography model. The polynomial (6) for the variable threshold may be replaced by a machine learning model; a CNN has been used [20] for this purpose, taking the aerial image of a small clip as input and producing the corresponding intensity threshold as output. Experiments indicate that the RMS error in predicted CD is about 5.5 nm with the variable threshold model, while the CNN-based method causes only 1.6 nm of error. The polynomial for the resist model may likewise be replaced by a machine learning model [19]: each convolution term in Eq. (5) becomes one input of the machine learning model, and the output is the R(x, y) value.
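As a minimal illustration of this last idea, the snippet below builds the convolution terms of Eq. (5) as features and fits their weights by regression. A plain linear model stands in here for the more general learned model described in the text, and the reference resist response is synthetic.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)

# Toy aerial image I(x, y); each Gaussian-blurred version, i.e., each
# convolution term I (*) G_i in Eq. (5), becomes one model input.
I = rng.uniform(0.0, 1.0, size=(64, 64))
sigmas = (1.0, 2.0, 4.0, 8.0)
features = np.stack([gaussian_filter(I, s).ravel() for s in sigmas], axis=1)

# Stand-in reference resist response R(x, y); in practice this would come
# from measurement or rigorous simulation of calibration patterns.
R_ref = (0.5 * features[:, 0] + 0.3 * features[:, 2]
         + rng.normal(0, 0.01, features.shape[0]))

model = Ridge(alpha=1e-3).fit(features, R_ref)
print(model.coef_.round(3))  # fitted weights of the convolution terms
```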
Experiments with a 10 nm M1 layer demonstrate a drop in RMS error from 2.07 nm (when the CM1 compact model [21], in which the threshold is constant, is used) to 1.31 nm. The resist model and the threshold model in Fig. 11 may also be merged into a single machine learning model: a CNN has been used to predict CD directly from the input aerial image [19], with good accuracy, but the parameters involved are too many and difficult to optimize. Another option is to use fully convolutional networks to predict resist contours from the input aerial image [19]; in this application, the reference resist contours (often extracted from SEM images) have to be of very high quality [22].

Estimation of 3D Resist Profile

The standard resist model assumes a two-dimensional space, but the resist structure is often associated with non-vertical sidewalls. This may cause non-ideal resist profiles, e.g., footing, T-topping, and top-less profiles, so accurate prediction of the 3D resist profile is important. Rigorous simulation can be performed to predict the 3D resist profile, as illustrated in Fig. 12, but it is too time consuming. A simple MLP network with local layout densities (as shown in Fig. 7) and optical kernel signals as inputs produces accurate predictions of the resist height [23]; a similar network may be trained to predict whether resist will remain after the etch process.

ML for Lithography Model

A CVNN (complex-valued neural network) has been applied to both the optical and the resist model [24]. The frequency components are limited, so a small amount of training data can produce a model of higher accuracy, even though extra Fourier- and inverse-Fourier-transform steps are involved. A CGAN (conditional generative adversarial network) model has been tried for obtaining the wafer image directly from the input mask image [25]; it has been applied to contact and via patterns, with an extra CNN applied to adjust the center of each pattern for higher accuracy.

Test Patterns

Comprehensive test patterns are important for a number of lithography applications, including source mask optimization (SMO), building a hotspot library, exploring design rules, calibrating lithography models and, more recently, training machine learning models for lithography applications. Two types of test patterns are popular: parametric and actual. Parametric patterns are represented by a few geometric parameters such as line width and space, as illustrated in Fig. 13; they are easy to build and analyze but cannot cover complex patterns. Actual patterns are extracted from sample layouts and can cover more random shapes, but some similar shapes may occur too frequently while others are not really important. Thus the extraction of important shapes, e.g., extracting hotspot patterns [26], and their classification are important.

Classification

Once test patterns are assembled, they are classified into a small number of groups based on geometric similarity or some other similarity measure of interest. One or more representative patterns are then identified from each group, and the collection of such representative patterns forms the final set of test patterns. A classical method of classification is area-based pattern matching, shown in Fig. 14(a); the percentage of overlap between two patterns is used as the similarity measure. In some applications, not all shapes are equally important: for hotspot patterns, for example, the hotspot in the center is important, and shapes closer to the hotspot have a greater impact than those far from it.
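A minimal sketch of area-based matching on rasterized clips is shown below; the optional weight map anticipates the coherence-based weighting discussed next. The clip arrays are hypothetical.

```python
import numpy as np

def overlap_similarity(a, b, weight=None):
    """Area-based pattern matching: fraction of overlapping pattern area.
    `a` and `b` are binary rasters of two clips; `weight` optionally
    emphasizes regions near the clip center (e.g., a hotspot)."""
    if weight is None:
        weight = np.ones_like(a, dtype=float)
    inter = np.sum(weight * (a & b))
    union = np.sum(weight * (a | b))
    return inter / union if union > 0 else 1.0

a = np.zeros((32, 32), dtype=bool); a[8:24, 10:14] = True
b = np.zeros((32, 32), dtype=bool); b[8:24, 12:16] = True
print(overlap_similarity(a, b))  # two shifted bars: low overlap score
```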
The pattern may accordingly be weighted by the square of the complex degree of coherence [27], |μ(x, y)|², before pattern matching is performed. These strict pattern matching methods do not capture the similarity when one pattern is a shifted (or rotated, or reflected) version of the other. This can be alleviated through pattern matching in the Fourier domain [26]; the two patterns in Fig. 14(a), which overlap by just 10%, are very similar in the frequency domain, as illustrated in Fig. 14(b).

A layout pattern, which is in Manhattan geometry, can be represented through a Hanan grid, also called a squish pattern [28]. As shown in Fig. 15, scan lines are drawn from the extensions of all polygon edges, dividing the pattern into a grid of non-regular intervals. A binary matrix, in which a 1 indicates a region occupied by the pattern, together with lists of the grid widths and grid heights, represents the pattern. Two patterns are considered similar if their corresponding matrices are the same and their grid sizes differ only within some tolerance range.

In many lithography applications, classification through strict pattern matching is inefficient and unnecessary. In lithography modeling, for instance, an image parameter space (IPS) [29] consisting of the intensity slope (Islp) at the pattern edge together with the maximum intensity (Imax) and minimum intensity (Imin) in its close proximity is a popular way to define a parameter space. The IPS may be extended with more parameters to define a higher-dimensional parameter space, e.g., the IPS sensitivity to geometry perturbation [30] (∂Imax/∂M, where M is the line width). Since the choice of parameter space is an engineering decision, an alternative is to introduce machine learning to extract a number of parameters [31] and use them to define the parameter space. Once test patterns are mapped into the IPS space, any clustering algorithm can be applied for classification: partitioning methods (such as K-means) or hierarchical methods (such as complete-link). The parameter space is also convenient for analyzing test pattern coverage.
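The clustering step can be sketched as follows, assuming each test pattern has already been mapped to hypothetical (Imax, Imin, Islp) coordinates; K-means is used here, but any of the methods named above would fit the same interface.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Hypothetical IPS coordinates for 500 test patterns:
# columns are (Imax, Imin, Islp) measured at the pattern edge.
ips = np.column_stack([
    rng.uniform(0.25, 0.45, 500),   # Imax
    rng.uniform(0.05, 0.20, 500),   # Imin
    rng.uniform(0.5, 3.0, 500),     # intensity slope
])

km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(ips)

# One representative per group: the pattern closest to each centroid.
reps = [np.argmin(np.linalg.norm(ips - c, axis=1)) for c in km.cluster_centers_]
print(sorted(reps))
```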
Synthesis of Test Patterns

Pattern coverage through parametric patterns or actual patterns is always limited. This can be alleviated through automatic synthesis of test patterns, and machine learning approaches have been applied for this purpose. A transforming autoencoder is a type of machine learning model suited to image translation, e.g., generating a shifted image [32]. Its key component, called a capsule, consists of recognition units and generation units. A recognition unit is made of a few convolution layers and fully connected layers that identify the input image and generate a vector characterizing it, called the latent vector. A generation unit is made of fully connected layers followed by deconvolution layers; its function is the opposite of the recognition unit's. The idea behind using a transforming autoencoder is to systematically alter the latent vector so that the output pattern is slightly different from the input pattern [33]. The pattern is represented as a squish pattern, a matrix identifying the 2-dimensional topology as shown in Fig. 15, so pattern synthesis is realized by altering the matrix entries.

A more general approach to test pattern synthesis has been proposed [34]. A layout clip is represented by a few low-frequency discrete cosine transform (DCT) signals; a layout often contains repeated patterns, and the DCT captures such repetition with a small number of frequency signals. Random DCT signals are generated using the GAN model shown in Fig. 16(a); the block generator is trained beforehand so that its output is difficult to discriminate from the DCT signals of the clips used for training, i.e., the generated DCT signals are realistic and close to those of the training clips. The output DCT signals are fed to an inverse DCT process to yield the corresponding clip image, which is blurred since only low-frequency components are contained in the DCT signals. The clip is then sharpened using the CGAN model shown in Fig. 16(b), which is likewise trained so that the sharpened clip is difficult to discriminate from the clips used for training. The approach has been applied to resist modeling: when 1,000 parametric patterns are used, the RMSE (root mean square error) of the CD values is 5.11 nm; when they are replaced by 500 parametric patterns, 250 actual patterns, and 250 synthesized ones, the CD RMSE becomes only 2.88 nm thanks to the wider coverage of the test patterns.

Hotspot

Hotspot patterns are those that may cause defects such as bridging, necking, and line-end shortening. They can be identified through the process variation band (PVB); Fig. 17 shows an example. The lithography process is under the influence of key parameters: scanner focus, exposure energy, and mask manufacturing error. To account for parameter variations, lithography simulation may be repeated with each parameter set to its mean or ±3σ value; the set of 27 resulting contours is the PVB. When two patterns are too close, their PVBs become thicker, so the minimum distance between the PVBs gets smaller, as illustrated in Fig. 17; this may cause bridging. Hotspots may also be defined in a probabilistic fashion [35]. Since the repeated lithography simulations needed to obtain PVBs and detect hotspots are time consuming, a practical approach is to build a library of hotspot patterns beforehand through pattern classification and then apply pattern matching to narrow down the regions that require actual lithography simulation.

ML for Hotspot Detection and Correction

Hotspot detection using a CNN has been proposed [36]. Since hotspots are sparse, it is important to augment the sample hotspot patterns so that the CNN is well trained. Simple flipping and rotation have been tried for this purpose, though they will not be enough for diverse patterns. Automatic correction of hotspots using a cycleGAN (cycle-consistent GAN [37]) has been proposed [38]. Let X be a set of sample hotspot patterns and Y a set of coldspot patterns, as illustrated in Fig. 18. Hotspot and coldspot patterns are not paired, i.e., Y does not necessarily contain the corrected version of a hotspot from X, which is the key advantage of using a cycleGAN. The goal is to learn the mappings G : X → Y and F : Y → X, together with two discriminators DX and DY. The objectives of the learning process are: (1) G tries to generate a pattern G(x ∈ X) that looks like a coldspot pattern in Y, against DY, which aims to distinguish G(x) from y ∈ Y as well as possible; a similar objective applies to F and DX. (2) For each hotspot pattern x, G and F together should reproduce a pattern similar to x, i.e., F(G(x)) ≈ x; similarly, G(F(y)) ≈ y is required for each coldspot pattern y.
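A minimal sketch of these two objectives is given below, assuming LSGAN-style adversarial terms (an assumption; the cited work may use different loss functions) and treating G, F, DX and DY as already-constructed networks.

```python
import torch
import torch.nn.functional as nnf

def cyclegan_generator_loss(G, F, D_X, D_Y, x, y, lam=10.0):
    """Objective (1): adversarial terms push G(x) toward coldspots and
    F(y) toward hotspots. Objective (2): cycle-consistency terms enforce
    F(G(x)) ~ x and G(F(y)) ~ y on the unpaired sets."""
    fake_y, fake_x = G(x), F(y)
    adv = (nnf.mse_loss(D_Y(fake_y), torch.ones_like(D_Y(fake_y))) +
           nnf.mse_loss(D_X(fake_x), torch.ones_like(D_X(fake_x))))
    cyc = nnf.l1_loss(F(fake_y), x) + nnf.l1_loss(G(fake_x), y)
    return adv + lam * cyc
```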
Experiments demonstrate efficient correction of one or more hotspots of various kinds (tip-to-tip, tip-to-bar, pitch variation, density of neighboring patterns, etc.).

Conclusions

A number of computational lithography applications that employ machine learning models have been reviewed. Practical success so far has been observed in OPC and lithography modeling. OPC using machine learning can provide a good initial OPC solution, which greatly helps reduce MB-OPC runtime. Lithography modeling has long relied on empirical compact models; machine learning provides more expressive modeling capability, which helps improve model accuracy. SRAFs (insertion and printability check) and hotspot patterns (detection and correction) are also popular applications of machine learning. Preparation of sample training data is a challenge in these applications: printed SRAF samples are obtained from scanning electron microscope (SEM) images, which must be carefully captured, measured, and classified [39], and hotspot patterns as well as printed SRAF samples are often scarce, even though the success of a machine learning model relies heavily on the coverage of its training samples. Test patterns (extraction, classification, and synthesis) are thus an important topic that needs further study and development.
2021-04-17T13:13:24.492Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "dd521620cce88b4336d900a1572351e8bffcdc6a", "oa_license": null, "oa_url": "https://www.jstage.jst.go.jp/article/ipsjtsldm/14/0/14_2/_pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "8fbbc5a8e4abf04aca3e227bf056c369f52abaa5", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
221917910
pes2o/s2orc
v3-fos-license
Defluorosilylation of Trifluoromethane: Upgrading an Environmentally Damaging Fluorocarbon

The rapid, room-temperature defluorosilylation of trifluoromethane, a highly potent greenhouse gas, has been achieved using a simple silyl lithium reagent. An extensive computational mechanistic analysis provides a viable reaction pathway and demonstrates the unexpected electrophilic nature of LiCF3. The reaction generates a bench-stable fluorinated building block that shows promise as an easy-to-use difluoromethylating agent. The difluoromethyl group is an increasingly important bioisostere in active pharmaceutical ingredients, and therefore our methodology creates value from waste. The potential scalability of the process has been demonstrated by achieving the reaction on a gram scale.

Despite being widely employed as refrigerants, hydrofluorocarbons (HFCs) are potent greenhouse gases and important contributors to global warming.1,2 The threat posed by HFCs was highlighted by a recent amendment to the Montreal Protocol seeking to reduce HFCs by >80% by 2050.3 Trifluoromethane (HCF3, HFC-23) has a global warming potential 11,700 times greater than that of CO2 and an atmospheric lifetime of 264 years.4 It is produced on a vast scale (ca. 20 kilotons per year) as a by-product of a range of industrial processes, such as the manufacture of PTFE (Teflon) and refrigerant gases (e.g., ClCF2H).2,5 Despite its widespread production, there is currently little application for trifluoromethane. Consequently, it is either stored or destroyed at high cost to prevent its release into the environment.2,4

In the pharmaceutical industry, fluorine substitution is commonly used to improve drug efficiency and quality by enhancing the metabolic stability and overall bioavailability of a drug.6,7 There is particular and growing interest in the use of the difluoromethyl (CF2H) group in drug design, where it is considered a lipophilic bioisostere of the hydroxyl, thiol and amine groups.8,9 The CF2H moiety is already present in various commercialised pharmaceuticals such as Eflornithine and Pantoprazole (Figure 1b).10,11 The growth in interest in CF2H installation has created a growing demand for an easy-to-use, mild difluoromethylating agent.12,13

We postulated the potential environmental and economic benefit of using trifluoromethane as a feedstock gas for the synthesis of a valuable difluoromethyl building block, by developing a process to transform the C-F bond into a reactive C-Si bond. Much progress has been made in the field of upgrading fluorocarbons into reactive building blocks, particularly with the use of nucleophilic main group reagents.1,14,15 Fluoroalkanes remain the least reactive substrates due to high sp3 C-F bond dissociation energies and a lack of charge stabilisation in the bond-breaking transition state.1,16 Despite this, our group has in recent years demonstrated the C-F activation of simple fluoroalkanes using aluminium and magnesium nucleophiles.17,18 Furthermore, the groups of Shibata and Martin have both reported C-F activation of a range of fluoroalkanes using group 1 metal silyl nucleophiles.19,20 We also recently reported the defluorosilylation of industrially relevant hydrofluoroolefins (HFOs) with simple silyl lithium reagents,21 and in this work we sought to extend the methodology to HFCs. Trifluoromethane itself has very limited synthetic use, stemming from its low boiling point (-83 °C) and its relatively acidic C-H bond (pKa ~25 in H2O).
The CF3− anion generated from deprotonation can decompose into difluorocarbene (:CF2) and a fluoride anion (F−),2,5 although under appropriate conditions it has been utilised in trifluoromethylation reactions (Figure 1a).2,22-28 Mikami and co-workers used a highly nucleophilic boryl lithium reagent to demonstrate the defluoroborylation of HCF3 to form an organoboron building block.26 While a mechanistic study was not carried out for this system, a related computational study by Mikami on the α-difluoromethylation of lithium enolates was used to propose a pathway for the defluoroborylation.24 The authors suggest that initial deprotonation of HCF3 occurs to form LiCF3, before C-F cleavage proceeds via an SN2-type attack by the nucleophilic boryl lithium at LiCF3 in a bimetallic transition state.24 While an important discovery, any application of the defluoroborylation methodology is limited by issues of scalability: the boryl lithium reagent is extremely difficult to synthesise and is highly susceptible to degradation; in fact, it could only be synthesised in situ and required a temperature of -78 °C. The organoboron building block was reported as bench stable, but its utility is unknown.26

In this paper, we report the rapid, room-temperature defluorosilylation of trifluoromethane using a simple silyl lithium reagent to form a promising difluoromethyl organosilicon building block. This methodology offers the potential to recycle a highly abundant, low-value fluorocarbon, minimising waste and environmental damage, to create a pharmaceutically relevant building block of high value.

Trifluoromethane (1 bar, 22 °C) was added to a C6D6 solution of the silyl lithium reagent PhMe2SiLi•PMDETA (1•PMDETA) (PMDETA = pentamethyldiethylenetriamine), and the building block PhMe2SiCF2H (2) was formed in 90% spectroscopic yield (Figure 2). PhMe2SiH was also formed as a by-product in 10% yield. The optimum concentration of 1•PMDETA was found to be 0.02 M, which results in approximately 7.5 equivalents of HCF3 being added to the headspace of the reaction vessel. The yield of 2 was found to decrease with increasing concentration of 1•PMDETA (and a consequently decreasing equivalence of HCF3). It was also found that the PMDETA ligand was crucial to the reaction, with the alternative THF (1•THF) and TMEDA (1•TMEDA) (TMEDA = tetramethylethylenediamine) adducts resulting in no formation of the desired product 2. The structures of the silyl lithium nucleophiles 1•PMDETA and 1•TMEDA have previously been reported.21,29 A solvent scope showed that polar solvents such as THF were detrimental to the yield of 2, whilst low reaction temperatures (-78 °C) altered the reaction pathway to form undesired products (see ESI for full details of reaction optimisation).

After achieving the defluorosilylation of HCF3 in >90% yield on an NMR scale, we sought to demonstrate the potential scalability of this methodology, and were able to achieve the transformation on a gram scale of 1•PMDETA. The product PhMe2SiCF2H (2) was successfully isolated after work-up in 68% yield.
In order to probe the mechanism, we set out to perform a kinetic analysis of the reaction by NMR spectroscopy. Unfortunately, we were unable to achieve this at room temperature, as the reaction goes to completion within 5 minutes (as shown by 1H NMR spectroscopy). Efforts to obtain an analysis of the reaction at low temperature (-78 °C) were thwarted by a change in reaction selectivity, where the desired product 2 was produced in only 8% yield, with PhMe2SiH and H2CF2 instead formed as the major products (see ESI for full details).

An extensive DFT study was carried out to explore the mechanism (Figure 3). Our calculations support a mechanism similar to that proposed by Mikami and co-workers for the defluoroborylation of trifluoromethane.24,26 The first step is deprotonation of HCF3 by 1•PMDETA, proceeding via TS1 (∆G1‡ = 20.5 kcal mol−1), to form PMDETA•LiCF3 and the experimentally observed by-product PhMe2SiH. The rate-determining C-F activation step then occurs by an SN2-like attack by a further equivalent of 1•PMDETA at the PMDETA•LiCF3 carbenoid, proceeding via TS2 (∆G2‡ = 23.4 kcal mol−1). In this transition state, one lithium cation stabilises the fluoride leaving group, and the other stabilises the carbenoid carbon, acting as an anchor for C-Si bond formation. It has been suggested that strong Li•••F interactions are crucial for stabilising similar transition states.23,24 TS2 is a concerted, albeit highly asynchronous, transition state involving early C-F cleavage with concomitant LiF formation and late C-Si bond formation. Finally, PhMe2SiCF2Li•PMDETA undergoes protonation by a further equivalent of HCF3 to give the desired product 2 and PMDETA•LiCF3, via TS3 (∆G3‡ = 18.7 kcal mol−1). The reaction is therefore proposed to be catalytic in LiCF3 (Figure 4). All three steps are exergonic processes.

NBO analysis was carried out to elucidate the nature of the transition states (see ESI for full details). The NPA charge on the carbenoid carbon of PMDETA•LiCF3 in INT3 (and subsequently INT4 and TS2) was found to be positive, despite this species commonly being viewed as a carbanion (Figure 5). This is due to the strong electron-withdrawing effect of the three fluorine atoms, and has been noted in previous calculations on LiCF3.30 The positive NPA charge explains the electrophilic nature of PMDETA•LiCF3 and hence why it is attacked by the silicon nucleophile. Notably, the positive NPA charge on the carbenoid carbon increases from INT4 (+0.55) to TS2 (+0.64), suggesting an accumulation of positive charge approaching the transition state. This is consistent with the asynchronous nature of TS2, where C-F cleavage occurs prior to C-Si formation. Second-order perturbation analysis of TS2 suggests there is a small donation of electron density from a Si lone pair to a vacant p orbital of the carbenoid carbon (≈6 kcal mol−1). We therefore suggest that TS2 possesses some SN1-like character, although it is overall considered a highly asynchronous SN2-like step, as it is concerted in C-F cleavage and C-Si formation. The geometry of TS2 is somewhat similar to the transition state proposed by Mikami for the attack of a THF-stabilised lithium enolate on LiCF3.24

Alternative mechanisms were explored by DFT calculations and ruled out on the basis of the identified transition states being prohibitively high in energy. A classical, direct SN2 attack by 1•PMDETA at HCF3 was calculated to proceed via TS4 (see ESI) (∆G4‡ = 53.8 kcal mol−1). A 'frontside SN2' approach was also considered, as this mechanism has been proposed to operate with highly fluorophilic nucleophiles,18,31 and the high-energy TS5 (see ESI) (∆G5‡ = 44.7 kcal mol−1) was found. We were unable to find a transition state for difluorocarbene formation from PMDETA•LiCF3.

The experimental observation of PhMe2SiH as a reaction by-product is consistent with deprotonation of HCF3 as the first step of the reaction to form LiCF3. It has been reported that LiCF3 can decompose to form LiF and :CF2.2,27,28 There was no evidence for the presence of difluorocarbene (:CF2) from the several carbene trapping experiments that were carried out (see ESI for full experimental details). While these results cannot rule out a carbene mechanism entirely, they strongly suggest, in combination with the results from DFT, that a carbene pathway is not operating.

The difluoromethyl building block 2 has already been applied as an easy-to-use reagent for the installation of the CF2H moiety in carbonyl substrates.32,33 Its use is somewhat scarce, however, and this could be due to the difficulty or cost of its synthesis (it requires the now-banned substance HCF2Cl).34,35 Our methodology provides a simple, gram-scale synthesis of this promising difluoromethylating agent, which we believe could lead to an increase in the use of the difluoromethyl group in new pharmaceutical and agrochemical products.

Figure 5: Calculated structures for the stationary points of the C-F activation step, annotated with relevant NPA charges and Wiberg Bond Indices for C-F cleavage and C-Si formation.
2020-08-20T10:01:18.154Z
2020-08-18T00:00:00.000
{ "year": 2020, "sha1": "ab2e0f31a3e9863c9aa2860adb440cfe3e0a41d7", "oa_license": "CCBYNC", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2020/cc/d0cc04592f", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "9b9d9359c48bd8d4d0d8526148a998206876a3b0", "s2fieldsofstudy": [ "Chemistry", "Environmental Science" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
31221553
pes2o/s2orc
v3-fos-license
Activity Patterns of South American Silver Catfish (Rhamdia quelen)

The South American silver catfish (Rhamdia quelen) is a widely distributed species in Central and South America, in areas east of the Andes between Venezuela and the northern parts of Argentina. The bottom-dwelling species occurs in lakes and reservoirs as well as in rivers. Between June 2000 and December 2001, sixteen silver catfish were tracked during fourteen 24-h cycles at two-hour intervals, with the aim of investigating daily movements and habitat use. Covered distances varied between 0 m/2 h and 326 m/2 h, and the mean distance covered in 2 h was 25.6 m. The mean activity of individual silver catfish varied between 5.6 m/2 h and 81.4 m/2 h. Swimming activity was linearly related to the total fish length. The highest mean swimming activity occurred in the morning and at nightfall. Silver catfish concentrated in three areas of frequent use, all of which were characterized by steep banks providing shelter in the form of rip-rap or large woody debris. Vertically, silver catfish preferred the upper 2 m layer, where the tracked fish encountered higher temperatures and higher dissolved oxygen concentrations.

INTRODUCTION

Many aspects of the life history of even the best-known South American migrating fish species are unknown. Some conventional tagging studies have revealed the extreme mobility of species like dourado (Salminus brasiliensis) or curimata (Prochilodus platensis): in the Uruguay River, dourados were recaptured at a distance of 850 km from the point of release, and curimata at 620 km (Delfino & Baigun, 1985). Telemetry studies are extremely rare, due to infrastructural, funding and safety restrictions in South American countries. Until now, the results of only two telemetry studies have been published in indexed journals (Mochek et al., 1991; Morais & Raffray, 1999), both of which were carried out in reservoirs. Several studies are underway in the São Francisco, Uruguay, Paraná and Rio dos Sinos rivers; however, published results are not available.

The South American silver catfish (Rhamdia quelen; popular name: jundiá) is a widely distributed species in Central and South America in areas east of the Andes between Venezuela and the northern parts of Argentina (Silvergrip, 1996). It is a bottom dweller that occurs in lakes and reservoirs as well as in rivers. Gomes et al. (2000) reported different growth between the sexes: the asymptotic length L∞ of males was 52.0 cm and that of females 66.5 cm, with a maximum weight of 3 kg. The minimum length for sport fishery in the Uruguay River is 30 cm (CARU, 2000). Recently, R. quelen was quoted as a candidate for aquaculture, substituting exotic species like common carp (Cyprinus carpio) or tilapia, Oreochromis niloticus (Baldisserotto, 2003).

The few existing publications describe jundiá as omni- or carnivorous (Gomes et al., 2000); the most frequent food items are fish and crustaceans, and feeding intensity increases in autumn and winter (Meurer & Zaniboni Filho, 1997). Several studies describe jundiá as a predominantly nocturnal species (Winemiller, 1989; Gomes et al., 2000). During the day, R. quelen was observed under cover in areas of large woody debris or undercut banks (Casatti & Castro, 1998).
In the upper Uruguay River, spawning activities were most frequent during spring and tended to continue throughout the year with less intensity (Cassini, 1998). Premature fish performed a lateral migration into the affluents of the main river stem, where final gonadal maturation occurred. The lateral migrations seemed to be triggered by an increase in water temperature and by flood events (Zaniboni Filho & Schulz, 2003).

The present study was part of a radiotelemetry training program for university students. The objective was to investigate:
• daily movement patterns; and
• habitat use of jundiá in a small reservoir.

MATERIAL AND METHODS

The study was performed in the reservoir of the Universidade do Vale do Rio dos Sinos near Porto Alegre, in Brazil's southernmost state (29° 47.75' S and 51° 09.47' W). The total area was 2.7 ha, with extended shallow zones (28% < 1 m) and a maximum depth of 5.8 m. The riparian vegetation consisted of grass, bush and tree sections. The bathymetry of the reservoir was mapped before the beginning of the tracking experiments.

The jundiás were captured with cast nets or by electrical fishing (700 V unpulsed direct current, max. 4 A; EFKO, Germany). The fish were anesthetized (2-phenoxyethanol, 350 mg/l) and the transmitter was implanted surgically into the peritoneal cavity, according to the procedure described in Adams et al. (1998), with the antenna protruding through the body wall about 1.5 cm posterior to the lateral incision. The antenna was conducted through the body wall by the shielded-needle method with an intravenous catheter (Nipro Medical Ltda; size 16G x 2). The incisions were closed with three stitches of non-absorbable monofilament suture (Ethicon Ethipoint SC-20). The transmitters were standard 10-28 models (Advanced Telemetry Systems, Inc., U.S.A.) with a 90-day life span. The transmitter weight never exceeded 2% of the body weight of the fish. The surgery was performed in five to eight minutes. Previous tests with dummy implants did not reveal negative effects on tagged individuals (Schulz, 2003).

The study was carried out between June 2000 and December 2001. Sixteen jundiás with total lengths between 30.5 cm and 41.5 cm and weights between 240 g and 641 g were tracked. Between September 2000 and April 2001, the tracking was interrupted due to a receiver failure and necessary maintenance; the interruption caused a lack of tracking data for the summer season (Table 1).

The fish were tracked during 24 h cycles, monitoring the position of each fish at two-hour intervals by boat. Two to eight jundiás were tracked simultaneously. Of the sixteen tagged fish, one expelled the transmitter or died after the surgery; data for this fish were not considered. For long-range localization a loop antenna was used, while close-range localization was performed acoustically with only the BNC cable (antenna disconnected). Previous tests with submersed transmitters showed an accuracy of about 1 m. The position of the boat's stem was considered to be the same as the position of the fish. Numbered buoys were used to mark all identified positions during a 24 h cycle. Their coordinates (UTM) were measured with submetrical precision after tracking by a differential GPS (Leica System 300) and transferred to a digital map of the reservoir (AutoCAD Map). During the tracking, dissolved oxygen concentration, temperature, pH, conductivity, turbidity, and water depth were measured at the position of the fish, about 10 cm above the ground, by a multisonde (Hydrolab Multiprobe, U.S.A.).
Additionally, the same parameters were measured at two-hour intervals along a vertical profile with 1 m spacing, from the surface to the ground, at the deepest point of the reservoir.

Daily activity patterns were established by measuring the distances between two subsequent positions, expressed as meters covered in two hours. When a straight connection between positions included terrestrial areas, the straight connection was substituted by the shoreline. Diel activity patterns and seasonal activity patterns were compared using the non-parametric Friedman test for related samples, since the data were not normally distributed (one-sample Kolmogorov-Smirnov tests). To calculate the linear regression between the mean distance covered during two hours and the total length of the test fish, fish with extraordinary activity levels of more than two times the overall activity mean value (= 51.3 m/2 h) were not considered. The mean activity during the night (light intensity < 1 lux) was compared to the mean activity during the day with the Wilcoxon rank test. Horizontal and vertical distributions were compared by the chi²-test. The significance level for all performed tests was P < 0.05.

During winter and spring, phases of high swimming activity occurred, which are called excursion phases throughout this study. These excursion phases were defined as the highest 5% of all ranked activity values; in this particular study, all such values were higher than 127 m/2 h. The areas of intensive use were identified by measuring the distance to the adjacent nearest positions: if the distance was less than five meters, the point was included. The minimum number of points per area was defined as 30 positions. Since the transmitters did not provide information about the vertical position of the fish in the water column, it was assumed that the fish was on the ground. Depth values at fish positions were transformed into corresponding depth classes from 0 m to 5 m in 1 m steps. Pairwise comparisons of the mean temperature, dissolved oxygen concentration, conductivity, pH and turbidity were made between the profile and the fish positions for each variable and depth class by the Mann-Whitney U-test, as the variables were not normally distributed.

RESULTS

During the fourteen 24 h tracking cycles, we recorded 652 positions. The covered distances varied between 0 m/2 h and 326 m/2 h, and the mean distance covered in 2 h was 25.6 m (s.d. 44.9); 56% of all movements were less than 10 m (Fig. 1). Activity levels of individual silver catfish were significantly different and varied between 5.6 m/2 h (s.d. = 5.4; n = 20) and 81.4 m/2 h (s.d. = 55.4, n = 12; chi² = 54.6, d.f. = 14, p < 0.001; Kruskal-Wallis). Swimming activity was linearly related to total fish length (Fig. 2). Significant differences in movement patterns were detected when comparing the mean day activity (22.6 m/2 h) with the mean night activity.

The jundiá were concentrated in three areas of frequent use. Of the 652 positions recorded during the tracking period, 338 were encountered within one of the three areas, which measured 34 m² (A), 152 m² (B) and 175 m² (C). The chi²-test reveals that silver catfish were highly selective for these areas (chi² = 402.9; d.f. = 3; p < 0.001). All of them were characterized by a relatively steep bank providing abundant shelter in the form of rip-rap (boulders), large woody debris or riparian plant cover. Vertically, jundiá were located more frequently at depths of up to two meters than in deeper water (chi² = 20.4; p < 0.0001).
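To make the activity metrics concrete, the sketch below reproduces the two-hour step distances and the excursion-phase rule (top 5% of ranked activity values) on hypothetical UTM fixes; the shoreline substitution for paths crossing land is not implemented.

```python
import numpy as np

# Hypothetical UTM fixes (easting, northing in meters) for one 24-h cycle,
# one fix every two hours; straight-line distances only in this sketch.
fixes = np.array([
    [480012.0, 6703450.0], [480020.0, 6703461.0], [480150.0, 6703520.0],
    [480151.0, 6703522.0], [480149.0, 6703519.0], [480030.0, 6703470.0],
])

# Distance covered in each two-hour interval.
steps = np.linalg.norm(np.diff(fixes, axis=0), axis=1)

# Excursion phases: the top 5% of all ranked activity values.
threshold = np.percentile(steps, 95)
excursions = steps > threshold
print(steps.round(1), threshold.round(1), excursions)
```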
When comparing the temperature, dissolved oxygen concentration, conductivity, pH and turbidity in the profile and at the fish positions, some main differences were observed (Fig. 4). In shallow water (less than 1 m), temperature and oxygen concentrations at the fish positions and in the profile did not differ significantly (Table 2). At greater depths, jundiá chose water layers with warmer temperatures than in the profile. Oxygen concentrations were significantly different at depths below 2 m: the jundiá were found in layers with higher oxygen concentrations than the corresponding depths in the profile. Considering pH at different depths, jundiá constantly chose positions with lower pH values than the profile. The temperature graph of the 5 m depth class shows an extremely wide confidence interval for fish positions, due to the low number of observations in this depth class. The main characteristic of the conductivity and turbidity values is the very small confidence intervals at the fish positions. Table 3 shows overall arithmetic means across all depth classes for temperature, dissolved oxygen concentration, conductivity, pH and turbidity, measured in the vertical profile and at the fish positions. With the exception of the mean conductivity values (p = 0.48), all means differed significantly (p < 0.001; Mann-Whitney test), confirming the trend of higher temperature, oxygen and turbidity and lower pH values at the fish positions.

DISCUSSION

A constant concern in radio tracking studies is the possible interference of the tagging method with behavior. In a previous study, silver catfish were tagged intraperitoneally with dummy transmitters; under controlled conditions in tanks, no differences in growth were detected between a control group and the dummy-tagged group (Schulz, 2003). The field experiments of the present study corroborate those results: only one fish out of a total of 17 died or expelled the transmitter after the surgery and was excluded from the data analysis.

In general, catfish are supposed to be predominantly active at night (Wheeler, 1983), although divergent patterns have been observed between species. Tracking African catfish Clarias gariepinus in Lake Ngezi, Zimbabwe, Hocutt (1989) did not observe clear diel patterns; most radio-tagged individuals seemed to be more active during the day. In aquarium experiments, juveniles of the same species were nocturnally active and took 70% of their daily feeding ration at night; they decreased their total feeding activity when food was offered only during the day (Houssain et al., 1999). In a study on shoaling and activity levels of the congeneric catfish Corydoras ambiacus and C. pygmeus, both species were crepuscular, but C. ambiacus showed more activity in the evening, and C. pygmeus was more active during the day (Paxton, 1997). In Lagoa dos Quadros, a highly turbid coastal lagoon in southern Brazil, juveniles of Loricariichthys anus, a bottom-oriented armored catfish species, were more active during the day than at night; the high swimming activity levels coincided with high feeding activity (Petry & Schulz, 2000). However, most references cite catfish as predominantly nocturnal (Hahn et al., 1997; Casatti & Castro, 1998; Lu & Peters, 2003). A shift from nocturnal to diurnal behavior can be caused by turbidity: in a clear-water lake and in a turbid lake, the European roach (Rutilus rutilus; Cyprinidae) showed higher swimming activity at dawn and dusk and lower activity during the night (Jacobsen et al., 2004).
In summer, activity at noon was low in the lake with low turbidity; the fish stayed in areas covered with macrophytes, forming aggregations, and moved into deeper areas during the night, whereas the roach in the turbid lake were dispersed in the pelagic zone during the day and moved inshore at night. The behavior in the clear-water lake is seen as a mechanism for predator avoidance. Metcalfe et al. (1999) observed activity patterns of juvenile Atlantic salmon (Salmo salar) under controlled laboratory conditions: fish activity depended on food availability, and a higher food density increased nocturnal activity, although food capture efficiency was then lower. Predator avoidance seemed to be mandatory and was supposed to have a more positive impact on fitness than growth. The major predator occurring in the university reservoir is largemouth bass (Micropterus salmoides), which is visually oriented. Up to turbidity levels of 37 NTU, the predation rates of adult black bass were not influenced by impaired vision (Reid et al., 1999). The median turbidity of 15.1 NTU in the university reservoir thus means optimum predation conditions for black bass. The observed median turbidity value of 24.9 NTU at the tracked fish positions indicates that silver catfish prefer higher turbidity; this behavior, together with predominantly nocturnal activity, is seen as a mechanism to reduce predation pressure.

Many species show elevated activity at dawn and dusk. In most cases, the increase in activity is related to movements between nocturnal and diurnal habitats: Hohausova et al. (2003) report diel movements between the main river channel and backwaters in the River Morava; Lilja et al. (2003) observed migration activities in the main channel of a river in Finland; Schulz & Berg (1987) detected movements of bream (Abramis brama) between shallow littoral zones of Lake Constance (Germany) and pelagic areas; and Baade & Fredrich (1998) described habitat shifts of Rutilus rutilus in the Spree River (Germany) between the main channel and backwaters. In the case of jundiá, the elevated activity levels at dawn and dusk were not caused by habitat shifts, but rather by the more frequent occurrence of excursion phases at these hours of the day.

Excursions are a common phenomenon in radio and ultrasonic tracking studies. Schulz & Berg (1992), in a study on brown trout (Salmo trutta f. lacustris) movements in Lake Constance, distinguished random movements, characterized by zig-zag swimming in a restricted area, from excursions, i.e., sporadic displacements of several kilometers in which the tagged fish kept to a certain direction without extensive side movements. Random swimming was interpreted as foraging behavior, and excursions as a mechanism to take advantage of temporarily occurring food resources with a patchy distribution. As the patchiness of food resources increased in autumn and winter, excursions were more frequent during these seasons. The silver catfish tracked in the present study, as well as C. gariepinus (Hocutt, 1989), showed more excursions during autumn and winter at lower water temperatures, probably as a result of the increased patchiness of food resources.
During the present study, larger jundiá moved longer distances. No consistent tendency emerges when comparing this result with other studies. Travnichek (2004) tagged different size classes of flathead catfish Pylodictis olivaris in the Missouri River; the size classes showed different dispersal patterns, and larger fish moved longer distances. However, Cooke & McKinley (1999) did not find a significant relationship between fish length and the distances moved in channel catfish (Ictalurus punctatus). Young (1999) found that larger brown trout traveled longer distances and explained this observation by the ability of larger fish to defend or exploit larger diel areas; additionally, food limitations might have spurred greater traveling during foraging.

The jundiá's highly significant preference for two of the three areas of frequent sojourn was most likely caused by the physical underwater structures. Two areas were characterized by the presence of large woody debris or large boulders; these structures provide cover, where fish usually stay during periods of rest. Casatti & Castro (1998) observed silver catfish associated with large woody debris or undercut banks during the day. Rhamdella minuta, a small crepuscular-nocturnal benthic pimelodid catfish, preferred boulders with plant detritus and bank structures such as exposed roots or submersed vegetation for shelter (Sazima & Pombal, 1986; Sabino & Castro, 1990). The third preferred area was shallow (< 1 m) with overhanging vegetation. Jundiá may have preferred this area because of the shading effect of the vegetation, under which Hoplias aimara occurred frequently in low-water-level situations (Morais & Raffray, 1999), or because of a potential food resource: Abujanra et al. (1999) suggest that insects falling into the water from overhanging vegetation contribute an essential part of the diet of Pimelodus ortmanni.

R. quelen showed a significant preference for depths of up to two meters. In addition to the shelter and food availability at these depths, this preference may be caused by the temperature and oxygen requirements of the species. Due to the loss of tracking data during the summer, mean temperatures, even in the surface layer of the reservoir, remained below 20 °C throughout the investigated period. Although the temperatures in the profile were higher than the critical temperatures for R. quelen (Chippari-Gomes et al., 1999), tracked jundiá tended to avoid temperatures below 18 °C. Up to a depth of 2 m, no significant differences were detected between the oxygen concentrations in the profile and at the fish positions; this may indicate that oxygen concentrations higher than 4 mg/l do not evoke a positive or negative response. However, for depths below 2 m, the differences were significant, and the mean concentrations at the fish positions were consistently higher than the profile values, indicating that under conditions below 4 mg/l the silver catfish chose the highest available oxygen concentration per depth layer. Experiments in aquaculture with channel catfish (Ictalurus punctatus) showed lower mortality and higher production in continuously aerated ponds with oxygen concentrations of 4 mg/l or higher (Abdalla & Romaire, 1996). At mean constant dissolved oxygen concentrations of 3.5 mg/l or less, channel catfish consumed less food and growth was significantly reduced (Carlson et al., 1980).
In a small Kansas lake, artificial aeration prevented thermal stratification and hypoxic conditions at depths greater than 3 m; this increased oxygen concentrations to 4 mg/l, which had positive effects on the harvest of channel catfish (Mosher, 1983). Unfortunately, published information on the oxygen requirements of adult R. quelen was not available. Lopes et al. (2001) found that the ideal pH range for R. quelen is between 8.0 and 8.5. The profile values show that the reservoir is slightly acidic and that the optimum pH range was not available throughout the investigated period. The reason why tracked jundiá constantly chose even lower pH conditions than measured in the profile needs further research. The observation that the confidence intervals of conductivity and turbidity at the fish positions are extremely small is also not well understood; silver catfish seem to avoid alterations in conductivity and turbidity whenever possible.

The depth-dependent measurements of temperature, pH, oxygen and turbidity may be biased to a certain extent by the fact that tagged fish were always assumed to be on the ground. Consequently, when jundiá were moving during tracking, the position fix may have occurred over "deep" water while the test fish was swimming in shallower layers. This may explain the minimum values shown in Table 3.
2017-05-31T01:48:08.051Z
2006-05-01T00:00:00.000
{ "year": 2006, "sha1": "8d5e8f5574c6bcd9374ae2c6c7230c2dee8fa20a", "oa_license": "CCBY", "oa_url": "https://www.scielo.br/j/bjb/a/JMHKJ4Z4KdzdwYrCCSSRhQq/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "8d5e8f5574c6bcd9374ae2c6c7230c2dee8fa20a", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
11395825
pes2o/s2orc
v3-fos-license
Differences in Gene Expression between Mouse and Human for Dynamically Regulated Genes in Early Embryo

Infertility is a worldwide concern that can be treated with in vitro fertilization (IVF). Improvements in IVF and infertility treatment depend largely on better understanding of the molecular mechanisms for human preimplantation development. Several large-scale studies have been conducted to identify gene expression patterns for the first five days of human development, and many functional studies utilize mouse as a model system. We have identified genes of possible importance for this time period by analyzing human microarray data and available data from online databases. We selected 70 candidate genes for human preimplantation development and investigated their expression in the early mouse development from oocyte to the 8-cell stage. Maternally loaded genes expectedly decreased in expression during development both in human and mouse. We discovered that 25 significantly upregulated genes after fertilization in human included 13 genes whose orthologs in mouse behaved differently and mimicked the expression profile of maternally expressed genes. Our findings highlight many significant differences in gene expression patterns during mouse and human preimplantation development. We also describe four cancer-testis antigen families that are also highly expressed in human embryos: PRAME, SSX, GAGE and MAGEA.

Introduction

Infertility is a significant medical problem affecting tens of millions of couples worldwide [1]. In vitro fertilization (IVF) is commonly used to treat infertility, but improvements are still needed as indicated by the low live-birth rate of 32% [2]. The IVF treatment includes culturing of the human embryo up to the whole preimplantation period, covering many crucial steps in the early embryo development: the fusion of the oocyte and sperm pronuclei at the 1-cell stage, maternal transcript degradation, activation of the zygotic genes at the 4- and 8-cell stages, and lineage decisions in the blastocyst stage. It is necessary to understand better the molecular mechanisms of preimplantation development in order to improve infertility treatment.

Global gene expression studies in human have identified thousands of genes expressed in human oocytes and preimplantation embryos [3-11]. Maternally loaded genes are downregulated before the blastocyst stage and include genes essential for oocyte maturation and embryo development, such as HSF1 [12] and NLRP5 [13]. However, up to 45% of genes detected in oocytes have unknown functions [8], highlighting maternally loaded genes as important candidates for functional research. The start of gene transcription in embryo, called the zygotic genome activation (ZGA), takes place in the 4- and 8-cell stages in human [8,11,14] and in the 1- and 2-cell stages in the mouse [15,16]. ZGA includes the transcription of known genes important for pluripotency, embryo development and lineage specification, such as NANOG [17,18].

Mouse is a common model organism used for understanding the function of genes in preimplantation development [19]. Although both similarities and differences between mouse and human global gene expression patterns have been described using genome-wide experimental approaches [10,11,20], differences or similarities of genes for human and mouse early development still need verification.
We aimed to identify genes relevant for human preimplantation development and study the expression of these genes in the mouse. We used two independently published microarray expression datasets for human preimplantation development [8,11] and online databases to define the genes of interest. Expression clusters of upregulated genes such as NANOG, and downregulated genes such as NLRP5, were identified. In addition, we studied genes that are activated in ZGA and thus upregulated in mouse by the 2-cell stage [10,11]. We show that 29 out of 30 downregulated genes share an expression profile between human and mouse, whereas the expression profile differs for 16 upregulated genes out of 25. These results indicate that there are species differences between human and mouse early gene expression that might affect the interpretation of the results obtained in mouse as a model organism.

Microarray analysis

Raw data for human preimplantation embryos on the Affymetrix GeneChip HGU133 Plus 2.0 were obtained from ArrayExpress, accession numbers E-MEXP-2359 [8] and E-GEOD-18290 [11]. Arrays were analyzed as previously described [8]. Briefly, the invariant set normalization method was used, and expression values were extracted from PM values using the Li-Wong method [21]. Arrays were normalized independently, rescaled to the same median intensity, and the Li-Wong method was applied to all the normalized arrays together to get summary expression measurements. Data from the following stages were used in this study: MII, 4-cell, 8-cell and blastocyst from Zhang et al. (2009), and 1-cell, 4-cell, 8-cell and blastocyst from Xie et al. (2010). The analysis of differential expression between the consecutive developmental stages was performed using a Bayesian approach [21,22] as implemented in the Limma package (www.bioconductor.org). The reported differential expression p-values were corrected for multiple testing using the FDR method, and q-values less than or equal to 0.05 were considered significant. No cut-off value was set for fold change. In order to display the results comparably with the qPCR data, expression values called log2(comparative expression) were obtained as follows: log2(comparative expression) = log2[gene] − log2[average controls], where [gene] is the value of a certain probe for the gene and [average controls] is the mean value of the probes for the endogenous controls Hprt1 (202854_at) and Psmb6 (208827_at). Gene names with the corresponding probesets are listed in Table S1.
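The log2(comparative expression) calculation can be sketched as follows on a hypothetical probe-by-sample matrix, using the two control probes named above; the probe names other than the controls are invented for illustration.

```python
import numpy as np
import pandas as pd

# Hypothetical probe-level expression matrix (linear scale), probes x samples.
expr = pd.DataFrame(
    np.random.default_rng(2).uniform(50, 5000, size=(4, 3)),
    index=["NANOG_probe", "NLRP5_probe", "202854_at", "208827_at"],
    columns=["MII", "4cell", "8cell"],
)

# log2(comparative expression) = log2[gene] - log2[average of controls],
# with Hprt1 (202854_at) and Psmb6 (208827_at) as endogenous controls.
controls = expr.loc[["202854_at", "208827_at"]].mean(axis=0)
log2_comparative = np.log2(expr) - np.log2(controls)
print(log2_comparative.round(2))
```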
Gene expression analysis

qPCR was performed using Custom TaqMan Low Density Array Cards. RNA from mouse unfertilized oocytes (MII), 1-cell embryos, 2-cell embryos and 8-cell embryos was extracted using the Arcturus PicoPure RNA isolation kit (Applied Biosystems) according to the manufacturer's instructions, with optional DNase treatment using RNase-Free DNase (p/n 79254, Qiagen). RNA quality and concentration were measured on an Agilent Bioanalyzer using the Agilent RNA 6000 Pico Kit. One oocyte or embryo yielded 128 pg of total RNA on average. Samples of 12 or 5 ng of RNA were converted to cDNA using the High Capacity cDNA Reverse Transcription Kit (Invitrogen) according to the manufacturer's instructions. An additional 5 ng of RNA for replicates of each stage was treated similarly, except that an oligo(dT)20 primer (Invitrogen, 55063) was used instead of the random hexamers provided with the cDNA synthesis kit. cDNA was mixed with TaqMan Universal PCR master mix (p/n 4304437, Applied Biosystems (ABI), Foster City, CA, USA) and RNase-free water. Two loading ports were used per sample and 100 µl was loaded into each of the 8 ports. The array was sealed, centrifuged for 2 min at 1200 rpm and run on a 7900HT qPCR machine (ABI, Singapore) with ABI software SDS v2.4. Standard TLDA array cycling was used. Additional 5 ng samples with random-hexamer cDNA synthesis were pre-amplified. An array-specific custom TaqMan pre-amp pool (Invitrogen) was used for pre-amplification of the cDNA prior to loading onto the cards, according to the manufacturer's instructions. Three biological replicates of all stages were collected for each protocol, except for the 12 ng protocol, where two replicates instead of three were used for both the MII and 1-cell samples.

TaqMan Array Cards analysis

Ct values were analyzed using RQ Manager version 1.2.2 (Applied Biosystems). An automatic threshold was set and subsequently adjusted manually where needed. One assay (Rfpl4b) did not pass our quality criteria and was thus excluded from further analysis. ΔCt values were obtained using DataAssist Software version 3.0 (Applied Biosystems). The endogenous controls Hprt1 and Psmb6 were used for normalization. Nanog and Nlrp5 were used as positive controls for the "Up" and "Down" clusters, respectively. A Ct value of 40.0 was used in the calculations for undetected transcripts. The lowest calculated 2^-ΔCt value among the samples of the same protocol was assigned to all undetected transcripts in that protocol. 2^-ΔCt values for undetected samples were not included in the calculation of average values for plotting, unless all replicates were undetected. Changes in expression between MII vs 1-cell, 1-cell vs 2-cell, and 2-cell vs 8-cell were tested using Student's t-test when at least two replicates were detected in both stages. p-values equal to or less than 0.05 were considered significant (Table S1). The heatmap.2 function from the gplots package in R was used for drawing heatmaps.
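The comparative ΔCt step can be sketched as follows; this assumes an illustrative Ct matrix `ct` (assays x samples, with undetected wells already set to Ct = 40.0 as described above) and a `stage` vector labelling the columns. All names are assumptions, not the software's actual output objects.

```r
# Sketch of the comparative dCt analysis: normalize each assay's Ct to the
# mean Ct of the endogenous controls in the same sample, then convert to
# relative expression as 2^-dCt.
endog <- c("Hprt1", "Psmb6")

dct <- sweep(ct, 2, colMeans(ct[endog, ]), "-")  # dCt per assay and sample
rel <- 2^(-dct)                                  # relative expression, 2^-dCt

# Student's t-test between consecutive stages for one gene (Nlrp5 shown),
# applied only when at least two replicates are detected in both stages
mii   <- rel["Nlrp5", stage == "MII"]
cell1 <- rel["Nlrp5", stage == "1-cell"]
t.test(mii, cell1)$p.value                       # p <= 0.05 taken as significant
```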
Expression analysis from a public sequencing dataset

Normalized RPKM (reads per kilobase per million) values for human and mouse preimplantation stages were obtained from the Gene Expression Omnibus database (GSE44183_human_expression_mat.txt.gz, GSE44183_mouse_expression_mat.txt.gz) [10]. p-values and ratios were calculated for pairwise comparisons between oocytes and 4-cell blastomeres and between oocytes and 8-cell blastomeres in human, after addition of 0.1 to every value. p-values equal to or less than 0.05 were considered significant. Genes that were upregulated more than 5-fold by the 4-cell or 8-cell stage were used for the ortholog search in mouse. Mouse orthologs were obtained from the Biomart database using human gene names as the query. The upregulated human genes and their mouse orthologs, along with expression values, are shown in Table S2. Average values for each stage in human and mouse were calculated across the cells or embryos from the same biological stage. A value of 1 was added to each RPKM value before logarithmic transformation of the data for plotting, resulting in ln(RPKM+1) values.

Ethics Statement

The use of experimental animals and the research protocol in this study were approved by the appropriate Animal Care Board (Jordbruksverket), ethical permits S137-10 and S167-11. The animals were treated in accordance with Swedish law and the regulations of Karolinska Institutet.

Results

Identification of three expression clusters: "Up", "Up-down" and "Down"

Two independent human preimplantation microarray datasets were analyzed in order to define genes with consistent gene expression profiles across embryo stages [8,11]. Only probes with significant changes in both datasets were included for further analysis and classified into three clusters according to their expression pattern: "Up", "Up-down" and "Down". Probes in the "Up" cluster were upregulated between the MII and 4-cell stages (958 probes in Zhang et al. 2009) or the 1-cell and 4-cell stages (11 probes in Xie et al. 2010), or between the 4-cell and 8-cell stages in both studies (454 probes in Zhang and 6112 probes in Xie). 336 probes corresponding to 295 different genes were significantly upregulated in both studies. Probes in the "Up-down" cluster were upregulated by the 4- or 8-cell stage and then downregulated by the 8-cell or blastocyst stage; 176 probes were common to both datasets (472 probes in Zhang, 1243 probes in Xie), corresponding to 156 genes. Probes in the "Down" cluster were downregulated by the 8-cell or blastocyst stage in both studies (8319 probes in Zhang, 7520 probes in Xie), with 2474 common probes corresponding to 2025 genes. A list of genes in all clusters is given in Table S1, and examples of genes in each cluster are shown in Figure 1.
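The RPKM-based comparison described in the sequencing-data methods above can be sketched as follows. This is a minimal sketch, assuming the GEO matrix has been read into `rpkm` (genes x samples) with a `stage` vector labelling the columns; the object names are illustrative.

```r
# Sketch of the RPKM analysis: add the 0.1 offset, compute per-gene ratios
# and Welch t-tests between oocyte and 4-cell samples, and keep >5-fold
# significantly upregulated genes.
rpkm <- read.table("GSE44183_human_expression_mat.txt.gz",
                   header = TRUE, row.names = 1)

shifted <- rpkm + 0.1
ratio   <- rowMeans(shifted[, stage == "4-cell"]) /
           rowMeans(shifted[, stage == "oocyte"])

pvals <- apply(shifted, 1, function(x)
  t.test(x[stage == "oocyte"], x[stage == "4-cell"])$p.value)

up5 <- rownames(rpkm)[ratio > 5 & pvals <= 0.05]  # >5-fold upregulated genes

# ln(RPKM + 1) values used for plotting
lnval <- log(rpkm + 1)
```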
Selection of genes for comparison between mouse and human

We selected genes from each of the clusters "Up", "Up-down" and "Down" for expression profiling of the mouse preimplantation embryo by qPCR. Five criteria were applied in selecting these genes (Table 1). First, expression across tissues was considered using the Amazonia database [23], which combines microarray expression data from various human tissues and embryonic stem cells as well as from three different studies on human oocytes [5,7,24]. Genes with higher expression in oocytes compared to other tissues were preferentially chosen from the "Down" cluster (Figure 1). Second, we were interested in transcription factors that might play a role during early development. We used a combined list of transcription factors compiled from public databases [25]. Association with cancer was the third criterion in gene selection, because many early-development genes, such as NANOG, OCT4, SOX2, DPPA5A and STELLAR, are relevant for cancer [26-29]. Fourth, we performed PubMed searches to find novel genes; a gene was considered novel if no publications were found on its function. The final inclusion criterion was expression in mouse preimplantation embryos. The Mouse Genome Informatics (MGI) database contains cDNA source data for mouse early embryos [30,31]. Mouse orthologs for the selected human genes were identified in the Ensembl database (see the sketch after this section). A gene was included if its ortholog was found in any of the following samples in MGI: oocyte, unfertilized oocyte, fertilized oocyte, 2-cell embryo, 4-cell embryo, 8-cell embryo, 16-cell embryo, morula or blastocyst.

All information was curated manually and 55 genes with orthologs in mouse were selected for gene expression profiling. The selection included 11 genes in the "Up", 14 in the "Up-down" and 30 in the "Down" cluster. Human microarray data from Zhang et al. (2009) were used for unsupervised clustering and plotting a heatmap (Figure 2A). In addition, members of the SSX, PRAMEF and NLRP gene families were selected for profiling in mouse. All the selected genes with their respective inclusion criteria are shown in Table 1. A list of the genes, the corresponding microarray probesets, mouse orthologs, mouse TaqMan assay names and expression values is given in Table S1.
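One plausible way to perform the Ensembl/Biomart ortholog lookup described above is via the biomaRt package; the attribute names and example gene symbols below are assumptions for illustration, not the exact queries used in the study.

```r
# Sketch of a human-to-mouse ortholog lookup through Ensembl BioMart,
# linking the human and mouse datasets and mapping gene symbols.
library(biomaRt)

human <- useMart("ensembl", dataset = "hsapiens_gene_ensembl")
mouse <- useMart("ensembl", dataset = "mmusculus_gene_ensembl")

orthologs <- getLDS(attributes  = "hgnc_symbol",
                    filters     = "hgnc_symbol",
                    values      = c("NANOG", "NLRP5", "PHF20"),  # example genes
                    mart        = human,
                    attributesL = "mgi_symbol",
                    martL       = mouse)
```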
Investigating gene expression during mouse early development

We studied the expression patterns of the selected genes in the mouse. Custom TaqMan Low Density Array Cards (TLDA) were used to detect expression at the following mouse preimplantation stages: MII oocytes, 1-cell, 2-cell and 8-cell embryos. Five ng of RNA per sample was used in the first experiment, in three biological replicates, using a custom TaqMan pre-amp pool for pre-amplification of the cDNA. However, many assays did not pass our quality control criteria (Figure S1A). The experiment was then repeated with 12 ng of RNA per sample and no pre-amplification step. The quality of the amplification curves was improved compared to the pre-amplified samples (Figure S1B). 69 assays were analyzed in total, 4 were used as controls, and 1 assay was rejected for technical reasons. The upregulation control Nanog was detected only at the 8-cell stage, as expected, and the downregulation control Nlrp5 decreased significantly from the MII to the 8-cell stage. Psmb6 and Hprt1 were used as endogenous controls for normalization. Two (MII and 1-cell) or three (2-cell and 8-cell) biological replicates were used per developmental stage. Expression values were obtained using the comparative ΔCt method. A heatmap of the selected genes is shown in Figure 2B. The mouse orthologs clustered remarkably differently from the selected human genes (Figure 2). The human genes clustered into three groups based on the previous analysis, but the mouse genes clustered into one downregulated group containing most genes and a smaller group of genes that were significantly upregulated between the 1- and 2-cell stages. Twenty-nine out of 30 orthologs of the genes in the "Down" cluster were downregulated in the course of preimplantation development (p < 0.05), similar to human, but only nine out of 25 genes in the "Up" or "Up-down" clusters were upregulated by the 2-cell stage in mouse. The human genes and mouse orthologs for the "Up" and "Up-down" clusters are shown in Figure 3. Four and five orthologs in the human clusters "Up" and "Up-down", respectively, were upregulated by the mouse 2-cell stage, which corresponds to the human ZGA at the 4- to 8-cell stages (Figure 3B, E). However, seven and nine orthologs in the respective clusters were not upregulated by the 2-cell stage, but were only downregulated by the 8-cell stage, except for Magea2 and 2410004A20Rik. Overall, more than half of the mouse orthologs of genes in the "Up" and "Up-down" clusters shared the maternal gene expression profile, being present already in the oocyte and downregulated later.

The methods for the human microarray experiments included poly(T) priming for cDNA synthesis [8,11], whereas random hexamers were used for cDNA synthesis in the current study. To exclude a possible bias caused by differences in cDNA priming, the experiment was repeated with 5 ng of RNA at each stage in triplicate using poly(T) primers. The data were overall consistent, with Pearson correlation coefficients between 0.883 and 0.967 for comparisons of average 2^-ΔCt values between the same stages (Figure S2). The p-values for the genes Cpsf6, Ddx39 and Hipk3 in the "Up" and "Up-down" clusters were not significant between the 1-cell and 2-cell stages, although the trend for upregulation persisted (Figure S3). All other differentially regulated genes still had the maternal expression pattern in mouse, supporting the concept of differentially regulated genes.
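The unsupervised clustering and heatmap step can be sketched as follows, assuming `mouse_expr` is an illustrative matrix of average 2^-ΔCt values (genes x stages); the name and the colour choices are assumptions, not the figure's exact settings.

```r
# Sketch of the heatmap.2 clustering of mouse expression profiles: genes are
# clustered, while the developmental stages are kept in biological order.
library(gplots)

heatmap.2(as.matrix(log2(mouse_expr)),  # log2 of 2^-dCt gives -dCt values
          Colv       = FALSE,           # keep stages in developmental order
          dendrogram = "row",           # cluster genes only
          scale      = "row",
          trace      = "none",
          col        = bluered(50))
```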
Another difference between human and mouse embryos lies in the culture conditions of the embryos. Culture medium has been shown to influence gene expression in mouse early embryos [32]. In order to further confirm our conclusions, a comparison was made using a recently published RNA sequencing dataset on human and mouse oocytes, blastomeres and preimplantation embryos [10]. The human embryos in the Xue et al. (2013) study were frozen, thawed, fertilized and cultured using different protocols compared to Zhang et al. (2009), and the mouse eggs and embryos were obtained differently from the current study. The RPKM values of the Xue dataset were analyzed as described in Materials and Methods. The selected genes in the "Up", "Up-down" and "Down" categories were extracted from both the mouse and human sequencing datasets. A comparison of human and mouse genes between the different methods is shown in Figure S4. The lowest correlation was observed for the 4-cell stage in human (R = 0.366) and for the 2-cell stage in mouse (R = 0.543). This might result from the rapidly changing global gene expression patterns at these stages, so that exact timing of embryo collection would be required for better correlation between studies. The SNRPA1, SFPQ and ZNF639 genes in the "Up" cluster were not significantly upregulated by the 4- or 8-cell stage in this dataset, although the trend remained (Figure S5A). Surprisingly, only C21ORF91, HIPK3, ZIK1 and KLF17 belonged to the "Up-down" cluster in both datasets, while TRIM43, KLHL11, ZSCAN5A, PNRC1, PHF20, KHDC1, CXORF40B and CCSAP showed no significant expression changes in the sequencing dataset (Figure S5D). A further look at the data showed that, although the changes in expression were not statistically significant, all of the genes in the "Up-down" cluster still shared the same trends of upregulation by the 4- or 8-cell stage and downregulation by the morula stage. All genes that were similarly upregulated in the mouse TaqMan array dataset were also upregulated between the 1-cell pronuclear and 2-cell stages (Figure S5B, E). Differences occurred in genes that had maternal expression in the TaqMan array but were upregulated in the mouse sequencing dataset: Snrpa1, 2500003M10Rik, Sfrs7 and Zfp639 in the "Up" cluster and Khdc1b, Zfyve and Pnrc1 in the "Up-down" cluster. To expand on the described differences, we decided to analyze more of the highly upregulated genes in human and their orthologs in mouse in the sequencing dataset. Only genes with more than 5-fold overexpression by the 4- or 8-cell stage in human compared to the oocytes (p < 0.05) were used, resulting in 412 and 1010 upregulated genes, respectively. The orthologs in mouse were identified using the Biomart database, resulting in 324 and 857 genes, respectively. Heatmaps for the genes upregulated at the human 4-cell and 8-cell stages are shown in Figures S6A and S7A. Both gene sets containing mouse orthologs clustered into two groups: upregulated and not upregulated (Figures S6B and S7B). This expanded analysis suggested that even more differences in early upregulated genes between human and mouse exist.

Genes belonging to developmentally interesting gene families were analyzed separately. The NLRP family members were mostly downregulated in both the human array and sequencing datasets, with the exception of NLRP7, which was upregulated after the 8-cell stage in both datasets (Figure 4A, D). The NLRP family members in mouse were also downregulated over time, with the exception of Nlrp12, which was expressed at low levels overall (Figure 4B, E).
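The cross-platform correlations reported above can be sketched as follows, assuming matched per-gene vectors for the same biological stage: `arr` (log2 comparative expression or 2^-ΔCt values from the array platform) and `seq` (ln(RPKM+1) values from the sequencing dataset). Both names are illustrative.

```r
# Sketch of the stage-matched cross-dataset comparison: Pearson correlation
# and a scatter plot for one biological stage (e.g. the human 4-cell stage,
# where R = 0.366 was observed).
cor(arr, seq, method = "pearson")
plot(arr, seq,
     xlab = "array platform (log2 comparative expression)",
     ylab = "sequencing (ln(RPKM+1))")
```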
All the available PRAME and many SSX, MAGEA and GAGE family members in the human microarray belonged to the "Up-down" cluster and were highly upregulated between the MII and 4-cell or 8-cell stages (Figure 5A). A similar "Up-down" expression pattern was observed for these gene families in the human sequencing dataset (Figure 5B), where the MAGEA and PRAME family genes clustered separately into "Up-down". Most MAGEA and SSX family genes shared a common "Up-down" cluster in the microarray dataset, but a separate clustering was seen for the GAGE (microarray) or GAGE and SSX (sequencing) family genes, which were upregulated also in the later stages (blastocyst and morula). The two datasets included many, but not necessarily exactly the same, members of both families. However, genes in all the selected families had dynamic expression profiles in the preimplantation human embryo.

The PRAME and SSX family genes were assessed in the mouse. Unfortunately, all assays for SSX family genes failed to detect product, and there was no annotation of SSX family genes in the sequencing dataset. Three out of four PRAME family members in the mouse were upregulated by the 2-cell stage, while Pramef12 had a maternal expression pattern (Figure 4C, F). The mouse Pramel6 and Pramel7 genes were the most highly upregulated in both the TaqMan array and sequencing datasets (Figure 4C, F).

Similar gene expression profiles between mouse and human

We identified selected genes relevant for human preimplantation development and studied orthologous gene expression in the mouse. We used two independent microarray datasets to identify differentially regulated genes in human preimplantation development. Five criteria were applied and 69 selected genes were successfully assayed for expression profiling in the mouse. Many of these genes had similar expression patterns in mouse and human over the course of preimplantation development; we found no changes in gene expression between the MII oocyte and the 1-cell stage in mouse, nor between the MII oocyte and the zygote in human.

Most genes in the NLR family, pyrin domain containing (NLRP family), were downregulated in both human and mouse. Most NLRP family genes, including NLRP5 (Mater), are maternally loaded in human oocytes and downregulated by the blastocyst stage [33]. Our results show that most mouse genes in the NLRP family were downregulated similarly to human (Figure 4B, E). We saw no differences in expression between mouse and human for the 29 other genes in the "Down" cluster (Figure 2A, B). This similarity of expression between human and mouse genes in the "Down" cluster is further supported by a comparative microarray study that showed consistent expression patterns between human and mouse for almost 70% of maternally deposited transcripts, whereas only 40% of transcripts upregulated at ZGA displayed a similar expression pattern [11].
Differentially regulated genes between human and mouse

We studied 25 genes that were upregulated in human at the ZGA, at the 4- and 8-cell stages. Nine of these were similarly upregulated in mouse, but sixteen were not. We found 7 and 9 genes from the "Up" and "Up-down" clusters, respectively, that were not upregulated at the mouse ZGA stage in 2-cell embryos. All these genes, except for MAGEA2, were downregulated between the 2- and 8-cell stages, showing a maternally loaded expression pattern. They include the transcription factors SFPQ, ZNF639 and PHF20 (Figure 2, Table 1). This difference did not depend on polyadenylation (Figure S3), nor on cell culture and analysis methods (Figures S6 and S7).

The three transcription factors SFPQ, ZNF639 (also known as ZASC1) and PHF20 have been associated with cancer [34-41]. SFPQ is an essential pre-mRNA splicing factor required early in spliceosome formation [42]. Two other splicing factors in our study, SNRNP70 and SNRPA1, were upregulated in human at ZGA, but were maternal and downregulated in mouse. Zygotic transcription in mouse starts one day earlier than in human, perhaps suggesting an earlier requirement for the splicing factors in the mouse.

A microarray study by He et al. (2010) suggested global differences between mouse and human early gene expression, while a sequencing study by Xue et al. (2013) proposed similar expression. However, Xue et al. used stage-specific modules as the basis of their analysis, thus looking at gene expression values at different time points, as opposed to expression changes between stages. He et al. analyzed gene expression changes between stages and compared gene ontology categories. Neither of the studies compared the expression profiles of differentially expressed genes between mouse and human. Our approach permitted the detection of specific gene clusters with differential expression profiles between mouse and human that have not been described before. In addition, we verified the observed patterns in the dataset from Xue et al. (Figures S6 and S7).

Differences in gene expression between human and mouse preimplantation development might in part account for the timing differences between these organisms. Mouse preimplantation development is faster than human, requiring 84-96 h to reach the blastocyst stage, while human development takes 24-30 h longer [43,44]. Furthermore, ZGA starts at the 1- to 2-cell stage in mouse and at the 4- to 8-cell stage in human. This might be due to the presence of the necessary transcripts already at the oocyte stage in mouse, while the corresponding genes are not yet expressed in human. Three such maternal genes in mouse described in this study are involved in splicing, which might contribute to the difference in developmental timing. In addition, the developmentally important lineage-specific marker proteins are detected at different stages in human and mouse embryos [45].
Cancer-testis antigen expression in human and mouse preimplantation development

Cancer/testis (CT) antigens are a category of tumor antigens with mostly unknown functions that are expressed in various types of cancer but whose expression is otherwise restricted to male germ cells in the testis [46,47]. We investigated four CT antigen families with dynamic expression profiles in human: Preferentially Expressed Antigen in Melanoma (PRAME), Synovial Sarcoma X breakpoint (SSX), Melanoma antigen family A (MAGEA) and G antigen (GAGE). PRAME is a CT antigen with unknown biological function [48]. Many human PRAME family genes are clustered in the genome [49], and the PRAME family genes on the microarray (PRAMEF1/2, PRAMEF10, PRAMEF11, PRAMEF12) belonged to the "Up-down" gene cluster. Four genes of this family were investigated in the mouse: Gm13102, Pramef12, Pramel6 and Pramel7. Gm13102 is situated next to two other PRAME family genes in mouse, called Oog2 and Oog3. In the course of this study, four members of the PRAME family, called Oog1 to Oog4, were shown to be expressed in early mouse embryos or oocytes [50,51]. We found that three members of the family, Gm13102, Pramel6 and Pramel7, were upregulated by the mouse 2-cell stage and thus had an expression pattern similar to their human counterparts. The remaining gene, Pramef12, was not upregulated, but was already present in mouse oocytes. The PRAME gene family has been predicted to have a role in spermatogenesis on the basis of its expression levels and positive selection in mammals [52]. Our analysis of the human microarray and mouse qPCR data in early embryos showed that the PRAME family genes were highly upregulated in early embryos, suggesting a role for this family in preimplantation development.

SSX genes are known to be expressed in normal testis and in different types of cancer [53]. Our data show that several members of the SSX family had "Up-down" expression profiles in human preimplantation development (Figure 5). In contrast, the GAGE family genes persisted longer in the preimplantation embryo than the other CT antigens, until the blastocyst stage (Figure 5). Consistently, GAGE and MAGE family members have been found to be highly expressed in the trophectoderm of the mouse preimplantation embryo [54]. Both GAGE and MAGEA family members were detected in the postimplantation human embryo, suggesting an important role for CT antigens in cell differentiation processes [55]. MAGEA family proteins were also detected in placentas, whereas GAGE family members were not [56]. We conclude that there is strong evidence for an important role of CT antigens in both the pre- and postimplantation embryo.
Conclusion

We selected 70 differentially regulated genes of possible importance in human preimplantation development and investigated their expression in mouse oocytes and 1-cell, 2-cell and 8-cell embryos. We found few differences between human and mouse in the maternally expressed, downregulated genes. In contrast, we found a set of genes that were upregulated after zygotic genome activation in human but not in mouse. Sixteen of the 25 genes in the human "Up-down" and "Up" clusters showed this difference in expression. Fifteen mouse orthologs shared the expression profile of maternally expressed genes and were downregulated in the course of preimplantation development, although their human counterparts were upregulated. This difference in gene expression between human and mouse early embryos might account for part of the difference in preimplantation timing between the two species, or for the differences in splicing. In addition, we described high expression levels for four cancer-testis antigen families at ZGA and in later stages of human preimplantation development. We suggest that the CT antigens have a function in the early embryo. Our findings show significant differences in expression between mouse and human, limiting generalizations from mouse to human preimplantation development. Knowledge about the limitations of model systems is crucial when investigating a complex process such as human preimplantation development.

Figure and Supporting Information Legends

Figure 1. Examples of genes from the three expression profile clusters: "Up" (ZNF622), "Up-down" (PHF20) and "Down" (ZSWIM3), showing average expression from the Zhang et al. (2009) and Xie et al. (2010) microarray sets and tissue expression from the Amazonia database (http://amazonia.transcriptome.eu/); selected "Down" cluster genes display high expression in oocytes and low expression in other human tissues.

Figure 2. Unsupervised clustering of the selected human genes (clusters "Up", "Up-down" and "Down") and their mouse orthologs, which formed one large, mostly downregulated cluster and a small upregulated (or up- and downregulated) cluster; asterisks indicate mouse orthologs of human "Up" and "Up-down" genes significantly upregulated between the 1-cell and 2-cell stages. Undetected samples were attributed a 2^-ΔCt value of 2^-14.8.

Figure 3. Human "Up" and "Up-down" cluster genes and their mouse orthologs, grouped by whether the mouse profile is similar or different to human.

Figure 4. Expression profiles of NLRP and PRAME family genes in human and mouse (microarray, TaqMan array and the Xue et al. 2013 sequencing data). NLRP7 is upregulated after the human 8-cell stage in both datasets, in contrast to the overall trend in the family; mouse Nlrp12 is expressed at low levels in both mouse datasets; mouse Pramef12 is maternally loaded, in contrast to the rest of the family. Undetected samples were attributed a 2^-ΔCt value of 2^-14.2.

Figure 5. Expression profiles of the PRAME, SSX, MAGEA and GAGE family genes in human preimplantation development.

Figure S3. Expression of the selected "Up" and "Up-down" cluster genes and their mouse orthologs when oligo(dT) primers were used for cDNA synthesis, plotted as in Figure 3; genes marked with an asterisk did not reach the same statistical significance as with random hexamer priming, but the trends for up- and downregulation remained unchanged.

Figure S4. Correlation plots between human microarray and sequencing data, and between mouse TaqMan array and sequencing data, for matching biological stages.

Figure S5. Gene expression profiles of the "Up" and "Up-down" cluster genes in human and their mouse orthologs in the Xue et al. sequencing data (ln(RPKM+1) values), plotted as in Figure 3; asterisks mark human genes not significantly upregulated by the 4- or 8-cell stage in the sequencing dataset, and mouse genes significantly upregulated in the sequencing dataset but not on the TaqMan array.

Figure S6. Expression profiles of genes upregulated at least 5-fold (p <= 0.05) by the human 4-cell stage and their mouse orthologs, which clustered into two large "Up" and "Down" expression clusters.

Figure S7. Expression profiles of genes upregulated at least 5-fold (p <= 0.05) by the human 8-cell stage and their mouse orthologs, which clustered into one large "Up" cluster and a smaller cluster of mostly downregulated genes.

Table S1. All selected human genes with corresponding microarray probesets, significant fold-changes and cluster assignments from Zhang et al. (2009) and Xie et al. (2010); TaqMan assay names and ΔCt values for all replicates and stages of the 12 ng random-hexamer protocol; and human genes with their mouse homologs, average expression values per stage, and the similarities and differences between the expression profiles.

Table S2. Upregulated human genes (significantly more than 5-fold overexpressed at the 4-cell or 8-cell stage relative to oocytes, p <= 0.05; RPKM+0.1 values) and their mouse orthologs (RPKM+1 values) from Xue et al. (2013).
Independent Candidacy and Electoral Reform: New Nation Movement NPC v President of the Republic of South Africa

In the New Nation case, the Constitutional Court declared the provisions of the Electoral Act that prevent independent candidates from competing in provincial and national elections unconstitutional. It ruled that the impugned provisions violated independent candidates' constitutional rights to stand for public office, to freedom of association and to dignity. In a minority judgment, Froneman J disagreed and held that the Constitution contemplates a right to contest elections as a party-nominee only. The differences between the majority and minority judgments are largely the result of distinct interpretive approaches. The majority conducted an analysis of the right to stand for public office within a restricted textual framework that has the potential to disturb the harmonious inter-relationship between the right and the electoral and parliamentary framework for its realisation. This result flows from the fact that the Constitution still reflects the exclusively party-based electoral and parliamentary systems of its predecessor in several important respects. At best, this situation may result in independents being largely at the mercy of political parties for meaningful execution of their legislative and oversight obligations. At worst, they may be excluded from exercising core parliamentary functions altogether. Therefore, to avoid disturbing the normative coherence between the right to stand for public office, the foundational democratic values, and the electoral and parliamentary arrangements, constitutional amendments appear to be necessary for the implementation of the court's order. In any event, expectations about the contribution to electoral reform of allowing independents to contest elections must be tempered by the low political impact of independent representatives on governance, as well as the ambivalence surrounding the democratic functionality of independent candidacy, when measured against the values of transparency and accountability.

Introduction

The Constitution of the Republic of South Africa 200 of 1993 (the Interim Constitution) introduced the current exclusively party-based proportional representation system for the election of members of the National Assembly and the provincial legislatures. In terms of the transitional provisions of the Constitution of the Republic of South Africa, 1996 (the Constitution), 1 this system was to remain in place until the 1999 national election, after which an electoral system had to be introduced through the enactment of national legislation. 2 However, no constitutional amendments pertaining to the electoral system followed, and the Electoral Laws Amendment Act 34 of 2003 simply retained the pre-existing system of closed-list proportional party representation for the National Assembly and the provincial legislatures. 3 The Constitution also retained the initial parliamentary system, which in important respects made provision for party representation only. In particular, this concerns important aspects of the internal functioning of the legislatures, such as representation in committees, the participation of parties in certain decision-making processes, the role of opposition parties, the loss of membership of the legislatures and the funding of parties. 4 Under the closed-list system, parties compile lists with the names of candidates nominated and ranked in order of preference by them.
Voters have no say in the ranking of candidates. 5 Although many have commended the closed-list system for its fairness, inclusiveness and simplicity, 6 calls for electoral reform have come from various quarters. As one might expect, much of the criticism has centred on the system's perceived public accountability deficiency, brought about by the absence of a constituency-based direct relationship between representatives and the electorate. 7 As early as 26 March 1999, former president Nelson Mandela, in his farewell speech in the National Assembly, queried whether the electoral system should be revisited to improve the nature of the relationship between public representatives and voters. 8

The electoral task team established by Cabinet in 2002 to draft the new electoral legislation required by the Constitution for the national and provincial elections after 1999 shared the former president's sentiments. The task team submitted a majority and a minority report on the reform of the electoral system. 9 The minority favoured the retention of the existing system. The majority recommended a mixed system in which a percentage of representatives are elected in multi-member constituencies and the rest in terms of compensatory closed party lists to achieve overall proportionality. 10 They reasoned that the fact that candidates would have to campaign in their constituencies would result in "face to face representation" and a much closer link with the electorate than is the case under the present system. 11

No follow-up occurred after this report, except that, in 2009, the Report of the Independent Panel Assessment of Parliament 12 identified the electoral system as one of several serious structural weaknesses in the functioning of Parliament. The Panel mentioned the absence of a constituency-based electoral system and the top-down effect of the party-list system as a major impediment to Parliament's ability to exercise its oversight mandate properly and to members of Parliament being accountable to voters. 13 It proposed that the current electoral system should be replaced by a mixed system to capture the benefits of both the constituency-based and proportional representation systems. 14

Thereafter, in December 2015, the Speakers' Forum, as the representative body of the South African legislative sector, established an independent high-level panel of eminent South Africans to review legislation for its effect on the government's transformational and developmental agenda. In its report, 15 the Panel once again revisited the influence of the electoral system on effective parliamentary oversight of the executive and the public accountability of individual representatives. It reaffirmed the findings and recommendations of the foregoing reports regarding the accountability weaknesses of the closed-list proportional representation system and recommended that Parliament should amend the Electoral Act to provide for an electoral system that makes members of Parliament accountable to defined constituencies under a combined proportional representation and constituency system for national elections. 16 No direct legislative changes followed from any of these initiatives. However, the recent judgment of the Constitutional Court in New Nation Movement NPC v President of the Republic of South Africa 17 (the New Nation case) has now firmly forced the issue of electoral reform onto the legislative agenda for the immediate future.
The court declared the provisions of the Electoral Act that prevent independent candidates from standing for election to the national and provincial legislatures unconstitutional. 18 It ordered Parliament to remedy the constitutional defect within two years. 19 Despite the fact that the reach of the judgment does not go beyond the legalisation of independent candidacy and does not directly address the foundational features of the electoral system, 20 some have hailed it as a landmark in fundamental reform. 21 However, so far there has been no indication that the government intends to use the court order to effect systemic electoral reform. 22 Nevertheless, the extent to which the legalisation of independent candidacy on its own can satisfy expectations for electoral reform still needs to be considered.

In what follows, I will first evaluate the majority and minority judgments. For reasons that will become clear, I consider the result reached in the minority judgment to be the correct one in terms of the law as it stood at the time. Nevertheless, since the reality is that the Constitutional Court has now declared the prohibition of independent candidacy unconstitutional, it is also necessary to reflect on the reformative implications of allowing independent candidates to participate in national and provincial elections. This assessment will be done with reference to foundational electoral principles, in particular inclusivity, transparency and accountability. The discussion is limited to this broad normative question and does not address the considerable practical logistical challenges associated with the implementation of the New Nation judgment. 23

Issues and main submissions

After losing in the Cape High Court, 24 the appellants challenged the constitutionality of the provisions of the Electoral Act, 25 which allow only candidates nominated by political parties to stand for election to the National Assembly and provincial legislatures. The appellants' main contention was that the Electoral Act is unconstitutional for unjustifiably limiting the right conferred by section 19(3)(b) of the Constitution to stand for public office and, if elected, to hold office. In addition, some applicants submitted that the Electoral Act infringes their right to freedom of association.

On their part, the respondents contended that section 19 of the Constitution does not explicitly include or exclude the right to stand as an independent. However, such an exclusion is to be found in other constitutional provisions, in relation to which section 19 should be interpreted. An exclusive political party proportional representation system, so the argument went, is indicated by several provisions connected to the electoral system, as well as the composition and operational functioning of the legislatures. In sum, they contended that the way the Constitution has institutionalised political parties in these contexts makes it clear that only members of political parties are allowed to stand for election.

Textual analysis of section 19

Given the issue before it, the court first had to interpret the scope of the right to stand for public office, in particular the question of who qualifies as a beneficiary of this right. As will appear in what follows, there is frequent resort to "natural" or "plain" (that is, purportedly interpretation-free) word meanings in the court's referencing of the relevant constitutional provisions. The court's preferred literalist approach largely obviated the need for wider contextualisation with reference to underlying democratic values and the operative electoral and parliamentary systems within which the right to stand for public office is constitutionally constructed. In so far as the majority judgments resorted to intra-textual contextualisation, this contextualisation was limited mainly to the Bill of Rights itself, in particular the inter-relationship between the right to stand for public office and the right to freedom of association. 26 This resulted in a contextualisation exercise that did not afford due weight to foundational democratic values and the way that the right has been embedded in the electoral and parliamentary systems. Moreover, this tendency was aggravated by the court's reluctance to make value judgments regarding the electoral systems that best serve democratic values. 27 It is one thing to say that a court should not be prescriptive about which electoral system best serves core democratic values (such as accountability, inclusiveness, fairness, transparency and responsiveness), but quite another to treat these values as if they have only a marginal bearing on a dispute about electoral norms. 28 These values imply at least minimum legally binding thresholds for the constitutionality of electoral provisions, including the provisions that impact on independent candidacy. 29 Also, the principle of interpreting the Constitution, not just the Bill of Rights, 30 as comprising a coherent normative unity requires the right to stand for public office to be interpreted within a harmonious inter-relationship with the associated electoral and parliamentary framework of the Constitution.

21 … political-leaders-react-20200611. For a more sceptical view, see Griffiths 2020 https://www.dailymaverick.co.za/opinionista/2020-06-24-changes-to-electoral-act-will-not-fundamentally-alter-south-africas-political-landscape/; Fakir 2020 https://www.africanews24-7.co.za/in-political-rehabilitation dex.php/southafrica forever/a-referendum-and-thorough-going-system-reform-is-the-way-to/.
22 Grootes 2021 https://www.dailymaverick.co.za/article/2021-09-16-our-political-system-has-failed-the-election-structure-and-the-players-within-it-may-have-to-change/; Paton 2021 https://www.businesslive.co.za/bd/national/2021-09-15-anc-leadership-opts-for-minimal-changes-to-electoral-law/. The government's minimalist response to the New Nation case is also clear from the Electoral Amendment Bill [B1-2022] (explanatory summary published in GN 1660 in GG 45716 of 31 December 2021). The provisions of the Bill are limited to the logistical and administrative changes required of the Electoral Act to accommodate independent candidates. The Bill has several problematic features, which, however, fall outside the scope of this article.
23 For a discussion of the need for broader electoral reform to address the practical logistical obstacles in implementing the judgment, see Wolf 2021 SALJ 77-87.
24 New Nation Movement NPC v President of the Republic of South Africa 2019 5 SA 533 (WCC).
25 Section 57A read with sch 1A of the Electoral Act.
26 New Nation case paras 14, 20-59.
27 New Nation case para 15: "Before I proceed to deal with the interpretative exercise, let me mention that a lot was said about which electoral system is better, which system better affords the electorate accountability, etc. That is territory this judgment will not venture into. The pros and cons of this or the other system are best left to Parliament which … has the mandate to prescribe an electoral system. This Court's concern is whether the chosen system is compliant with the Constitution."
According to the court, the "plain meaning" of section 19 of the Constitution suggests that its central theme is individual freedom of choice; namely, the individual right to make political choices, such as to form or join or not to form or join political parties, to take part in their activities or not, to stand for public office or not, et cetera. 31 Once adult citizens are compelled to exercise the right to stand for public office through a political party, they are divested of the very freedom of choice not to form or join a political party. This, in the court's view, is exactly what the denial of the right to stand as an independent candidate entails; in order to stand for election, they are forced to join or form a political party. That the right to stand for public office includes the right to stand as an independent is therefore a necessary implication of the right so understood. 32

The court felt strengthened in this conclusion by its analysis of the right to freedom of association. 33 In this instance too, in the court's view, the core content of the right is one of individual choice: the right to choose to associate or to disassociate. It considered the constitutional purpose of freedom of association and its treatment in international 34 and foreign law, and concluded that section 18 of the Constitution protects not only the positive right to associate but also the "negative right" not to be compelled to associate. 35 The court thus held that the right to freedom of association is limited when the state compels individuals to associate with a political party against their will, whether by joining or forming a party.

The court's portrayal of the precise nature of the evil of being forced to form or join a party is completely in line with its libertarian understanding of the nature of these rights. It rejected the respondents' view that requiring candidates to stand for public office only as political party nominees does not deny the right, but simply prescribes a particular avenue for its exercise. If a prospective candidate does not find any existing party acceptable, they are free to form their own party, which is a relatively undemanding option. 36 The court argued that, although for some there may be advantages in being a member of a political party, political party membership undeniably also comes with impediments that may be unacceptable to others. It may be too trammelling for those who are averse to control. It may be overly restrictive for the free-spirited. It may be censoring for those who are loath to be straitjacketed by predetermined party positions. In a sense, it just may, at times, detract from the element of self; the idea of a free self; one's idea of freedom. 37

34 The court considered several judgments of the European Court of Human Rights (ECHR), which confirm that in principle freedom of association includes the right not to associate. However, none of these cases dealt with the prohibition of independent candidates specifically. As I will mention later, many member states of the European Union explicitly forbid independent candidates from taking part in elections. Moreover, in Oran v Turkey (ECHR) Applications nos 28881/07 and 37920/07 (2014) the court upheld a ban on independent candidates standing for election in Turkey. In addition, in Castañeda Gutman v México (IACHR) Series C no 184 (6 August 2008) (the Gutman case) the court found no violation of the American Convention on Human Rights in México's prohibition of independent candidates taking part in elections. The first majority judgment in the New Nation case did refer to Tanganyika …
Based on this understanding of the right to stand for political office, the court then proceeded to assess the arguments of the respondents that other constitutional provisions indicate an exclusive party-based system.

Section 1(d): the founding value of a multi-party system of democratic government

The Republic of South Africa is one, sovereign, democratic state founded inter alia on the value of a multi-party system of democratic government, to ensure accountability, responsiveness and openness. The respondents contended, also with an appeal to the "plain meaning" of the words, that this founding provision entails an exclusively party-based proportional representation system. The court disagreed, arguing that all this provision does is stipulate that the Republic must never be a one-party state, 38 which does not exclude the participation of independent candidates in elections. 39 Clearly, there is no single unambiguous word meaning at play here which could dictate either of the interpretations with absolute certainty. Interestingly, in the Gutman case the Inter-American Court of Human Rights (the IACHR) found historical and political justification for the ban on independent candidacy in México's national elections by specifically emphasising the necessity to create and strengthen a system of multi-party democracy. Such a system did not exist for a considerable period of the history of that country under the dominance of a hegemonic official state party regime. 40 The court also pointed out that allowing independent candidates to stand for election could actually impede the development of a viable multi-party system because of the large-scale fragmentation of popular representation it would bring about. 41 Given our own history, it seems therefore that applying a similar historical context to the interpretation of section 1(d) of the Constitution would not have been out of place either.

Sections 46(1)(a) and 105(1)(a): the electoral system to be prescribed by legislation

These sections respectively provide that the National Assembly and provincial legislatures consist of women and men elected in terms of an electoral system prescribed by national legislation. The respondents argued that these provisions empower Parliament to prescribe the electoral system, which is what it did with the Electoral Act. 42 The court rejected their contention by pointing out the obvious: that this mandate does not imply that the legislation prescribing the electoral system is free of constitutional constraints, particularly section 19. 43

Sections 47(3)(c) and 106(3)(c): loss of membership of the legislatures

The two sections respectively provide that a person loses membership of the National Assembly or a provincial legislature if that person ceases to be a member of the party that nominated her or him for membership. According to the respondents, this supports the view that membership of the legislative institutions is exclusively party based. 44

38 The court relied on its earlier interpretation of this founding value in …
42 New Nation case para 74.
43 New Nation case para 75. However, see the discussion in section 3.7 below.
The court again found otherwise and held that the provisions mean no more than that it is the membership of members nominated by parties that is lost in this manner. That says nothing about the loss of membership of members who were not sponsored by parties. Nor, in the court's view, is it in any way indicative of their exclusion from membership. 45 Here is another example of the lack of depth of analysis due to the literalist approach, which, given the inherent ambiguity of word meanings, could frankly support both interpretations. One could just as plausibly argue that, had the Constitution contemplated independent candidacy, it would have explicitly dealt with the conditions for the loss of their membership also. This underlines the need for broader contextual analysis, especially with reference to the underlying constitutional values of the electoral system. In particular, the court's finding in this respect could be questioned in terms of its implications for a consistent and equal application of the principle of accountability to all categories of members of legislative bodies. Should the Constitution be interpreted to allow independent candidates, then the lack of a constitutionally prescribed functional equivalent to sections 47(3)(c) and 106(3)(c) applicable to them would result in imputing to the Constitution a serious voter accountability deficit compared to members nominated by parties. The generic constitutional conditions for the loss of membership of the legislatures 46 do not suffice to fill this voter accountability gap. An example of a constitutional arrangement for eventualities such as these, in a system that expressly caters for independent candidacy, is to be found in section 83(h) of the Constitution of the Republic of Uganda, 1995. This section specifically provides for a member of Parliament to vacate his or her seat if, having been elected as an independent candidate, that person joins a political party. On the court's reading, it would be constitutionally acceptable to allow members elected as independent candidates, but who decide during their tenure to join a political party, to retain their seats, whereas party-nominated candidates who change their party affiliation cannot.

Sections 46(1)(d) and 105(1)(d): the electoral system must in general result in proportional representation

These sections respectively provide that the National Assembly and provincial legislatures consist of women and men elected in terms of an electoral system that "results, in general, in proportional representation". The respondents argued that this implies an exclusive party-proportional representation system. The court disagreed again. It correctly held that proportionality does not equal exclusive party-proportional representation. Proportional representation is not incompatible with independent candidate representation. 47 According to the court, these sections do not refer to party-proportional representation, let alone exclusive party-proportional representation. The sections only require that elections result, in general, in proportional representation, whoever the participants may be. 48 The constitutional provisions regarding ward representation for local government elections also show that proportional representation is not incompatible with independent candidates. 49
Section 157(2)(a): provisions for local government elections

This section provides that the electoral system for municipal elections must be either one of proportional representation, based exclusively on the election of candidates from closed party lists, or one of proportional representation that combines ward representation and party representation. On the strength of the first option, which patently disqualifies independent candidates, the respondents contended that section 19 could not be interpreted to render the Electoral Act unconstitutional. It would be illogical to argue that a system that provides for exclusive party representation is unconstitutional under section 19. The court, again, was not convinced that this is so. It held that the provisions of section 157(2)(a) do not contradict sections 18 and 19 of the Constitution. Here, obviously, the court could not fall back on ostensibly "natural" word meanings to resolve the contradiction between the sections. In the exercise of redefining an apparent contradiction as an exception, the court resorted to historical contextualisation in aid of its view. The court considered it significant that - without explaining how - constitutional negotiations in respect of municipalities were conducted separately from the rest of the negotiation process. 51 It also pointed to the history of race-based spatial separation and the concomitant inequality of services and living conditions. 52 From this it concluded that the framers of the Constitution must have seen fit to make an exception in the case of local government elections and thus sanction the option of party-nominated candidates only within a proportional representative system. However, the court did not in any explicit way make clear how the electoral options mentioned in section 157 of the Constitution relate to the unique position of local governments, or how their position differs substantially from the historical legacies also to be found at the provincial and national levels of government. The court appears to have implied that the Constitution provided for the option of exclusive party representation for local government to avoid the perpetuation of the legacy of spatial apartheid that could flow from geographical ward representation. If this is the motivation for section 157(2)(a) of the Constitution, then it is difficult to comprehend why as much as fifty per cent of ward representation was implemented country-wide for local government elections. 53 This has the potential to accentuate racial divisions in the geographic distribution of voters and even skew the over-all proportionality of elections - should a significant number of ward representatives be made up of non-party-aligned candidates - for example, where a party narrowly loses in a substantial number of wards won on the first-past-the-post system by independents. 54 More importantly for the point in question, however, there is no indication in the judgment as to how the historical considerations canvassed by the court actually relate to the question of whether the Constitution contemplates specifically independent candidacy for provincial and national elections. If independent candidacy is linked to some form of constituency-based representation, then the same spectre of the legacy of spatial apartheid in municipalities will also be present at the national and provincial levels.
If, on the other hand, independent candidacy is not linked to geographical constituencies, then the history of spatial apartheid has no obvious relevance for the question of whether independent candidacy is constitutionally mandated or not. In this context, the respondents' reading of the Constitution therefore seems the more plausible one.

Schedule 6: transitional provisions

The respondents also relied on the transitional provisions in schedule 6 of the Constitution, which provide for the exclusive party-based proportional representation system of the Interim Constitution to remain in place temporarily. However, the court pointed out that this was to apply only until the first election after the coming into force of the Constitution. Once this moment had passed, this provision on its own could therefore not be invoked as a basis for arguing in favour of the perpetuation of an exclusive party-based system. 55 The Court seems to have erroneously assumed that since the exclusive party-based system was contained in transitional provisions, it meant that this system had to be discarded after the 1999 elections. 56 Neither schedule 6 nor sections 46(2) and 105(2) of the Constitution are prescriptive about the electoral system that was to be adopted after the 1999 elections, 57 which means that the retention of the existing exclusive party-based system remained a constitutionally endorsed possibility. The 2003 Electoral Laws Amendment Act then simply implemented the option left open by these constitutional provisions. What is more, as I will show next, constitutional provisions that hang closely together with this system have also been preserved unchanged. After all, if the intention had been to bring about systemic electoral modifications after 1999, then why change nothing regarding the compositional and operational constitutional arrangements pertaining to the legislatures that were devised with an exclusive party-based system in mind?

Sections 57(2), 178(1)(h), 193(4), 193(5) and 236: provisions regarding the institutionalisation of political parties in the composition and functioning of the National Assembly

The respondents also relied on the way that the Constitution institutionalises political parties in the composition and functioning of the National Assembly and its committees. In particular, they referred to the following provisions. First, the rules and orders of the National Assembly must make provision for the participation of minority parties in its proceedings and those of its committees; provide for the financial and administrative assistance of all parties represented in the National Assembly; and make provision for the recognition of the leader of the largest opposition party in the National Assembly as the Leader of the Opposition. 58 Secondly, the Constitution reserves the National Assembly's participation in making core appointments for political parties only. The Judicial Service Commission must consist of six people designated by the National Assembly from amongst its members, at least three of whom must be members of opposition parties. 59 The Public Protector and the members of the other state institutions supporting constitutional democracy must, in turn, be recommended by a committee of the Assembly proportionally composed of members of all political parties represented in the Assembly. 60 Thirdly, to enhance multiparty democracy, national legislation must provide for the funding of political parties that participate in the national and provincial legislatures. 61
The substance of the respondents' argument is that since the Constitution provides for funding of and participation in the above-mentioned processes by parties only, it was never intended for independents to be elected to these bodies. If the intention were otherwise, independents would have been included in these arrangements. 62 There is another important constitutional provision related to the composition and functioning of the legislatures that adds strength to this conclusion, which was, however, not canvassed by the parties or addressed by any of the judgments. Provincial delegates to the National Council of Provinces are drawn exclusively from political party nominations. 63 This would be anomalous if it was constitutionally envisaged that independent candidates could stand for election to provincial legislatures. This argument also failed to impress the court. The court held, somewhat incongruously, that the reason that the Constitution refers only to political parties in the relevant operational arrangements of the National Assembly is because of the founding provision endorsing multi-party democracy. 64 In the court's view, the particular focus on political parties in these provisions seeks to strengthen multi-party democracy but does not negate the possibility of the participation of independents in the National Assembly and provincial legislatures. 65 If this is so, then the question remains: why would the Constitution discriminate in such a deliberate manner to privilege only some categories of representatives (and, by extension, only some categories of voters), if it was the case all along that not only political party nominees could be represented in the legislatures? If independent candidates could be elected, would these constitutional provisions then not conflict with the founding democratic principles of equality and fairness and detract from the representative, legislative and oversight powers and obligations of the non-party-aligned categories of representatives - thus also compromising the founding value of democratic accountability? 66 Independent candidates would be either expressly excluded from taking part in the parliamentary processes mentioned above or - to the extent that they could be included - exposed to the whims of party representatives. In so far as the court's interpretation has these implications, it negates the much-vaunted interpretive principle of maintaining the normative coherence of the Constitution. 67 It would mean that multi-party democracy is enhanced at the cost of the foundational democratic values of the Constitution. It is difficult therefore to escape the inference that these operational provisions of the Constitution were initially designed, and retained after 1999, with an exclusive party-representative system in mind.

Justification

None of the respondents submitted evidence in justification of the limitation of the right to stand for public office. This is a pity, since the limitation analysis would have offered the framework for arguing the broader democratic implications of the opposing views: section 36 of the Constitution requires a consideration of the reasonableness and justifiability of the limitation of rights in an open and democratic society, based on human dignity, equality and freedom.
Although the lack of evidence on justification does not exempt the court from the obligation to conduct the justification analysis, 68 Madlanga J merely concluded, without further elaboration: "I can conceive of no reason to hold that the limitation is justified." 69

Second majority judgment

Jafta J delivered a separate judgment in which he agreed with the outcome reached in the first judgment but chose to underscore further the importance of section 19(3) of the Constitution. 70 He stressed that the right to stand for public office, which is closely linked to the right to vote, is unequivocally afforded to all individual adult citizens. In his view, it would be a clear violation of the individual nature of the right if citizens were to be compelled to exercise this right through the medium of political parties only. 71 The first majority judgment also raised this point, 72 and it therefore needs no further elaboration. Apart from also rejecting the respondents' arguments regarding section 157(2) of the Constitution, Jafta J did not address the other submissions considered in the first majority judgment.

Minority judgment

Froneman J opened his judgment by taking issue with the majority's literalist approach. To him, the right to stand for public office and, if elected, to hold office does not have an uncontested, pre-given meaning that can be determined without having regard to the constitutional context. 73 The content of this right is not to be determined notionally, 74 but contextually by considering the foundational values and the constitutional norms governing the electoral system. 75 Given the accepted interpretive principle of the "inner unity" of the Constitution, the right to stand for public office must not be interpreted on its own, but with reference to the Constitution as a whole. 76 He therefore faulted the majority judgments for not having proper regard to the constitutionally required electoral and parliamentary framework within which this right must be exercised. 77 In pursuing this route, Froneman J commenced with the foundational features of the democratic system. The democratic framework established by the Constitution allows for representative, participatory and direct democracy. In his view, representative electoral government requires a multi-party system, which, "in ordinary parlance and understanding, constitutional detail and the Court's jurisprudence", has political parties at its core. 78 The Constitution is devoid of indications that any other grouping than political parties is included under this term. 79 He found support for this proposition in section 236 of the Constitution, which makes provision for the funding of political parties only in order to enhance multi-party democracy, as well as in the Court's dictum on the central role of political parties in facilitating the exercise of political rights in our multi-party democratic system. Froneman J distinguished participatory democracy as a further foundational feature of the democratic system. 81 The democratic government contemplated by the Constitution is one that is accountable, responsive and transparent, and makes provision for public participation by way of public access to and involvement in the legislative and other processes at national, provincial and local government levels. 82 While contestation among multiple political parties is an essential feature of the system of elected democratic government, the Constitution's vision of democracy is complemented by additional forms of participatory democracy.
Apart from envisioning democracy as being participatory in relation to representative government, the Constitution also makes provision for direct democracy. He believed this to be "a counterweight to the importance of political parties in a representative democracy", because it provides an alternative for those individuals and groups whose interests are neglected by political parties, or who find it difficult to make use of the possibilities for participation. 83 Direct forms of participatory democracy are found in the right of freedom of assembly, demonstration, picket and petition, and in the constitutional provisions that provide for the calling of national and provincial referendums. 84 In Froneman J's opinion, the constitutional recognition of different forms of democracy dispels the allegation that the choice not to form or join a political party under section 19(1) of the Constitution has the consequence of rendering the prohibition of independent candidates constitutionally defective. He contended that those who do not wish to participate through the party-political process are not deprived of their democratic political voice. 85 The consequence of that choice is that democracy may be pursued directly, by the use of the right to assembly, demonstration, picket and petition, or by calling for a referendum. 86 The choice to champion a cause rather than a political party, thus, still remains and may be pursued by other constitutionally protected democratic means. 87 Froneman J argued that this also explains why the appellants' attempt to seek support from the right not to associate is inapposite. No one is compelled to form or join a political party, but should they decide not to, then the Constitution itself limits their political participation to the direct democratic means at their disposal. 88 (New Nation case para 218.) These arguments regarding the Constitution's menu of different forms of democratic participation are unpersuasive and do not - on their own - provide a sufficient constitutional basis for the premise that both representative and direct democratic participatory rights are reserved for party-affiliated members of the legislatures, not for independent members. 89 (Also see Wolf 2021 SALJ 18.) Nevertheless, based on his understanding of the relevant constitutional values and electoral norms, Froneman J concluded that the right to stand and hold elective office in terms of section 19(3)(b) of the Constitution is an individual right to represent the people in a multi-party system through the medium of political parties that results, in general, in proportional representation. 90 (New Nation case para 208.) Froneman J also addressed the implications of the constitutional endorsement of proportional representation. He observed that the choice of proportional representation at the provincial and national levels amounts to the prioritisation of equality above accountability. 91 (See New Nation case para 221: "The entrenchment of proportional representation, and its achievement through the vehicle of political parties, flows from the prioritisation of equality in political voice (every vote counts equally) over the accountability that might be better secured through a constituency-based system or a mixed system.") He stated that accountability might be better secured through a constituency-based system or a mixed system, and that at local government level the option of ward representation therefore points to the prioritisation of accountability over equality. 92 This reasoning seems questionable. The Constitution requires that the electoral system at all levels of government must in general result in proportional representation. 93
If his linking of proportional representation with a prioritisation of equality over accountability is correct, then it is puzzling to assume, as he does, that at the municipal level the Constitution has prioritised accountability over equality, since, as noted above, both electoral options for local government must be proportional in result. 94 Equality is therefore endorsed in the same way regarding the over-all electoral result. Secondly, forms of accountability that are functionally equivalent to constituency-based representation are also to be found in exclusive party-list systems, such as open-list systems, where voter preference regarding candidate selection is accommodated. Moreover, it is not immediately apparent what the alleged relative prioritisation of equality and accountability at the different levels of government implies for the specific question of whether the legalisation of independent candidacy is constitutionally mandated or not. Froneman J appears to have assumed that independent candidacy is only possible on a constituency basis, 95 and since the Constitution expressly caters for constituency (ward) representation at the municipal level only, it is accordingly not permitted at the provincial and national levels. Proportional representation is, however, not incompatible with constituencies, as is the case, for example, in multi-member constituency electoral systems. 96 Such proportional representation systems are common around the world, and should the Electoral Act be amended to make provision for one, it would be constitutionally compliant. This cannot therefore be an argument against independent candidates. What is more, although most elections under proportional representation systems are conducted exclusively with candidates who belong to a political party, this is not necessarily the case. For instance, because of its candidate-centredness, independent candidates are a common occurrence under the single transferable vote proportional representation system (as in the Republic of Ireland). 97 Independent candidates could simply be treated as a one-person party, presenting a list with only one name on it, and gain a seat if they receive the required electoral votes. 98 Therefore, no conceptual contradiction between proportional representation, whether constituency-based or not, and independent candidacy exists. Notwithstanding the caveats expressed above, Froneman J's general interpretive approach, to embed the right to stand for public office in the overall democratic, electoral and parliamentary framework of the Constitution, cannot be faulted. As argued earlier, such an analysis makes it difficult to avoid the conclusion that the initial institutionalisation of political parties in the functioning of the electoral and parliamentary systems by the interim Constitution was retained by the Constitution after 1999. It would be challenging to give effect to the court's order without a simultaneous amendment of these provisions, if the internal contradictions mentioned above are to be avoided. 99 (The "integrated roadmap" for the amendment process of the Electoral Act, presented at a joint sitting of the Portfolio Committee on Home Affairs and the Select Committee on Security and Justice on 18 August 2020, proposed four different scenarios, none of which anticipates constitutional amendments. However, in his response to the roadmap, the Minister of Home Affairs, Aaron Motsoaledi, advised that it would not be possible to implement the legislative process without constitutional amendments.)
The correctness of the outcome of the case in terms of the current state of the law aside, the broader normative question remains whether, from a purely constitutional reformist point of view, constitutional space should be made for independent candidacy. One can sympathise with the democratic sentiment expressed in the first majority judgment that the right to stand for public office must be interpreted generously to make the scope of electoral participation and choice as wide as possible. 100 Self-evidently, independent candidacy can enhance democratic participation and inclusivity and is practised in many countries of the world. However, experience at home and elsewhere reveals a general picture of independent candidacy as only modestly politically impactful and frequently ambivalent in terms of its democratic functionality, particularly when measured against the democratic values of transparency and accountability.

Increased inclusivity and participation

South Africa's current democratic malaise speaks strongly in favour of legalising independent candidacy as a legitimate option for non-party-aligned voters. Findings from a recent Afrobarometer survey confirm the very low levels of public trust in most of South Africa's public institutions. 101 Independent candidacy can, among other things, attract protest votes (those who do not wish to vote for rival parties). 108 (For an example of the latter, see Ehin and Solvak 2012 Journal of Elections, Public Opinion and Parties 269-291; also see Eisner 1993 U Pa L Rev 973-1027, arguing that independents and small parties can be an important voice for change and conduits for the expression of discontent with the major parties.) In addition, if a party candidate under the current system in South Africa can win a parliamentary seat with only a tiny fraction of the vote (0.18 per cent or about 30 000 votes in the 2019 elections; Feltham 2020 https://mg.co.za/politics/2020-08-18-reforming-a-broken-system-canelectoral-act-amendments-revive-faith-in-sas-democracy/), 109 there does not seem to be good reason to deny independents the same opportunity. However, expectations should be tempered by the modest effect independent candidates generally have on governance. As a rule, independents and groups of non-affiliated candidates have such limited practical prospects of competing successfully in regional and national 110 elections that their role in modern democracies remains marginal. 111 They do not enjoy the electoral benefit of straight-ticket voting by being associated with established parties, or a party's significant organisational and financial support. 112 Add to this independents' limited access to free broadcast time in public media and the fact that very few jurisdictions make provision for independents to access state financial support in advance for their election campaigns. 113 The Political Party Funding Act 6 of 2018 restricts state electoral financial support to political parties represented in the national and provincial legislatures. 114 The same applies to the Multi-Party Democracy Fund established by the Act, which is mandated to raise and distribute donated funds from corporate and private donors. 115 Statistics on independent candidates' voter support reflect the difficulties they experience in competing successfully in regional and national elections.
A comprehensive transnational study covering 34 countries revealed that whilst independents comprised seven per cent of the candidates that competed for office, they won approximately two per cent of the vote and only one per cent of the seats. 116 Similar results appear from a 2013 study commissioned by the European Parliament's Committee on Constitutional Affairs, 117 which covered national elections in the 18 European Union member states that allow independent candidates to contest national elections. In most elections in which they competed, independent candidates attracted only marginal voter support, with an average share of the vote of under two per cent. Since most of these countries also apply threshold requirements for election, the number of independents actually winning seats is even more modest: only 36 out of 1368 participating independents over the course of two successive national elections in their particular countries. 118 Although South Africa after 1994 does not have similar statistics for provincial and national elections, the success rate of independents who stood for election as municipal ward councillors in 2016 points in the same direction, with an overall representation of independents in councils across the country of less than one per cent. 119 In the 2021 municipal elections, 1546 independent candidates drew 1.75 per cent of voter support and won 51 seats. 120 Once elected, independents generally have no sustained political impact on governance. Being independent comes at the price of forgoing factional strength and cohesion, which means that - except as a minor coalition participant - independents cannot form a governing majority to dictate legislative policy. Although it is not unheard of, the notion of independents joining party caucuses seems hard to reconcile with standing as an independent in the first place. Yet, aligning with particular parties is often the only plausible way to realise any of their manifesto promises. Independents can sponsor private member bills, but these receive little discussion time and are rarely enacted. 121 Moreover, instruments such as interpellations remain a prerogative of parliamentary caucuses in many countries. 123

Transparency

From the perspective of predictability of policy orientation, there is something inherently perplexing about the notion of being "independent". Seeking election as an independent means no more than that a candidate is not affiliated to a political party. 124 However, nominal independence of this kind does not imply a substantive absence of ideological partisanship or even strong non-party political group loyalties. 125 The ease with which the label of "independence" can obscure such partisanships makes evident that the choice to exercise one's political commitments free of the organisational control and discipline of a party could come with clear transparency deficiencies. Being open about strong ideological partisanship and non-party group alliances always runs the risk of contradicting candidates' claim to independence. Parties develop the transparency of their policy platforms over the course of regular policy conferences where important issues are publicly deliberated and decided upon.
They also have relatively easy access to the media for issuing statements and holding press conferences through designated spokespersons and media liaison offices to articulate party positions. As Brancati 126 notes, in developing their political agendas, independents frequently do not formulate full political programmes, and their manifestos are thin on detail. Independents also commonly stand on single issues. Adding to the lack of clarity about the ideological commitments of independents is the fact that the dividing line between independents and party-endorsed candidates is often blurred. Independents sometimes organise themselves into larger "non-partisan associations", which many recognise as de facto political parties in all but name. 127 Independents also form electoral alliances with parties and, in some countries, party lists even include non-party members who claim to be independent. 128 Party members who have failed to secure a favourable position on party lists also often stand as independents. All of this makes it difficult to predict an independent's position on many important political matters beforehand with any measure of certainty.

Accountability

Enhanced accountability is seen as one of independent candidacy's strongest selling points. One can distinguish between periodic and interim, as well as individual and collective, accountability. 129 "Periodic accountability" refers to voters holding parties or individual representatives to account through their vote at elections, whereas "interim accountability" concerns the possibility of exercising some control over the conduct of parties or individual representatives between elections. "Collective accountability" is about holding parties to account, while "individual accountability" concerns voters having effective means of expressing their approval or disapproval of individual representatives. In the case of independent candidacy, only individual/periodic and individual/interim accountability is at stake. Unlike party candidates under closed-list proportional systems, independents are not party nominees and could potentially have a closer individual relationship with constituency voters (if there are constituencies). Voters can withdraw support in a next election if a representative does not perform according to expectations. However, this accountability may be more apparent than real. If, as argued above, independents generally have a negligible impact on the policies adopted in national or regional representative bodies, there may be little point in holding independent candidates accountable for failure to fulfil promises they are not really able to keep. If the odds are stacked so heavily against them being able to substantially influence events in legislative bodies and carry out their manifestoes, it may be unreasonable to expect them to build up a record that could be used as a basis for holding them to account at the next election. Often, their only concrete accomplishments would be off the back of political parties. Only in unique circumstances would that accomplishment be something that the political parties would not have been able to do on their own, for instance in those rare cases where an independent held the balance of power. What mechanisms are in place to hold independents accountable in-between elections? It is here that independent candidacy has some obvious drawbacks.
If party-nominated members fail to live up to reasonable standards of conduct, they could be ousted from their parties in terms of the established disciplinary procedures. 130 (Of course, this assumes a functional internal party disciplinary system, which is often compromised by a party-political culture strong on internal discipline and the maintenance of the perception of party solidarity.) In the absence of such organisational control, there is no functional equivalent to the loss of membership to regulate the behaviour of representatives elected as independents. This raises the question whether there is much substance to the claim that independents are more "directly accountable to voters" if there is no organisational process for making this true. However, as mentioned earlier, a well-regulated right of recall could be a means of securing the accountability of elected representatives in-between elections. Since this would be the case for both independent and party-nominated elected representatives, it would not be the result of any inherent accountability advantage of independent candidacy as such.

Conclusion

Although one may disagree with the result reached in the New Nation case, the reality is that independents will in future stand as candidates in national and provincial elections. Implementing the court's decision without concomitant constitutional amendments may cause disharmony between the right to stand for public office and current constitutional arrangements that were designed with an exclusive party-representative system in mind. The latter mainly concern aspects of the electoral system and provisions regarding the composition and functioning of the legislatures. In particular, as things stand, independent representatives could be excluded from participating in some of the legislatures' committees and from involvement in making key appointments. In addition, no independent member of a provincial legislature would be eligible to be appointed as a delegate to the National Council of Provinces (unless nominated by a political party). This result would seriously detract from independent representatives' legislative and oversight functions. In any event, although the legalisation of independent candidacy may enhance electoral inclusivity, the actual political impact of independent representatives would likely be marginal.
Is coronavirus infection a risk factor for hematuria in secondary bladder amyloidosis? The first case report

Urogenital amyloidosis is a rare disease that can involve any site of the urogenital system. Bladder involvement presents with gross hematuria, and any intrinsic or extrinsic stress can exacerbate the hematuria. We report a case of secondary bladder amyloidosis that presented with gross hematuria without any risk factor except COVID-19 infection.

Introduction

Amyloidosis is a disease characterized by the extracellular deposition of amyloid material in various organs. It is categorized as localized or systemic based on location, and as primary or secondary based on its pathology. 1 Genitourinary amyloidosis, especially primary amyloidosis, is a rare condition. Its location in the genitourinary system is variable and can occur anywhere: kidneys, pelvis, ureters, bladder, urethra, or even penis. 2 Renal involvement usually leads to renal failure, and lower urinary tract involvement presents with gross hematuria (GH) with or without irritative voiding symptoms; any underlying factor can exacerbate the symptoms. In such a situation, cystoscopy, biopsy, and histopathological tests are necessary to confirm the diagnosis. 2 For the first time, we report a COVID-19-positive patient with a history of rheumatoid arthritis (RA) and proven renal amyloidosis who developed spontaneous hematuria without any underlying risk factors except COVID-19 infection; a cold cup biopsy of the bladder revealed the amyloidosis.

Case presentation

We document the case of a 68-year-old woman, a known case of RA for 15 years and of amyloidosis with kidney involvement (confirmed by renal biopsy) for 6 years, who developed dyspnea and fever 10 days before admission. The patient had been receiving immunosuppressive drugs (methotrexate, prednisolone, etanercept) for more than 10 years. Ultrasonography of the kidneys, ureters, and bladder revealed kidneys of normal size without hydroureteronephrosis, but with bilaterally increased cortical echogenicity and decreased corticomedullary differentiation; abdominopelvic CT was normal. During hospitalization, the patient developed GH that was not stopped by conservative management (bladder irrigation, platelet, and packed cell transfusion). Clot passage developed in addition, so she was referred to the urology department and underwent cystoscopy, which revealed a large organized blood clot in the bladder. The clots were irrigated out. A large diverticulum without any malignant-appearing portion was seen in the right lateral wall, and a 2 cm linear ulcerated area with hyperemia, without active bleeding, was seen in the posterior bladder wall (Fig. 1). A cold cup biopsy was performed. After cystoscopy, her hematuria was stopped with bladder irrigation alone, but after 2 days the patient developed GH again; because of her unstable condition and respiratory problems, the patient was not a candidate for operation, and bladder irrigation was the only intervention performed. Unfortunately, the patient died of cardiac arrest due to respiratory failure. Her histopathological findings were consistent with amyloidosis and were confirmed using special stains such as Congo red and Haematoxylin & Eosin (Fig. 2a-c).

Discussion

Primary isolated amyloidosis of the urinary bladder is a rare disease, with 160 cases reported by Malek et al. 1
In primary amyloidosis, there is no underlying pathology except multiple myeloma, and the cardiovascular, gastrointestinal, and respiratory systems are most affected, 1 whereas secondary amyloidosis is related to a chronic inflammatory disease, especially RA, and the genitourinary and lymphatic systems are most affected. 1 Renal involvement almost always occurs in secondary amyloidosis and causes renal insufficiency. The urinary bladder, by contrast, is involved in the primary type and presents with GH, which is due to amyloid deposition in the vessel walls and impairment of arteriolar contractility; any stress can therefore cause vessel rupture and massive hematuria, which can mimic the symptoms of urothelial carcinoma. 3 Secondary amyloidosis of the bladder is more common in females and is mainly secondary to rheumatoid arthritis, ankylosing spondylitis, and other long-standing chronic inflammatory conditions. Only a few reports have been published so far on vesical amyloidosis in patients with RA. However, 5 of 10 patients (50%) with vesical amyloidosis died of continuous massive hematuria, which induced disseminated intravascular coagulation and multiple organ failure, 4 and any additional stress increases the hematuria and the mortality rate. This was so in our case: the patient was COVID-19 positive, and after intubation, without any urological intervention, her hematuria (which had stopped after cystoscopy and biopsy) recurred. Multiple studies have revealed that COVID-19 infection causes hematomas (retroperitoneal, psoas), especially in patients who receive anticoagulant agents and have abnormal coagulation tests. 5 Our patient did not receive an anticoagulant agent, her coagulation tests were normal, and she did not have any risk factors for bleeding; nevertheless, she developed gross hematuria. We should therefore keep in mind that COVID-19 infection can increase the risk of hemorrhage. Diagnosis of amyloidosis is based on radiologic (contrast-enhanced CT scan) and cystoscopic evaluation, but bladder amyloidosis can mimic bladder cancer radiologically and clinically, 2 so pathologic confirmation is needed to prove the diagnosis; apple-green birefringence under the polarized microscope is characteristic of it. 2 Management of bladder amyloidosis is based on controlling hematuria. A wide range of surgical interventions, including coagulation, temporary urinary diversion, and partial and total cystectomy, as well as nonsurgical measures such as bladder irrigation and dimethyl sulfoxide instillation, have been introduced, 1,2 and some cases of spontaneous regression have been reported. Management of bladder amyloidosis should therefore be individualized based on the severity of symptoms, the location and size of the lesion, and the type of amyloidosis. 3 In conclusion, this case highlights the importance for urologists of appreciating alternative diagnoses in patients with RA who have had GH. Nowadays, COVID-19 is a risk factor that exacerbates hematuria in these patients.

Ethics approval and consent to participate

The ethical committee of the Shaheed Beheshti University of Medical Sciences approved this study and permitted us to review the patients' medical data.

Consent for publication

Verbal consent was obtained from the patient's husband.

Availability of data and material

None.

Funding

None.

AR: Conception and design; critical revision of the manuscript for important intellectual content; administrative, technical or material support; supervision. All authors have read and approved the manuscript.

Declaration of competing interest

None.
Effects of Various Polishing Techniques on the Surface Characteristics of the Ti-6Al-4V Alloy and on Bacterial Adhesion

Ti-6Al-4V, although widely used in dental materials, causes peri-implant inflammation due to the long-term accumulation of bacteria around the implant, resulting in bone loss and eventual failure of the implant. This study aims to overcome the problem of dental implant infection by analyzing the influence of Ti-6Al-4V surface characteristics on the quantity of accumulated bacteria. Ti-6Al-4V specimens, each with a different surface roughness, are produced by mechanical, chemical, and electrolytic polishing. The surface roughness, surface contact angle, surface oxygen content, and surface structure were measured via atomic force microscopy (AFM), laser scanning confocal microscopy (LSCM), drop shape analysis (using the sessile drop method), X-ray photoelectron spectroscopy (XPS), and X-ray diffraction (XRD). The micro and macro surface roughness are 10.33–120.05 nm and 0.68–2.34 µm, respectively. The surface X-direction and Y-direction contact angles are 21.38°–96.44° and 18.37°–92.72°, respectively. The surface oxygen content is 47.36–59.89 at.%. The number of colonies and the optical density (OD) are 7.87 × 10⁶–17.73 × 10⁶ CFU/mL and 0.189–0.245, respectively. Bacterial inhibition was most effective with the electrolytic polishing of Ti-6Al-4V. The electrolytically polished Ti-6Al-4V exhibited the best surface characteristics: a surface roughness of 10 nm, a surface contact angle of 92°, and a surface oxygen content of 54 at.%. This identifies the best surface treatment for Ti-6Al-4V dental implants.

Strains of Streptococcus are called "early colonizers" because they take part in the formation of the early attachment stage of biofilm. Streptococcus strains can produce large amounts of extracellular polysaccharides when supplied with sucrose, which strengthen the mechanical properties and adhesiveness of the biofilm and cause biological complications [8]. Biofilms influence the inflammatory response and bone destruction [9]. Therefore, it is necessary to decrease early bacterial attachment to prevent biological complications. Infections arising as complications of dental implants are often treated with antibiotics or drugs. However, antibiotic resistance is reaching 71.7% [10]. Therefore, it is necessary to study how the surface characteristics of the implant metal inhibit or promote bacterial adhesion. In past research, the optical density (OD) [11] has often been used to measure biofilms, and the colony-forming unit (CFU) count [12] has been used to estimate the number of accumulated bacteria. The factors governing the inhibition or adhesion of bacteria are mainly the surface characteristics of the implant, including surface roughness [13-16], contact angle [17,18], and oxygen content [19-21]. Surface alloying [22,23], amorphous treatment [24,25], and modification of surface properties [13,17,19] are the main methods for changing the surface characteristics. Surface alloying includes spraying metal elements such as Cu [22] and Ag [23]; the Cu and Ag ions released from the surface of the implants pass through the bacterial cell wall into the cell membrane, stopping the metabolic growth of the bacteria and killing them, thereby achieving antibacterial activity [26,27].
Surface alloying can not only increase the antibacterial properties but can also improve the wear resistance and corrosion resistance of the material. However, there are still doubts about the biocompatibility of antibacterial metal elements within the human body, and excessive amounts may be toxic to human cells [28]. Amorphous treatment includes the use of organic natural fungicides, such as positively charged chitosan [24], and inorganic amorphous metals (such as iron, chromium, and nickel) [25]. The methods to modify the surface properties include mechanical, chemical, and electrolytic polishing [29-32]. Mechanical polishing grinds the surface of the material with granular media to reduce the surface roughness. This method can only be used on workpieces with less complex shapes; the surface roughness value can only reach approximately 0.3-0.6 µm, and it is difficult to obtain a mirror surface (approximately 0.13 µm) [29]. Chemical polishing mainly uses chemical liquids to dissolve the oxide layer. Chemical polishing can achieve a surface roughness of less than 1 µm and can also be used for complex surface shapes. However, it is difficult to control the polishing conditions, which hinders the attainment of a good polishing effect [31]. Electrolytic polishing can achieve a smooth surface with a roughness of less than 10 nm, and by improving the surface characteristics, antibacterial effects are attained. The disadvantage of the process is that the parameters are difficult to control; it is therefore difficult to find the exact parameter values suitable for the material [32]. In terms of surface characteristics, some researchers have proposed that the accumulation of bacteria is closely related to the interaction between the characteristics of the material surface, including roughness [13-16], the surface contact angle [33], and the surface oxygen content [19]. Studies have shown that a surface roughness of less than 0.2 µm can effectively reduce the accumulation of bacteria [14,15]. However, some researchers believe that when the surface roughness is less than 0.2 µm, there is no significant difference in the adhesion of bacteria [16]. Furthermore, the quantity of attached bacteria decreases as the contact angle increases [33]. However, other studies have suggested that bacteria adhere in different amounts depending on the surface topography: the adhesion of bacteria is related to the extracellular polymeric substances (EPS) produced by the bacteria, and the contact angle has only a small effect on the extent of bacterial adhesion [17]. With respect to the surface oxygen content, the surface oxygen elements and the thickness of the oxide layer also affect the adhesion of bacteria. When the surface structure contains oxygen and the thickness of the oxide layer is greater than 1.7-5 nm, bacteria easily accumulate to form biofilms [19]. Another study proposed that the active oxygen in the oxide layer can decompose the oxide film to achieve the effect of inhibiting the formation of bacteria [20,21]. In summary, the results of previous research on the influence of surface roughness, contact angle, and surface oxygen content on the quantity of accumulated bacteria are not consistent. Furthermore, some previous studies investigated only one or two of these factors.
Few studies have discussed the influence of all three of the aforementioned factors on the quantity of accumulated bacteria at the same time. In order to understand the degree of influence of the variation in these three surface factors on the quantity of accumulated bacteria, this paper applies different surface polishing methods to Ti-6Al-4V ELI (Extra Low Interstitials) to produce different surface characteristics and, through statistical analysis of the data by one-way analysis of variance (ANOVA), discusses the degree of influence of the surface roughness, contact angle, and surface oxygen content on adhesion by Streptococcus mutans.

Specimen Preparation

The specimens were obtained from a grade 5 titanium bar (Ti-6Al-4V ELI, ASTM F136, Titanium Industries Inc, Rockaway, NJ, USA) with Ø = 12 mm. The specimens were cut to a thickness of 1 mm using a computer numerical control (CNC) machine (SR 20J Type-C, RDMO Machine-tools, Contamine-sur-Arve, Auvergne-Rhône-Alpes, France) at 1000 rpm. The specimens cut by CNC without further surface treatment served as the control group. The experimental groups followed the surface treatment steps below. For mechanical polishing, 1000# and 1500# silicon carbide papers were used. For chemical polishing (CP), the specimens were immersed in a chemical solution of 5% hydrofluoric acid, 30% nitric acid, and 65% deionized water at 25 °C for 2 min. Electropolishing was conducted on the CNC-cut specimens, exposing an area of Ø = 12 mm × 1 mm, in a stirred electrolyte of 83% acetic acid, 22% perchloric acid, and 5% glycerol for 100, 200, and 300 s, respectively. Electropolishing was conducted at 25-30 V and approximately 0.5-1 A, and the distance between the anode and the cathode was 30 mm. The temperature of the electrolyte was controlled between −15 and −10 °C using a low-temperature circulator (CA-1111, EYELA, San Diego, CA, USA). In order to remove the residual reactants on the surface, the electrolyte was stirred at 300 rpm with a magnetic bar on a stirring hot plate (PC420D, CORNING, New York City, NY, USA). Finally, all the specimens were cleaned using an ultrasonic cleaner (O-LEO-801, Blossom, Kaohsiung, Taiwan) for 15 min with acetone, for 15 min with deionized water, then for 15 min with ethanol (99%), and finally dried for 15 min. All specimens were then kept in a vacuum. The specimens were divided into seven groups with different surface treatments. Table 1 shows the experimental specimens, marked A-F, and the control specimen G. Samples A-C were electropolished for 300, 200, and 100 s, respectively. Specimen D was chemically polished for 2 min. Samples E and F were mechanically polished using 1500# and 1000# silicon carbide papers, respectively. The control specimen G was obtained by cutting using a CNC machine. Three samples were analyzed in each test.

Surface Characterization

All the topographical features of the specimens of the present study were identified via scanning electron microscopy (SEM, JEOL JSM-6380, Tokyo, Japan) at 20 keV. Ti-6Al-4V specimens after the different polishing treatments were immersed in S. mutans suspension. Specimens were fixed in 2.5 vol.% glutaraldehyde at 4 °C for 2 h, and then dehydrated step-wise with a series of alcohol concentrations (50 vol.%, 60 vol.%, 70 vol.%, 80 vol.%, 90 vol.%, and 100 vol.%) before the SEM observation.
Subsequently, in order to improve their conductivity, a gold film was used to coat the specimen surfaces. The biofilm morphologies of the Ti-6Al-4V specimens after the different polishing treatments were evaluated using the SEM (JEOL JSM-6380, Tokyo, Japan). The micro surface roughness and topography were analyzed via atomic force microscopy (AFM, ARDIC P150, Taipei, Taiwan). The scan area for the surface roughness was 50 µm × 50 µm. In addition, the macro surface roughness was determined via laser scanning confocal microscopy (LSCM, VK-X250, Keyence, Osaka, Japan). The laser wavelength was 405 nm, and images were taken at 20× magnification and analyzed using Zen Blue software (2010, Keyence, Osaka, Japan). The values of surface roughness were expressed as the arithmetic mean deviation (Ra), and three specimens were measured for every group. Contact angle measurements were conducted using an FTA 1000 drop shape analysis system (First Ten Angstroms, Portsmouth, VA, USA) under 20 °C ambient conditions. For the sessile drop method, the present study used deionized water with a volume of 1 µL, which was dispensed with a micro syringe. The contact angle results were analyzed by recording 100 photos using the software FTA22 (Drop snake analysis, Portsmouth, VA, USA). The direction of the acquired image of the droplet profile on the differently polished surfaces, relative to the polishing direction, is shown in Figure 1: the X direction is perpendicular, and the Y direction is parallel. The contact angle data were measured three times to obtain the average and SD values.
In addition, the present study conducted phase analysis via X-ray diffraction (XRD). An X-ray diffractometer (D8 Advance, Bruker AXS GmbH, Karlsruhe, Germany) with a current of 40 mA and a voltage of 40 kV was used with Cu Kα radiation and run with a 2θ step size of 0.4. X-ray photoelectron spectroscopy (XPS, JEOL, JAMP-9500F, Peabody, MA, USA) was used to analyze the surface elements and electronic states, recorded using an Al Kα source at 150 W. The XPS analysis area was 8 mm × 9 mm. The binding energy (BE) scale refers to the C1s peak, calibrated at 285.0 eV.

Biological Analysis

Bacterial strains and growth conditions were as follows. Streptococcus mutans (S. mutans) ATCC 25175 was grown under microaerophilic conditions for 24 h at 37 °C on Brain Heart Infusion (BHI) agar plates (Sigma-Aldrich, St. Louis, MI, USA), supplemented with 3 g/L of yeast extract (Sigma-Aldrich, St. Louis, MI, USA) and 200 g/L of sucrose (Sigma, Winston, Oakville, ON, Canada). After incubation, approximately 10⁶ colony-forming units (CFU) of S. mutans cells were inoculated in BHI broth with 200 g/L of sucrose. All the specimens were placed into 24-well plates and immersed in a bacterial suspension containing 500 µL of S. mutans; each well was covered for 48 h at 37 °C. Thereafter, the specimens were washed twice with phosphate buffer solution (PBS, Sigma-Aldrich, St. Louis, MI, USA). The Ti-6Al-4V specimens were recovered from the 24-well plates after incubation in the S. mutans suspension. All the specimens were transferred to new plates for the evaluation of biomass by the crystal violet (CV) staining method, and the biofilm was measured using a spectrophotometer (Epoch, BioTek®, Vermont, CA, USA) to determine the optical density at 550 nm (OD550) in a microplate reader. Biofilm formation and analysis were conducted through the following steps: (1) The specimen was washed twice in PBS. (2) The washed specimen was placed in a 5 mL Eppendorf tube with 1 mL PBS, and the adherent bacteria were collected by vortexing (Genie 2, BERTEC, Taipei, Taiwan) for 1 min to obtain a bacterial suspension. (3) The bacterial suspension was serially diluted (10⁻³, 10⁻⁴, 10⁻⁵, 10⁻⁶, and 10⁻⁷). (4) The diluted solutions were used for repeated plate smearing and analysis of bacteriostatic effects. (5) The suspension was then diluted (up to a 10⁻⁶ dilution) in PBS and plated on BHI agar to quantify CFU/mL. These experiments were performed three times and conducted in three independent assays.
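To make the arithmetic behind step (5) concrete, a minimal sketch is given below. The plated volume and the colony count are hypothetical assumptions for illustration, since the text does not report the volume plated on each BHI plate.

```python
# Minimal sketch (not from the paper): converting a colony count from the
# serial-dilution plating in step (5) into CFU/mL.
# Assumption: 0.1 mL plated per BHI plate; the colony count is hypothetical.

def cfu_per_ml(colonies: int, dilution: float, plated_volume_ml: float = 0.1) -> float:
    """CFU/mL = colonies / (dilution factor x volume plated)."""
    return colonies / (dilution * plated_volume_ml)

# Example: 157 colonies counted on the 10^-4 dilution plate.
print(f"{cfu_per_ml(157, 1e-4):.3g}")  # 1.57e+07, i.e., 15.7 x 10^6 CFU/mL
```

A count in this range is consistent with the 7.87 × 10⁶–17.73 × 10⁶ CFU/mL reported in the abstract.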
Statistical Analysis
Statistical analyses were conducted using the statistical software SPSS 20.0 (IBM, Armonk, NY, USA). Data are reported as mean ± standard deviation, as shown in Table 2. In each experiment, triplicate specimens of each surface were used. Differences in the surface roughness results obtained via LSCM and AFM, the contact angle, the OD value, and the Streptococcus mutans colony counts (CFU/mL) were analyzed by one-way ANOVA, followed by a least significant difference (LSD) test. Differences with p < 0.05 were considered statistically significant.
Table 2. The values of the OD, CFU, roughness, contact angles, and oxygen content of the present alloys, specimens A-G.
Results
Results and their statistical significance are presented in Table 2. The polishing process for each specimen is listed in Table 1 (specimen G: raw material obtained by CNC cutting; specimen F: mechanically polished with #1000 SiC; specimen E: mechanically polished with #1500 SiC; specimen D: chemically polished for 2 min; specimen C: electropolished for 100 s; specimen B: electropolished for 200 s; specimen A: electropolished for 300 s). The microscopic and macroscopic surface roughness of the Ti-6Al-4V alloy after the various polishing processes, measured with AFM and LSCM, are in the ranges of 10-120 nm and approximately 0.5-2.4 µm, respectively. The surface contact angle lies between 15° and 100°, and the surface oxygen content lies between 47 at.% and 54 at.% after the different polishing procedures. Electropolished Ti-6Al-4V has the lowest surface roughness, the highest contact angle, and the highest oxygen atomic percentage, which leads to the smallest amount of Streptococcus mutans attaching to the surface; the present study thus shows that electropolished Ti-6Al-4V inhibits Streptococcus mutans most effectively.
The micro and macro surface roughness values are shown in Figure 2 and span 10.33-120.05 nm and 0.68-2.34 µm, respectively. The AFM micrographs, LSCM micrographs (in which the color scale represents surface height, so different colors indicate different heights), and SEM micrographs are shown in Figure 3a-c. The surfaces of specimens A, B, and C are smooth, showing little undulation in the AFM morphology, whereas macroscopic scratches and undulations are observed on the surfaces of the other specimens. Specimen D was chemically polished; its SEM image shows directional scratches together with the α-phase matrix and the β phase, and its AFM micrograph shows more undulation than those of specimens A, B, and C. Specimens E, F, and G show scratches with a consistent direction, and their AFM and LSCM micrographs reveal consistent morphologies.
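As a companion to the statistical workflow described in the Statistical Analysis subsection, here is a minimal sketch of a one-way ANOVA followed by LSD-style pairwise comparisons, using SciPy rather than SPSS. The group labels and measurements are invented placeholders, not values from Table 2, and the pairwise t-tests are a simple stand-in for a textbook LSD test (which pools the within-group variance across all groups).

```python
# Sketch of one-way ANOVA followed by LSD-style pairwise t-tests.
# Group values are hypothetical placeholders, not data from Table 2.
from itertools import combinations
from scipy import stats

groups = {
    "A": [10.2, 10.5, 10.3],   # e.g. micro roughness (nm), three specimens per group
    "D": [73.9, 74.2, 74.1],
    "G": [119.8, 120.3, 120.1],
}

f_stat, p_val = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4g}")

# Fisher's LSD amounts to uncorrected pairwise comparisons once the omnibus
# F test is significant; ttest_ind is a close, simpler approximation here.
if p_val < 0.05:
    for (n1, x1), (n2, x2) in combinations(groups.items(), 2):
        t, p = stats.ttest_ind(x1, x2)
        print(f"{n1} vs {n2}: t = {t:.2f}, p = {p:.4g}")
```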
Figure 4 shows the surface contact angles of specimens A-G. Among them, the contact angles of specimens A and B are the largest, while that of specimen G is the smallest. The contact angles of specimens A-G in the X direction (Figure 4a) and Y direction (Figure 4b) range over 21.38°-96.44° and 18.37°-92.72°, respectively, with the X-direction values exceeding the Y-direction values. Figure 5 shows the corresponding contact angle images in the X direction (Figure 5a) and Y direction (Figure 5b), which exhibit a consistent variation of contact angle. The differences between the parallel and perpendicular contact angles are 0.86°-7.74°.
Figure 6 shows that the smaller the surface roughness of the alloy, the larger the contact angle; surface roughness and contact angle exhibit a linear relationship with a negative correlation. The linear equation relating the X-direction contact angle to the microscopic surface roughness is y = −0.02x + 2.71 (slope −0.02), and to the macroscopic surface roughness it is y = −1.32x + 145.36 (slope −1.32). The linear equation relating the Y-direction contact angle to the microscopic surface roughness is y = −0.01x + 2.57 (slope −0.01), and to the macroscopic surface roughness it is y = −1.22x + 135.76 (slope −1.22). Surface roughness and contact angle are therefore negatively correlated: the contact angle increases as the surface roughness decreases. Electropolishing thus enhances hydrophobicity, and the larger contact angle is attributed to the reduced surface roughness.
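To illustrate how the roughness-contact-angle regressions above can be reproduced, here is a minimal least-squares sketch using NumPy. The seven (roughness, contact angle) pairs are hypothetical stand-ins for specimens A-G, not the measured values.

```python
# Least-squares fit of contact angle against surface roughness (one such fit
# per roughness scale and droplet direction in the paper). Values are hypothetical.
import numpy as np

macro_roughness_um = np.array([0.68, 0.75, 0.90, 1.20, 1.60, 2.00, 2.34])  # A..G
contact_angle_deg  = np.array([96.4, 92.0, 80.0, 62.0, 45.0, 30.0, 21.4])  # X direction

slope, intercept = np.polyfit(macro_roughness_um, contact_angle_deg, 1)
r = np.corrcoef(macro_roughness_um, contact_angle_deg)[0, 1]
print(f"y = {slope:.2f}x + {intercept:.2f}, r = {r:.3f}, r^2 = {r**2:.3f}")
# A negative slope reproduces the reported negative correlation between
# roughness and contact angle.
```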
Figure 7b shows the XRD data. In addition to the α-phase and β-phase diffraction peaks, α-TiO2 diffraction peaks can also be observed, showing that a titanium dioxide structure is present in the titanium alloy. The equiaxed α phase (α-Ti structure, marked with triangle symbols) and the island-like β phase (marked with circle symbols) are indicated in Figure 7b.
Figure 8b shows the O1s binding energy in each layer of the specimens and presents the O1s bond energy of Ti-6Al-4V after the different polishing treatments via the 532 eV energy analysis diagram. (The bond energy is the average energy required for each chemical bond when gaseous molecules are dissociated into gaseous atoms under standard conditions, or equivalently the orbital binding energy of the atom from which the electrons originate.) O1s shows a higher intensity at depths between 0.2 and 1.4 µm below the surface.
Figure 9a shows the distribution of the typical O1s bond energy around 532 eV when Ti-6Al-4V is scanned every 0.1 µm in depth after the different polishing treatments. At depths of approximately 0.2-1.4 µm, a high O1s peak can be observed, indicating that the oxygen content is high in this interval. Figure 9b shows the distribution of the O1s bond energy and oxygen content (the integrated value of the peak near 532 eV from Figure 8b) in the depth range of 1-1.8 µm from the alloy surface after the different polishing procedures. By integrating the peak area, the atomic percentage of the relative oxygen content after each polishing procedure can be obtained. The atomic percentage of oxygen is relatively high, approximately 45-60 at.%, at depths of approximately 0.2-1.4 µm. Specimens A, B, and C (electropolished) have 51.32-53.89 at.% oxygen, the highest oxygen content. Specimen D has 50.42 at.% oxygen, lower than the electropolished specimens, and specimens E and F have 47.49 at.% and 48.44 at.% oxygen, lower than the chemically polished specimen. Specimen G carries only a natural oxide layer (47.36 at.% oxygen) and has the lowest oxygen content. Therefore, the order of surface oxygen content by polishing process is electropolishing > chemical polishing > mechanical polishing.
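The relative oxygen content is derived by integrating the O1s peak near 532 eV. The sketch below shows one plausible way to do that numerically, with a trapezoidal rule and a simple linear background; the spectrum values are synthetic, and the background-subtraction choice is an assumption rather than the study's exact procedure.

```python
# Sketch: relative O1s peak area from an XPS scan via trapezoidal integration
# after subtracting a linear background. Spectrum values are synthetic.
import numpy as np
from scipy.integrate import trapezoid

binding_energy = np.linspace(526.0, 538.0, 121)                  # eV
o1s = 50 + 900 * np.exp(-((binding_energy - 532.0) ** 2) / 2.0)  # counts (synthetic)

# Linear background estimated from the endpoints of the fitting window.
bg = np.interp(binding_energy,
               [binding_energy[0], binding_energy[-1]],
               [o1s[0], o1s[-1]])
area = trapezoid(o1s - bg, binding_energy)
print(f"O1s peak area ~ {area:.0f} counts*eV")
# Dividing each element's sensitivity-corrected peak area by the sum over all
# elements yields atomic percentages like those reported in Figure 9b.
```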
Figure 10 shows the results of the bacterial culture analysis. Figure 10a presents the biofilm data, with OD values between 0.16 and 0.27, and Figure 10b presents the colony counts, with CFU values between 7 × 10^6 and 18 × 10^6.
Figure 11 shows SEM images of Streptococcus mutans on the Ti-6Al-4V surfaces after the different polishing treatments. A large amount of Streptococcus mutans covers the surfaces of Ti-6Al-4V after cutting without polishing (specimen G), mechanical polishing (specimens E and F), and chemical polishing (specimen D). In contrast, the amount of Streptococcus mutans is decreased on the electropolished surfaces (specimens A, B, and C), indicating that electropolished Ti-6Al-4V inhibits biofilm formation by Streptococcus mutans.
Figure 12 presents scatter diagrams relating the surface roughness, contact angle, and oxygen content to the quantity of accumulated bacteria; a linear relationship was observed in each case.
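Table 2-style summaries (mean ± SD over the three specimens per group) are straightforward to generate; below is a small illustrative sketch with invented OD readings rather than the study's measurements.

```python
# Sketch: mean +/- SD per specimen group, as tabulated in Table 2.
# The triplicate OD(550 nm) readings below are invented placeholders.
import statistics

od_readings = {
    "A": [0.16, 0.17, 0.16],
    "D": [0.22, 0.23, 0.21],
    "G": [0.27, 0.26, 0.27],
}

for group, values in od_readings.items():
    mean = statistics.mean(values)
    sd = statistics.stdev(values)  # sample standard deviation (n - 1)
    print(f"specimen {group}: OD = {mean:.3f} +/- {sd:.3f}")
```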
The results show that the micro surface roughness is positively correlated with the quantity of accumulated bacteria, as shown in Figure 12a. The linear equations relating the OD absorbance and the number of colonies to the micro surface roughness are y = 0.040x + 18.80 (slope 0.040) and y = 0.074x + 6.86 (slope 0.074), respectively, both positive correlations. The coefficients of determination are 0.83 and 0.83, meaning that surface roughness explains 83% of the variation in bacterial adhesion in each case, and p < 0.05 shows that this explanatory power is statistically significant. Figure 12b shows that the macro surface roughness is also positively correlated with the quantity of accumulated bacteria. The linear equations relating the OD absorbance and the number of colonies to the macro surface roughness are y = 2.559x + 17.52 (slope 2.559) and y = 4.56x + 4.58 (slope 4.56), again positive correlations, with coefficients of determination of 0.72 and 0.71, corresponding to explanatory powers of 72% and 71%.
Figure 12c shows a distribution diagram of the contact angle against the quantity of accumulated bacteria. The linear equations relating the OD absorption value and the number of colonies to the surface contact angle are y = −0.06x + 25.25 (slope −0.06) and y = −0.10x + 18.09 (slope −0.10), respectively, showing that the contact angle and the amount of bacterial adhesion are negatively correlated. The coefficients of determination are 0.93 and 0.85, indicating explanatory powers of 93% and 85%, respectively. Figure 12d shows a second distribution diagram of the contact angle against the quantity of accumulated bacteria. The linear equations relating the OD absorption value and the number of colonies to the contact angle are y = −0.06x + 24.72 (slope −0.06) and y = −0.96x + 17.20 (slope −0.96), respectively, again showing a negative correlation. The coefficients of determination are 0.87 and 0.80, indicating explanatory powers of 87% and 80%, respectively, and p < 0.05 shows that this explanatory power is statistically significant.
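The explanatory powers quoted above are coefficients of determination. A short sketch of computing slope, intercept, R², and p-value for one of these scatter relationships is given below, with invented placeholder data in place of the measured (contact angle, colony count) pairs.

```python
# Sketch: slope, intercept, R^2, and p-value for one Figure-12-style scatter
# relationship. The (contact angle, colony count) pairs are hypothetical.
from scipy import stats

contact_angle = [21.4, 30.0, 45.0, 62.0, 80.0, 92.0, 96.4]   # degrees
colonies_e6   = [17.8, 16.5, 13.9, 12.1, 10.0,  8.3,  7.2]   # CFU x 10^6

res = stats.linregress(contact_angle, colonies_e6)
print(f"y = {res.slope:.3f}x + {res.intercept:.2f}")
print(f"R^2 = {res.rvalue**2:.2f}, p = {res.pvalue:.4g}")
# R^2 is the 'explanatory power': the fraction of variance in bacterial
# adhesion accounted for by the surface property.
```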
The statistical analysis therefore shows that the surface contact angle is also an important factor affecting the quantity of accumulated bacteria. The distribution of the surface oxygen content against the extent of bacterial adhesion is shown in Figure 12e. The linear equations relating the OD absorbance and the number of colonies to the surface oxygen content are y = −0.64x + 53.88 (slope −0.64) and y = −1.31x + 77.65 (slope −1.31), respectively, showing a negative correlation between surface oxygen content and bacterial adhesion. The coefficients of determination are 0.88 and 0.85, indicating explanatory powers of 88% and 85%, respectively.
One-way ANOVA was performed, and surface roughness, surface contact angle, and surface oxygen content all had a substantial impact on the amount of bacterial adhesion; the statistical results are shown in Table 3. The F value is the ratio of the between-group variation to the within-group variation and represents the strength of the independent variable's influence on the dependent variable. Therefore, a larger F value with p < 0.05 means that the surface characteristic has a greater influence on the quantity of accumulated bacteria. The F values for the macroscopic surface roughness, microscopic surface roughness, surface contact angle, and surface oxygen content are 79.39, 177.23, 52.43, and 20.54, respectively. This indicates that the order of influence of the three variables on bacterial adhesion is surface roughness > surface contact angle > surface oxygen content.
Discussion
In this study, the specimens were electropolished in a mixture of acetic acid, perchloric acid, and glycerin. The surface reactants were rapidly removed by agitating the liquid, yielding a smoother surface. The original surface roughness of 2.34 µm/120.05 nm (specimen G: macro/micro) was reduced to 0.68 µm/10.33 nm (specimen A: macro/micro), a reduction of roughly 4-12 times. Urlea et al. [34] likewise used a mixture of acetic acid and perchloric acid to electropolish Ti-6Al-4V, reducing the surface roughness from approximately 3.93-22.68 µm to 1.28-2.52 µm, that is, by approximately 3-9 times. Previous studies have also described the current-voltage relationship of Ti-6Al-4V during electropolishing [35]: at low potentials of 0-14 V, a film forms on the anode surface and etching occurs as current passes, whereas polishing takes place at potentials above 16 V. In the present study, Ti-6Al-4V was electropolished in the acetic acid/perchloric acid mixture at 25-30 V and 0.5-1 A, which produced a good surface roughness. The difference from Urlea et al. [34] in the roughness-reduction ratio after electropolishing is attributed to the different initial surface roughness: the smaller the initial roughness, the smaller the value obtained after electropolishing [36].
In this study, the influence of surface roughness on bacterial accumulation was assessed by expressing the quantity of accumulated bacteria in terms of OD absorbance and colony counts. The results show that surface roughness significantly influences biofilm formation, following the same trend as several previous studies. When the surface roughness is below 10 nm, the quantity of accumulated bacteria increases with increasing roughness [13,17], and for roughness between 10 and 1200 nm the same trend as in this study is observed [37-39]. However, for roughness of approximately 1860-7890 nm, Taylor et al. [39] found that although the quantity of accumulated bacteria was still greater than on a smooth surface, there was no longer a significant difference in adherent bacteria with increasing roughness. This indicates that the surface roughness of the material needs to be controlled below approximately 1800 nm to ensure antibacterial properties. Some researchers have also noted that surface roughness is not always directly related to bacterial accumulation; the quantity of accumulated bacteria depends on the characteristics of the surface and the morphology of the bacteria [40]. Directional scratches are observed on the surface after mechanical polishing. Such scratches are readily occupied by early colonizers during the initial step of biofilm formation, because grooves protect bacteria against shear forces and favor bacterial adhesion [41]. According to Park et al., decreasing the surface roughness reduces the adhesion of early colonizers such as S. mutans and S. sobrinus, whereas late-colonizing Gram-negative anaerobes such as A. actinomycetemcomitans and P. gingivalis are not significantly affected by surface roughness, although their numbers decreased after a 4-day incubation [42]. The microstructure of Ti-6Al-4V after chemical polishing is an α + β bimodal equiaxed structure, and the surface shows distinct tiny protrusions. These protrusions are island-like β structures (Figure 7), which are surface features to which bacteria can easily adhere. The electropolished surface, by contrast, is smooth and free of macroscopic scratches, making bacterial adhesion difficult. The results show that surface scratches and the β island structure increase the surface roughness and thereby increase the quantity of accumulated bacteria; surface roughness is therefore positively correlated with the quantity of adherent bacteria. This study indicates that surface roughness has its most significant effect on the adhesion of early colonizers (S. mutans); late colonizers (A. actinomycetemcomitans and P. gingivalis) attach to early colonizers rather than adhering directly to tooth surfaces. Inhibiting early colonizers may therefore reduce late-colonizer adhesion and help prevent infectious complications of dental implants. In addition to surface roughness, several studies have reported that the surface contact angle affects the quantity of accumulated bacteria [13,17,18,40,43]. The present study found no significant difference between contact angles in the X and Y directions, suggesting that the directional scratches do not influence the contact angle value. A previous study, however, showed that anisotropic texture can affect the contact angle.
On surfaces textured with micro-grooves 100-300 µm wide and of constant 10-30 µm depth, the droplet becomes stretched and distorted relative to an un-textured surface [44]. The surface roughness variations in the present study are small (10.33-120.05 nm and 0.68-2.34 µm), so anisotropic texture does not significantly influence the contact angle here. Increasing the contact angle decreases bacterial adhesion, in agreement with several previous studies [13,17,18,43]. For surface contact angles between 9° and 80°, the quantity of accumulated S. epidermidis [13], E. coli [17], and Streptococcus [18] decreases as the contact angle increases, indicating that bacterial accumulation is negatively correlated with contact angle. Moreover, a surface contact angle greater than 50° is needed to inhibit bacterial adhesion, and a surface may be called hydrophobic when its contact angle exceeds 50° [43]. The results of this study indicate that the contact angle needs to be greater than 88° to achieve good antibacterial properties. The findings of Bohinc et al. [40], however, differ from those of this study over a narrower contact angle range (70°-95°): they found no significant change in bacterial accumulation with contact angle, because bacterial adhesion depends on the surface topography and is mediated by the extracellular polymeric substances (EPS) produced by the bacteria. EPS, however, cannot adhere to highly hydrophobic surfaces. The results of this study therefore show that the surface contact angle is negatively correlated with the quantity of accumulated bacteria [17], indicating that electropolishing yields a smooth surface on which Streptococcus mutans struggles to attach and form a biofilm. By contrast, extensive bacterial coverage is seen on Ti-6Al-4V after cutting without polishing (specimen G) and after mechanical polishing (specimens E and F): Streptococcus mutans can adhere along the tool marks and the consistent scratches left by SiC-paper polishing, and the bacteria readily gather in these scratches. According to Grivet et al. [45], hydrophobic bacteria, including S. mutans, S. oralis, and S. sanguinis, attach strongly to hydrophobic surfaces; hydrophobic surfaces thus favor the adhesion of hydrophobic bacteria. According to Kang et al. [8], however, Streptococcus mitis adheres more to a more hydrophilic surface. The present study suggests that whether surface hydrophobicity correlates positively or negatively with bacterial attachment depends on the hydrophobicity of the bacterial species involved. In addition, the surface elements and the thickness of the oxide layer also affect bacterial adhesion; an oxide layer on the material surface can inhibit bacteria [20]. The oxide layer on the raw material (specimen G) formed naturally in air: titanium alloys are highly active metals, and when they are exposed to the atmosphere they readily form a natural oxide layer [46]. The surface oxygen content of specimens E and F decreased after mechanical grinding.
The surface of specimen D was slightly corroded by chemical polishing, which increased the activation energy of the titanium alloy so that it combined quickly with oxygen ions in the air; the oxide layer and oxygen content generated were therefore greater than those of the pristine titanium alloy. The pristine TiO2 structure is rutile (the common crystal forms of titanium dioxide are rutile and anatase), and natural titanium dioxide is predominantly found as rutile. Titanium dioxide crystals with other structures can, however, be obtained through heat treatment and surface treatment processes [47]. The surfaces of specimens A, B, and C undergo anodic dissolution during the initial stage of electropolishing, which increases the activation energy of the titanium ionization process; oxygen ions adsorb on the titanium alloy surface, diffuse, and react with titanium to form a hydrophobic TiO2 film. Because specimen A was electropolished for the longest time, its oxygen content is the highest. The XRD results show that the electropolished TiO2 structure is anatase. Chang [20] and Lin et al. [21] also found that an anatase TiO2 structure confers good antibacterial properties on the titanium alloy surface, owing to the release of reactive oxygen generated during photocatalysis within the nanometer-scale TiO2 oxide layer. The generated O2−, OH, and 1O2 species can change the permeability of the Staphylococcus aureus cell membrane; these free radicals penetrate the membrane and destroy the cell wall, allowing protein or DNA to leak out of the cell membrane and causing the bacteria to lyse and die [21]. The electrolytically polished titanium alloy therefore achieves better antibacterial properties than mechanically or chemically polished alloy [30,48]. There may not be a direct relationship between the surface oxygen content itself and the quantity of accumulated bacteria; the structure of the oxide film on the titanium alloy surface is the main factor. Nanda et al. [49] likewise showed that there is no direct relationship between the thickness of the oxide film on CP-Ti and the quantity of accumulated bacteria. The negative linear relationship between surface oxygen content and bacterial adhesion found in this study therefore applies only to the anatase surface oxide film of the titanium alloy. Some researchers hold that the surface contact angle has a greater impact on bacterial accumulation than the surface roughness [17], whereas Schlisselberg and Yaron [37] regard surface roughness as the main determinant; reports on which surface feature has the greater impact are not consistent, and few researchers have compared the contributions of surface roughness, contact angle, and oxygen content directly. From the discussion above, the structure of the surface oxide film matters more for antibacterial behavior than the oxygen content itself, and the surface composition of the material is not directly related to bacterial accumulation [37]; the results of this study therefore suggest that surface roughness is the most important driver of bacterial adhesion.
Limitations
In this study, specimens were obtained by CNC cutting of a pristine titanium bar and then polished mechanically, chemically, or electrolytically. The surface roughness achieved by mechanical polishing depends on the particle size of the abrasive paper and was approximately 86-98 nm here. The roughness achieved by chemical polishing depends on the chemical solution and polishing time and was approximately 74 nm. In electropolishing, the current concentrates on microscopic or macroscopic protrusions, which dissolve quickly, while lower-lying regions dissolve more slowly; the process removes burrs, smooths the metal surface, and leaves a good surface finish [50]. The roughness achieved was 10-58 nm. Owing to these manufacturing constraints, the results of this study apply only to the influence of surface roughness in the range of 10-100 nm on the quantity of accumulated bacteria.
Conclusions
The basic structure of medical-grade Ti-6Al-4V is an equiaxed α-phase matrix with an island-like β phase. A TiO2 structure can be observed after mechanical grinding, chemical corrosion, or electrolytic polishing. The micro and macro surface roughness are between 10 and 100 nm and between 0.68 and 2.34 µm, respectively, and the contact angle is between 15° and 95°. According to the XPS analysis, the relative oxygen content is high within 0.2-1.4 µm of the alloy surface. After electropolishing, Ti-6Al-4V has a surface roughness of about 10 nm, a contact angle of 92°, and a relatively high oxygen content, giving the best bacterial inhibition; electropolishing is therefore recommended as the best surface treatment method for Ti-6Al-4V dental implants. The surface roughness, contact angle, and oxygen content of the material are linearly related to the quantity of accumulated bacteria after the different polishing procedures. The surface characteristics of the alloy affect the adhesion characteristics of bacteria, and surface roughness is the most important factor governing the amount of bacterial adhesion.
Conflicts of Interest: The authors declare no conflict of interest.
2020-11-05T09:10:39.104Z
2020-10-31T00:00:00.000
{ "year": 2020, "sha1": "f236226e00364b6d6d0eec84efc961c31f949852", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2079-6412/10/11/1057/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "fe8af109114c7b9e5b57f0ff34b783be48b80673", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
24745991
pes2o/s2orc
v3-fos-license
Nipple Reconstruction with the Biodesign Nipple Reconstruction Cylinder: A Prospective Clinical Study Supplemental Digital Content is available in the text. nipple-areola complex, loss of projection over time remains a challenge. Projection loss is disappointing to patients, results in poor patient satisfaction, and degrades the aesthetic outcome. To overcome nipple projection loss with skin flaps alone, many surgeons have advocated the insertion of alloplastic materials, such as calcium hydroxylapatite 1 or polytetrafluoroethylene, 3 or autologous tissue grafts, to act as an internal stent or bolster to support projection. Although autologous techniques, which use rib cartilage, 4 auricular cartilage, 5 dermis, 6,7 or other autologous tissues, 8 generally have been successful, the harvesting of the autologous graft material can lead to increased operative times and greater patient morbidity, including pain and infection. To avoid the disadvantages of creating a donor site, off-the-shelf materials derived from collagen have also been developed and reported for use in nipple reconstruction. 9,10 These materials include devices made from small intestinal submucosa (SIS) 9 or human acellular dermis. 10 The Biodesign Nipple Reconstruction Cylinder (NRC; Cook Biotech Incorporated, West Lafayette, Ind.) is a tightly rolled cylinder of extracellular matrix collagen derived from porcine SIS. It is available in diameters of 0.7 and 1.0 cm and in lengths of 1.0 and 1.5 cm, can be trimmed to size as required aesthetically, and is the only FDA-cleared device specifically intended for implantation to reinforce soft tissue in plastic and reconstructive surgery of the nipple (Fig. 1). Like dermis or fascia, SIS is composed of fibrillar collagens, glycosaminoglycans, and adhesive glycoproteins, which serve as a scaffold into which cells can migrate and multiply. 11 Once implanted, the NRC material allows cells to migrate into the device and form an organized extracellular matrix through the deposition of collagen and other proteins while acting as a biomaterial stent over which skin flaps can be created to achieve aesthetic nipple projection. PATIENTS AND METHODS This study was designed as a prospective, nonblinded, multicenter, single-arm study to examine the use of the Biodesign NRC during reconstruction of the nipple after mastectomy. It was conducted according to international standards of Good Clinical Practice (ISO 14155 and International Conference on Harmonization guidelines) and additional institutional research policies and procedures. The study protocol and informed consent statements were reviewed and approved by either an independent institutional review board (IRB) or each location's governing IRB. All patients were consented before enrollment. The rights, safety, and well-being of study subjects were protected in accordance with the ethical principles laid down in the Declaration of Helsinki. As required by U.S. law, this study was listed at www.ClinicalTrials.gov and assigned #NCT01216319, listed on October 5, 2010. Adult patients with a history of breast cancer, having previously completed either unilateral or bilateral breast removal and reconstruction, were consented to participate. Patients with a history of radiation to the affected breast within the last 3 months and patients who had received chemotherapy within the past 4 weeks were excluded. Patient demographics, comorbidities, and relevant history, including the type of breast reconstruction, were recorded. 
At the time of nipple reconstruction, the size of the NRC implant was selected based on the patient's aesthetic preference and the size of the contralateral nipple. If a contralateral nipple was absent, the overall size of the reconstructed breast, the presence or absence of a well-vascularized skin flap, and/or the patient's desired final appearance were considered when determining the cylinder length and diameter, allowing for some shrinkage following implant. The NRC implantation technique was performed as follows. (See figure, Supplemental Digital Content 1, which depicts schematically the standardized surgical procedure used in the study, http://links.lww.com/PRSGO/A238.) The position of the nipple was determined with the patient seated in a relaxed position, and the skin was marked with a surgical marker to guide the creation of the skin flap. Breast tissue flaps were raised at the superficial subcutaneous level to preserve the subdermal plexus using either a C-V or S-flap technique, using a specially designed template provided in the NRC kit. The flaps were formed into an appropriately sized silo to create the appearance of a breast nipple. The NRC was allowed to rehydrate for approximately 10 seconds immediately before it was inserted into the silo formed from the skin flap so as to bolster and maintain flap projection. Care was taken to ensure that an adequate blood supply was projected into the skin flap and reached the device. The cylinder was then secured into place with a combination of 3-0 vicryl and 4-0 monocryl (Ethicon, Somerville, N.J.) sutures at the base of the nipple reconstruction to prevent migration of the cylinder into the subcutaneous region under the flaps. After reconstruction, incisions were closed with a combination of inverted dermal 3-0 vicryl sutures and simple interrupted 4-0 monocryl sutures. Baseline projection measurements were taken directly on the reconstructed nipple, and reconstructed nipples were protected using a nipple shield for up to 4 weeks after surgery. Areolar tattooing was allowed according to individual patient preference but was discouraged until after the nipple reconstruction had fully healed. Photographs of reconstructed nipples were taken at 1 week, 1 month, 3 months, 6 months, and 12 months after reconstruction (Fig. 2). Nipple projection was measured directly on the patient at the time of each follow-up examination and compared with baseline. A questionnaire was developed to assess the level of overall patient satisfaction with the cosmetic results and patient satisfaction with multiple aspects of the reconstructed nipple (eg, size, position, color, softness, symmetry, sensation, and overall appearance) because no validated questionnaire specific to nipple reconstruction is available. (See survey, Supplemental Digital Content 2, which depicts the satisfaction questionnaire, http://links.lww.com/PRSGO/A238.) This questionnaire was completed by the patient at 1, 3, 6, and 12 months after reconstruction; for patients undergoing bilateral procedures, a questionnaire was completed for each nipple at each time point.
Statistical Analysis
Study data were collected and entered into a study database by a contract research organization (MED Institute, West Lafayette, Ind.) using quality-control procedures. A quality-assurance check of the database datasets versus the case report forms was performed. All statistical analyses were performed with SAS software (version 9.3 for Windows; SAS Inc, Cary, N.C.) on the intent-to-treat population.
Continuous variables were reported as means, standard deviations, and ranges. Categorical variables were reported as percent. Logistic and mixed linear models were used to identify predictors of nipple projection at 12 months. A mixed linear model with repeated measures was used to assess differences in projection maintenance over time. RESULTS A total of 82 nipple reconstructions were performed in 50 patients between September 2011 and December 2012 (Table 1), and 46 patients were available for their final study visit at 12 months. Two patients were lost to follow-up and 2 patients were removed from the study early because of recurrence of their cancer, requiring additional surgery and chemotherapy. Although men were not excluded from participation if they met the eligibility criteria, all patients in this study were women. Mean patient age was 52.0 years; mean body mass index was 26.8. The majority of patients classified themselves as of white descent. Nine patients (18%) reported a hypertension diagnosis; there was 1 Type I diabetic and 2 Type II diabetics. A total of 8 patients (16%) reported a previous smoking history, although none of the patients were smokers at the time of nipple reconstruction. Furthermore, none of the patients reported a history of radiation to the affected nipple within the last 3 months. Of the total number of reconstructions performed, modified S-flaps were created in 6 nipples (7.3%), whereas the remaining 76 nipples (92.7%) were reconstructed using a C-V flap technique. Mean nipple projection 1 week after surgery was 10.5 mm (range: 6-16 mm). Flaps were oversized an average of 19% at surgery to allow for placement of the cylinder and to prevent tension that could lead to flap necrosis, cylinder extrusion, or wound complications. At 6 months, mean projection was 4.1 ± 1.6 mm (range: 1-8 mm), and at 12 months, mean projection was 3.8 ± 1.5 mm (range: 0-8 mm). A plot of projection over time is presented in Figure 3. The maintenance of nipple projection is the percentage of projection at 1 year in relation to the projection immediately after the reconstruction. The average difference in maintenance of nipple projection from 6 to 12 months was 2.7%. The mixed linear model with repeated measures showed significant decreases in projection maintenance from 1 to 3 months (P < 0.0001) and from 3 to 6 months (P < 0.0001). However, there was not a significant change in projection maintenance from 6 to 12 months (P = 0.16). Nipples in which the cylinder extruded after surgery (n = 3) were excluded from all projection measurements to provide an accurate picture of the maintenance of nipple projection when the cylinder remains in place. Of importance, covariate models found no significant relationship between the extent of projection loss and either the type of breast reconstruction or removal of the nipple shield at 1 month after procedure. There were no intraoperative adverse events reported. Related postoperative adverse events occurred in 7 patients (14.0%) and in 8 reconstructions (9.8%). These adverse events are presented in Table 2. In addition to these events, 2 patients had recurrence of malignant cancer during the follow-up period, one patient had an unrelated adverse event requiring breast implant revision, one patient complained of an allergic reaction to topical antibiotic, and one patient opted to have cosmetic surgery to remove excess scar tissue related to her mastectomy. 
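As a companion to the repeated-measures analysis described above, the sketch below shows how projection maintenance can be computed and a mixed linear model fitted with statsmodels. The long-format data frame, column names, and all values are invented for illustration; they do not come from the study database, and the model is a simplified random-intercept analogue of the one reported.

```python
# Sketch: percent maintenance of nipple projection and a mixed linear model
# with a random intercept per nipple. All values and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "nipple_id":  [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "month":      [0, 6, 12, 0, 6, 12, 0, 6, 12],
    "projection": [10.5, 4.3, 4.0, 9.8, 4.0, 3.7, 11.0, 4.2, 3.9],  # mm
})

# Maintenance = projection at a follow-up visit / baseline projection x 100.
baseline = df[df.month == 0].set_index("nipple_id")["projection"]
df["maintenance_pct"] = df.projection / df.nipple_id.map(baseline) * 100

# Random-intercept mixed model of projection over time, grouped by nipple.
model = smf.mixedlm("projection ~ month", df, groups=df["nipple_id"]).fit()
print(model.summary())
```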
DISCUSSION Maintenance of nipple projection has been reported to be as little as 30% after reconstruction, with significant flattening occurring in the first 3 months before eventually stabilizing. 6,12 We believed that the addition of a biomaterial stent, in this case the Biodesign NRC, may fill the dead space present beneath the skin when flaps are raised, prevent scar contraction, and lead to a better long-term aesthetic result. Although long-term projection appeared to stabilize over time, maintenance of projection was only 37.3% at 1 year, which is slightly less than the approximately 50% reported at 6 months by Tierney et al 9 when using the NRC and also slightly less than the 47% reported by Garramone and Lam 10 when human acellular dermis was used along with tissue expanders. We further thought that breast reconstruction with an expander and implant would provide a solid foundation for the NRC, whereas breast reconstruction using flaps would not provide as solid a foundation, resulting in the NRC sinking into the breast tissue and leading to decreased projection over time. However, statistical analyses were unable to detect a significant correlation between projection and the type of breast reconstruction that had been performed. The time course of projection loss could be related to the known remodeling characteristics of the Biodesign implant or to the fact that we enrolled patients with a relatively recent history of radiation to the breast. Of note, radiation has been reported to impair wound healing for months to years after treatment is given. 13 Thus, it may be possible to improve results if patients were selected more stringently. Alternatively, the SIS material used in the NRC has an established time frame of remodeling, which in the abdominal wall has been demonstrated to occur within 6 to 9 months. 14,15 This natural tissue remodeling results in a nipple that retains a natural texture with adequate projection of 3 to 5 mm in the nipple area that becomes stable over time. This study demonstrated that the extent of projection loss changes minimally between 6 and 12 months, supporting this hypothesis, and suggests that 6-month projection may be predictive of longer-term projection for patients. This study has several limitations, not least the associated out-of-pocket costs of an elective, cosmetic procedure using an off-the-shelf device. The patient population in this study was generally homogenous, consisting of mostly white, nondiabetic, and nonsmoking patients. Therefore, these results may not be generalizable to the broader patient population that includes women of varying ethnicities and comorbidities. Additionally, because the majority of reconstructions were performed using C-V flaps, it is not possible to predict long-term outcomes if different flap techniques are used. Similarly, the absence of a control group limits the generalizability of the results, allowing comparisons to be made only to historical literature reports. Even though these are limitations of this study, this study is important because it is the first multicenter study on this device and demonstrates the amount of projection loss that can be expected when the NRC is used to reconstruct the nipple. The extent of projection observed at 12 months postimplant provides valuable information to surgeons to help refine their techniques and strategies to obtain an optimal aesthetic result for patients.
Surgeons can use this information to tailor each nipple reconstruction more closely to the patient's final desired level of projection, and patient expectations can be set more accurately before the procedure. This study also demonstrated that the placement of the NRC can be performed safely with few postoperative complications. The most common complication, cylinder extrusion, occurred in 3 of the 82 nipples, yielding an extrusion rate of 3.7%. This extrusion rate is similar to that reported elsewhere 9 and would be expected for any type of implanted graft. Other complications were typical of nipple flap reconstruction regardless of the type of implanted graft and included flap ischemia and necrosis, wound complications, and unexpected bleeding after the procedure. What is important to note in this study, however, is that in all patients experiencing the typical adverse events of cylinder exposure, localized flap necrosis, and wound dehiscence, only one of these adverse events led to eventual cylinder extrusion, and in none of them was device removal required. It is likely that the inherent remodeling characteristics of the graft, including its native composition and ability to support rapid angiogenesis from adjacent vascularized tissue structures, prevented long-term infectious results that would have necessitated device removal. 16 Selecting the properly sized flap can affect outcome of the procedure and is a key reason that complications can occur. For example, small flaps may not leave sufficient space for the NRC, resulting in increased suture line tension and device exposure or extrusion. However, flaps need to be wrapped securely around the cylinder to promote incorporation of the device. Flaps that are too long may have decreased vascularization at the tips, leading to tissue necrosis. This can be avoided by carefully trimming the flap ends to proper size before wrapping them around the NRC. Additionally, if the flap has a thick layer of fat, limited or careful trimming away of that fat may increase contact between the NRC and skin tissue to promote device incorporation. Aside from patient safety, perhaps the most important measure of a successful nipple reconstruction is patient satisfaction. The primary goal of breast and nipple reconstruction is to ease the emotional and psychological burden of mastectomy for the patient, so patient satisfaction with the aesthetic outcome of the procedure is critical to considering the procedure a success. In this study, we asked patients about their satisfaction with many parameters of their reconstructed nipples, including overall appearance, symmetry, color, softness, sensation, nude appearance, clothed appearance, and size. The vast majority of patients (93%) were "pleased" or "very pleased" overall, in sharp contrast to the satisfaction rates of 16% previously reported in other series, 2,7 demonstrating that patient satisfaction depends more on the total aesthetic result of the reconstruction than on the level of sustained projection alone. The psychological and emotional benefit that comes from completing breast and nipple reconstruction with the NRC after recovery from breast cancer and mastectomy is seen in the patient satisfaction results obtained.
2018-04-03T05:45:53.294Z
2016-08-01T00:00:00.000
{ "year": 2016, "sha1": "7c9d726c211fd1e9016db560d37dd9fc32aa9295", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1097/gox.0000000000000846", "oa_status": "GOLD", "pdf_src": "WoltersKluwer", "pdf_hash": "880c5b96fbb74015d3576f8dd9c0c297f9d8c853", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
84985155
pes2o/s2orc
v3-fos-license
Differential Protein Mobility of the γ-Aminobutyric Acid, Type A, Receptor α and β Subunit Channel-lining Segments* The γ-aminobutyric acid, type A (GABAA), receptor ion channel is lined by the second membrane-spanning (M2) segments from each of five homologous subunits that assemble to form the receptor. Gating presumably involves movement of the M2 segments. We assayed protein mobility near the M2 segment extracellular ends by measuring the ability of engineered cysteines to form disulfide bonds and high affinity Zn2+-binding sites. Disulfide bonds formed in α1β1E270Cγ2 but not in α1N275Cβ1γ2 or α1β1γ2K285C. Diazepam potentiation and Zn2+ inhibition demonstrated that expressed receptors contained a γ subunit. Therefore, the disulfide bond in α1β1E270Cγ2 formed between non-adjacent subunits. In the homologous acetylcholine receptor 4-Å resolution structure, the distance between α carbon atoms of 20′ aligned positions in non-adjacent subunits is ∼19 Å. Because disulfide trapping involves covalent bond formation, it indicates the extent of movement but does not provide an indication of the energetics of protein deformation. Pairs of cysteines can form high affinity Zn2+-binding sites whose affinity depends on the energetics of forming a bidentate-binding site. The Zn2+ inhibition IC50 for α1β1E270Cγ2 was 34 nM. In contrast, it was greater than 100 μM in α1N275Cβ1γ2 and α1β1γ2K285C receptors. The high Zn2+ affinity in α1β1E270Cγ2 implies that this region in the β subunit has a high protein mobility with a low energy barrier to translational motions that bring the positions into close proximity. The differential mobility of the extracellular ends of the β and α M2 segments may have important implications for GABA-induced conformational changes during channel gating. GABA A 1 receptors are allosteric proteins that mediate fast inhibitory neurotransmission in the central nervous system (1)(2)(3). They are members of the Cys-loop receptor ion channel gene superfamily that includes glycine, serotonin type 3 (5-HT3), and nicotinic acetylcholine (ACh) receptors (4-6). GABA A receptors are formed by five homologous subunits assembled around a central channel. Most endogenous receptors contain two α, two β, and one γ subunit arranged in a clockwise orientation αβαβγ when observed from the extracellular end of the channel (7,8). However, expression of just α and β subunits also results in functional receptors with the favored stoichiometry being two α and three β in the order αβαββ (9-11). Each subunit has an ~200-amino acid, extracellular, N-terminal, ligand-binding domain and a C-terminal, channel-forming domain with four membrane-spanning segments (M1, M2, M3, and M4). The channel is principally lined by the five α-helical M2 segments (12,13). An index numbering system facilitates comparisons between M2 segments of superfamily members (14). The 0′ position is defined as the positively charged residue located near the cytoplasmic end of the channel, GABA A β1 R250. The 20′ position, GABA A β1 E270, is aligned with the acetylcholine receptor extracellular ring of charge (15) and is predicted by amino acid sequence analysis to be the extracellular end of M2 (16). Experimental evidence indicates that M2 extends two helical turns beyond the 20′ position (17). The 4-Å resolution structure of the homologous Torpedo ACh receptor confirms this and demonstrates that the 20′ position lies at the level of the extracellular membrane surface (13).
In the 4-Å resolution cryo-EM structure the narrowest region of the closed channel, inferred to be the gate, is near the midpoint, between the 9′ and 14′ positions (13). Cysteine accessibility studies in the 5-HT3 receptor are consistent with this, although similar studies in the ACh receptor concluded that the gate was at the channel's cytoplasmic end (18,19). Evidence from ACh, GABA A , and 5-HT3 receptors indicates that the structure of the cytoplasmic end of the channel is relatively fixed and rigid (20-22). This would be consistent with evidence that the size and charge selectivity filters, and the major determinants of single channel conductance, are located at the cytoplasmic end of the channel (12,15,23-25). In contrast, the extracellular end of the channel undergoes conformational motion due to both thermal protein motion and agonist-induced gating (17,20,21,26). In the cryo-EM-derived structure, the extracellular ends of the M2 domains are loosely packed, suggesting that these domains might possess a high degree of flexibility/mobility (13). Consistent with this, substituted cysteine accessibility method studies of the GABA A receptor β1 subunit M2 domain concluded that the M2 segment extracellular halves were loosely packed and/or highly mobile (21). We previously used disulfide trapping experiments in αβ receptors to probe thermal protein motion and proximity relationships between M2 segment, channel-lining residues in different subunits (26). The ability for a pair of cysteines to form a disulfide bond depends on the presence of an oxidizing environment and on the collision frequency. The collision frequency depends on the average separation distance of the sulfhydryls, their relative orientation in the protein, and the flexibility/mobility of the protein in the region of the Cys residues. We used copper phenanthroline (Cu:phen) to create an oxidizing environment. Cu:phen catalyzes the formation of reactive oxygen species, such as superoxide and hydroxyl radicals, from molecular oxygen (27). We showed that at the 20′ level disulfide bonds formed between Cys substituted for the β20′ but not between Cys substituted for the α20′. In order for a disulfide bond to form, the Cys α carbons must come to within 5.6 Å of one another (28). Assuming that the 4-Å resolution ACh receptor structure is a good model for the GABA A receptor structure, the average distance between the 20′ α carbons of residues in adjacent and non-adjacent subunits is 12 and 19 Å, respectively (13). Because there are three β subunits in the αβ receptors used in our previous work, we could not distinguish whether the disulfide bond was forming between Cys substituted in adjacent or in non-adjacent positions. Thus, the extent of the thermal motion could not be determined (26). To resolve this issue the current experiments have been performed in αβγ receptors where the two β subunits are not adjacent. The extent of the movements that would be required to explain the disulfide bond formation in our original studies highlights a potential limitation of disulfide trapping experiments. Because disulfide bonds are covalent they may trap relatively rare conformational states of the protein. In the aspartate chemotaxis receptor, a protein of known crystal structure, thermal protein movement allowed disulfide bond formation between pairs of engineered Cys whose α carbons were separated in the crystal structure by 15 Å (28).
Thus, disulfide trapping may provide insight into the extent of thermal motion, but it does not necessarily measure the average separation distances. To address this issue, in the present work we have measured the Zn2+ binding affinity of receptors containing pairs of engineered Cys. Pairs of Cys can form bidentate, high affinity Zn2+-binding sites if they are positioned appropriately. The Zn2+ affinity of these sites will depend on the orientation of the Cys sulfur atoms, their average separation distance, and the energy needed to distort the average protein structure to bring the Cys into position to bind the Zn2+ ion. The Zn2+ affinity will lie between the picomolar range, the affinity of Zn2+ for peptides containing four-Cys Zn2+ finger-binding protein sequences (29), and the 10-1000 μM range, the Zn2+ affinity of single Cys residues (10, 30, 31). In crystal structures of high affinity Zn2+-binding sites the Cys α carbons are separated by about 5-7 Å. The non-covalent nature of this interaction provides a better estimate of the average separation and/or the energy required to distort the average conformation to the structure necessary for high affinity binding. Here we report that in αβγ receptors disulfide bonds form when all five subunits contain 20′ engineered Cys residues, but for the single Cys mutants they only form when the engineered Cys is in the β subunit, not when it is in α or γ. Consistent with this, a high affinity Zn2+-binding site is only formed when the engineered Cys is in the β subunit. These results provide insights into the extent and asymmetric nature of the thermal motion near the extracellular ends of the GABAA receptor channel-lining M2 segments and have implications for the channel gating process.

Reagents-A 100 mM stock solution of GABA (Sigma) in water was aliquoted and stored at -20 °C. 1 M stock solutions of dithiothreitol (DTT; Sigma) and o-phenanthroline (Sigma) were made in nominally calcium-free frog Ringer's solution (CFFR: 115 mM NaCl, 2.5 mM KCl, 1.8 mM MgCl2, and 10 mM HEPES, pH 7.5) and Me2SO, respectively, aliquoted, and stored for not more than 1 month at -20 °C. A stock solution of 100 mM CuSO4 was made in water. CuSO4 and o-phenanthroline were mixed in CFFR directly before use to a final concentration of 100 μM CuSO4 and 400 μM o-phenanthroline, expressed as 100:400 μM Cu:phen. A 100 mM stock solution of N-ethylmaleimide (NEM, Sigma) was made in CFFR directly before use. A 10 mM diazepam stock solution was made in Me2SO and stored at -20 °C. A 100 mM ZnCl2 (Sigma) stock solution was made in water with 10 mM HCl to prevent the precipitation of zinc hydroxide. Tricine (Sigma) was diluted directly into 1x buffer at a concentration of 10 mM. Stock solutions of 500 mM N-(2-acetamido)iminodiacetic acid (Sigma), pH 7.3, and 100 mM diethylenetriaminepentaacetic acid (Sigma), pH 7.3, were made in water.

Electrophysiology-Two-electrode voltage clamp recordings were conducted at room temperature in a 250-μl chamber continuously perfused at 5-6 ml/min with CFFR solution. For the Zn2+ dose-response curves, CFFR was replaced by a buffer containing 100 mM NaCl, 2.8 mM KCl, 0.3 mM BaCl2, and 5 mM HEPES, pH 7.3. Currents were recorded from oocytes using two-electrode voltage clamp at a holding potential of -60 mV. The ground electrode was connected to the bath via a 3 M KCl/agar bridge. Glass microelectrodes filled with 3 M KCl had a resistance of <2 MΩ.
Data were acquired and analyzed using a TEV-200 amplifier (Dagan Instruments, Minneapolis, MN), a Digidata 1322A data interface (Axon Instruments, Union City, CA), and pClamp 8 software (Axon Instruments). Currents were elicited by applications of GABA separated by at least 5 min of CFFR wash to allow complete recovery from desensitization. Currents were judged to be stable if the variation between consecutive GABA pulses was <5%.

Diazepam-induced Current Enhancement-The GABA dose-response relationship was determined on each cell. A GABA EC5 concentration was applied in the absence and presence of 1 μM diazepam. To prevent disulfide bond formation, receptors were kept reduced by the application of 10 mM DTT at the beginning of the experiment and between pulses of reagents. The amount of potentiation by diazepam was calculated by the equation % potentiation = (IDzpm/I) x 100, where IDzpm and I are the GABA-induced currents with and without diazepam, respectively. The diazepam potentiation is presented as the mean ± S.E.

NEM Inhibition of Zn2+ Binding-After a control pulse of 30 μM GABA, 5 μM Zn2+ was applied for 1 min, followed by co-application of GABA and Zn2+. To eliminate Zn2+ binding to the engineered cysteines, receptors were treated with 100 μM NEM for 5 min. We then reapplied GABA, first alone and then in the presence of Zn2+. (In some cases the effect of NEM on Zn2+-induced inhibition was tested on separate cells. Because this did not alter the outcome, the results of all experiments were combined.) Receptors were kept reduced by the application of 10 mM DTT at the beginning of the experiments and between pulses of reagents. Prior to every pulse of GABA or GABA plus Zn2+, receptors were reduced with DTT (10 mM, 5-10 min) and either washed (GABA pulses) or treated with Zn2+ (GABA plus Zn2+ pulses) for 1 min. The degree of inhibition by Zn2+ both before and after NEM treatment was determined by the equation % inhibition = [1 - (IZn/I)] x 100, where IZn and I are the GABA-induced currents with and without Zn2+, respectively. The % inhibition is given as mean ± S.E.

Disulfide Bond-induced Inhibition-Once GABA-induced control currents had stabilized, 10 mM DTT was applied for 10 min. GABA was reapplied directly following DTT to determine the extent of potentiation produced by reduction of spontaneously formed disulfide bonds. The cells were treated with 100:400 μM Cu:phen for 3-6 min, and then two or more GABA test pulses were applied. A second application of DTT was followed by a final pulse of GABA to assure the reversibility of the Cu:phen effects. Intersubunit disulfide bond formation was completely spontaneous and did not require catalysis by Cu:phen; Cu:phen only increased the rate of disulfide bond formation. Therefore, the extent of inhibition attributed to the presence of disulfide bonds was calculated by the equation % inhibition = [(IDTT - I)/IDTT] x 100, where IDTT is the GABA-induced current following application of DTT, and I is the current prior to DTT treatment. A saturating concentration of GABA was used for all experiments, except where specified. The range of GABA concentrations was between 0.3 and 10 mM. The maximal currents and % effect for each reagent are presented as the mean ± S.E.
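The three percentage measures defined above are simple ratios of peak currents. A minimal sketch of how they can be computed follows; the disulfide example reuses the mean currents reported below for αβ20′Cγ, while the other example values are hypothetical placeholders:

```python
# Sketch of the three percentage measures defined in the methods above.

def pct_potentiation(i_gaba: float, i_gaba_dzpm: float) -> float:
    """% potentiation = (I_Dzpm / I) * 100."""
    return (i_gaba_dzpm / i_gaba) * 100.0

def pct_inhibition_zn(i_gaba: float, i_gaba_zn: float) -> float:
    """% inhibition = [1 - (I_Zn / I)] * 100."""
    return (1.0 - i_gaba_zn / i_gaba) * 100.0

def pct_inhibition_disulfide(i_initial: float, i_after_dtt: float) -> float:
    """% inhibition = [(I_DTT - I) / I_DTT] * 100."""
    return (i_after_dtt - i_initial) / i_after_dtt * 100.0

if __name__ == "__main__":
    # Hypothetical currents chosen to reproduce reported percentages.
    print(pct_potentiation(100.0, 179.0))      # 179% (diazepam, 1:1:1 ratio)
    print(pct_inhibition_zn(100.0, 31.0))      # 69% (5 uM Zn2+, triple mutant)
    # Reported mean currents for ab20'Cg before/after DTT:
    print(pct_inhibition_disulfide(1078.0, 3251.0))  # ~66.8% inhibition
```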
Reoxidation Rates-A control pulse of a low concentration of GABA (below the GABA EC50) was followed by a 10-min application of 10 mM DTT. This led to an increase in the size of the currents. Pulses of GABA were then applied at 5- to 10-min intervals to monitor the return of the currents to their unreduced levels. The peak currents were fit with the equation It = (I0 - Iinf)exp(-t/τ) + Iinf, where It is the current at time t, I0 is the initial current, Iinf is the final current, and τ is the time at which 63% of the total current decay occurred.

Zn2+ Dose-response Curves-A control pulse of GABA was followed by co-applications of GABA with increasing concentrations of Zn2+. Prior to every pulse of GABA or GABA plus Zn2+, receptors were reduced with DTT (10 mM, 5-10 min) and either washed or treated with Zn2+ for 1 min. Currents were normalized to the initial, control current and fit with the Hill equation, I = Imax/(1 + ([Zn]/IC50)^n), where I is the current, Imax is the control current, IC50 is the Zn2+ concentration that produces half-maximal inhibition, [Zn] is the Zn2+ concentration, and n is the Hill coefficient. Fits were performed in Prism 3.02 (GraphPad Software, San Diego, CA). To avoid artifacts caused by potential submicromolar levels of heavy metal contamination, Zn2+ dose-response curves were performed in the presence of heavy metal chelators as described (38, 39).

Statistics-All statistical analyses were performed in Prism 3.02 using a one-way analysis of variance followed by the Newman-Keuls multiple comparison test.
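For illustration, both fits described above (the single-exponential reoxidation time course and the Hill-type Zn2+ inhibition) can be reproduced outside Prism with SciPy. The data below are synthetic placeholders, and the Hill expression is the standard inhibitory form implied by the variable definitions in the text:

```python
import numpy as np
from scipy.optimize import curve_fit

# Single-exponential return of current after DTT reduction:
# I(t) = (I0 - Iinf) * exp(-t / tau) + Iinf
def reox(t, i0, iinf, tau):
    return (i0 - iinf) * np.exp(-t / tau) + iinf

# Hill-type Zn2+ inhibition of the normalized GABA current:
# I([Zn]) = Imax / (1 + ([Zn]/IC50)**n)
def hill(zn, imax, ic50, n):
    return imax / (1.0 + (zn / ic50) ** n)

# Placeholder data (not measurements from the paper).
t = np.array([0, 5, 10, 15, 20, 30, 45, 60.0])  # minutes
i_t = reox(t, 3200, 1100, 15.0) + np.random.default_rng(0).normal(0, 40, t.size)
zn = np.logspace(-9, -4, 8)                      # molar
i_zn = hill(zn, 1.0, 34e-9, 1.0)

popt_r, _ = curve_fit(reox, t, i_t, p0=(3000, 1000, 10))
popt_h, _ = curve_fit(hill, zn, i_zn, p0=(1.0, 1e-8, 1.0))
print("tau = %.1f min" % popt_r[2])
print("IC50 = %.1f nM, nH = %.2f" % (popt_h[1] * 1e9, popt_h[2]))
```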
A γ Subunit Is Present in Functional Receptors-In αβγ receptors β subunits are found only in non-adjacent positions, whereas in αβ receptors there is a pair of β subunits in adjacent positions (see the Introduction). Therefore, if a disulfide bond formed between β subunits, we could only know that this bond occurred between non-adjacent subunits if we also knew that a γ subunit was present in the receptor. We tested for the presence of a γ subunit using two approaches, diazepam potentiation and Zn2+ inhibition. In αβγ receptors, 1 μM diazepam is reported to potentiate currents induced by an EC5 concentration of GABA by more than 100%, whereas αβ receptors are unaffected by diazepam (40, 41). Zn2+ also enables us to distinguish between αβ and αβγ receptors, because αβ receptors have a Zn2+ IC50 of ∼0.5 μM, whereas α1β1γ2 receptors are insensitive to Zn2+ (42, 10). The high affinity Zn2+-binding site is formed in αβ receptors by the β1 His-267 (M2 17′) residues in the adjacent β subunits. In αβγ receptors there are no adjacent β subunits and therefore no high affinity Zn2+-binding site. We tested the effects of diazepam on two populations of the mutant αβ20′Cγ receptors. Half of the cells were injected with a 1:1:1 and half with a 1:1:10 molar ratio of α, β, and γ mRNA. If the cells injected with a 1:1:1 ratio of mRNA expressed the γ subunit in all cell surface receptors, then there should be no increase in the amount of diazepam potentiation in the cells injected with 1:1:10 compared with 1:1:1. To assure that the presence of spontaneously formed disulfide bonds did not interfere with the effects of diazepam and Zn2+, the reducing agent DTT was applied for several minutes before the application of all reagents.

[Fig. 1 legend fragment: (D) mRNA was tested before and after treatment with 100 μM NEM (5 min). Zn2+ inhibition was nearly abolished by NEM in α20′Cβ20′Cγ20′C, but not α20′Cβ20′C, proving that a γ subunit was present in receptors injected with γ20′C mRNA. Bars above traces indicate application of reagent. Leak currents have been subtracted. Current is not shown during application of Zn2+ alone or NEM. Holding potential: -60 mV.]

As seen in Fig. 1 (A and B), the degree of diazepam-induced potentiation in the two populations of receptors was not significantly different (1:1:1, 179 ± 70% (n = 3); 1:1:10, 108 ± 18% (n = 3)). Therefore, in our hands, the majority of the GABA-induced current from oocytes injected with an equimolar ratio of the three subunits arises from receptors containing a γ subunit. Similar results were obtained with other Cys mutant receptors used in this study. To further support the conclusion that the functional cell surface receptors used in this study contained a γ subunit, we examined the extent of inhibition by Zn2+. Two different mutants were used for the experiments with Zn2+: the double Cys mutant, α20′Cβ20′C, and the triple Cys mutant, α20′Cβ20′Cγ20′C. In both cases we injected equimolar amounts of mRNA for each subunit. 5 μM Zn2+ should inhibit more than 50% of the current in the α20′Cβ20′C mutant while having no effect on the current of the α20′Cβ20′Cγ20′C mutant. Surprisingly, the two mutants showed similar amounts of Zn2+-induced inhibition: 85 ± 4% (n = 2) and 69 ± 8% (n = 5) for α20′Cβ20′C and α20′Cβ20′Cγ20′C, respectively (Fig. 1, C and D). Because the engineered cysteines in these mutants could potentially bind Zn2+, we retested the effects of Zn2+ after exposing the reduced receptors to the alkylating agent NEM (100 μM, 5 min). Alkylation should abolish the ability of cysteine to bind heavy metals. NEM diminished currents in both mutants. However, following alkylation with NEM, 5 μM Zn2+ inhibited the remaining currents in α20′Cβ20′C and α20′Cβ20′Cγ20′C by 60 ± 2% (n = 3) and 6 ± 2% (n = 4), respectively. Therefore, we infer that the double mutant α20′Cβ20′C retained a high affinity for Zn2+ following alkylation, because the native high affinity Zn2+-binding site composed of the β1 His-267 (17′) residues in adjacent β subunits (42, 10) is insensitive to NEM. In contrast, in the triple mutant, α20′Cβ20′Cγ20′C, Zn2+ no longer inhibited significantly after NEM alkylation, because in the presence of the γ subunit there are no adjacent β subunits to form a high affinity Zn2+-binding site. The overall conclusion from the experiments with Zn2+ and diazepam is that, when injected with a 1:1:1 ratio of α, β, and γ mRNA, the large majority of cell surface receptors contain a γ subunit. The important implication of this finding for the experiments described below is that the two α subunits are not in adjacent positions, nor are the two β subunits. We next determined whether the oxidizing agent Cu:phen, which catalyzes disulfide bond formation, could reverse the effects of DTT in α20′Cβ20′Cγ20′C. A 3-min application of 100:400 μM Cu:phen decreased the α20′Cβ20′Cγ20′C currents by 89 ± 12% (n = 3), returning them to levels not significantly different from their initial, untreated levels. In contrast, a similar Cu:phen application had no effect on wild-type currents (n = 3) (Fig. 2, A and B). A subsequent DTT application restored the mutant receptor currents to within 5 ± 2% (n = 3) of the levels produced by the first DTT application. To ensure that the effect of DTT was not due to chelation of contaminating heavy metals in the buffer, we tested whether the metal chelator EGTA could also potentiate currents in α20′Cβ20′Cγ20′C. DTT potentiated currents by 981 ± 162%, whereas, when applied for several minutes, 1 mM EGTA only potentiated currents by 35 ± 16% (n = 3). Therefore, we conclude that α20′Cβ20′Cγ20′C formed one or two spontaneous disulfide bonds, which could be reduced by DTT and reformed by Cu:phen.
Disulfide Bond Formation in Receptors with Single 20′ Cys Mutant Subunits-Further experiments were aimed at gaining insight into the subunits involved in disulfide bond formation in α20′Cβ20′Cγ20′C. We first examined mutant receptors containing a Cys in only one subunit: α20′Cβγ, αβ20′Cγ, and αβγ20′C. Disulfide bonds formed spontaneously in αβ20′Cγ. The initial Imax before DTT application, 1078 ± 214 nA (n = 6), was smaller than in the wild-type receptors, and after reduction with DTT Imax increased to 3251 ± 395 nA (n = 6), similar to wild-type currents (Figs. 2C and 3A).

[Fig. 2 legend: Disulfide bonds formed in α20′Cβ20′Cγ20′C and αβ20′Cγ, but not wild-type, receptors. The effects of DTT (10 mM, 10 min) and Cu:phen (100:400 μM, 3-6 min) were tested on the maximal GABA-induced current of wild-type (A), α20′Cβ20′Cγ20′C (B), and αβ20′Cγ (C) receptors. The initial currents of both mutants were smaller than that of wild-type receptors. DTT increased the mutant currents, and subsequent Cu:phen application reversed the effects of DTT. A further DTT application increased currents to an extent comparable to the initial DTT application. Neither reagent significantly altered currents in wild-type receptors. Bars above traces indicate application of reagent. Leak currents have been subtracted. Current is not shown during application of DTT and Cu:phen. Holding potential: -60 mV.]

Furthermore, application of 100:400 μM Cu:phen returned currents to within 3 ± 23% of the initial untreated levels (n = 3), and a second DTT application increased currents to within 33 ± 5% (n = 2) of those after the initial DTT application. In contrast to αβ20′Cγ, in α20′Cβγ the initial Imax was 3422 ± 664 nA (n = 4), comparable to that of wild-type receptors. Application of DTT (3784 ± 813 nA, n = 4) or Cu:phen (2655 ± 761 nA, n = 3) did not significantly alter the GABA Imax from the initial Imax (Fig. 3A). As expected, given that there is only one γ subunit per receptor, there was no evidence for disulfide bond formation in αβγ20′C receptors. The initial Imax was 2779 ± 572 nA (n = 4). Currents were unaffected by application of either DTT (2791 ± 617 nA, n = 4) or Cu:phen (2317 ± 640 nA, n = 3) (Fig. 3A). From the experiments on receptors containing a single mutant subunit we conclude that at the 20′ position intersubunit disulfide bonds formed between β subunits, but not between α subunits.

Disulfide Bond Formation in Receptors with Two Subunits Containing 20′ Cys Mutants-We tested mutants containing Cys in two different subunits for their ability to form disulfide bonds. The initial Imax of α20′Cβγ20′C, α20′Cβ20′Cγ, and αβ20′Cγ20′C were 1781 ± 209 nA (n = 10), 1545 ± 215 nA (n = 11), and 567 ± 106 nA (n = 11), respectively. All were significantly less than the wild-type receptor Imax. Reduction with DTT increased the currents to levels similar to those of wild type, bringing the currents to 3414 ± 265 nA, 3907 ± 195 nA, and 2846 ± 245 nA for α20′Cβγ20′C, α20′Cβ20′Cγ, and αβ20′Cγ20′C, respectively (Fig. 3A). Cu:phen reversed the effects of DTT, and a second DTT application duplicated the effects of the first DTT application (data not shown). We conclude that disulfide bonds formed in all three double mutants. The extent to which disulfide bond formation at the 20′ level inhibits GABA-induced currents can be quantified for each mutant using the equation % inhibition = [(IDTT - I)/IDTT] x 100, where I and IDTT represent the GABA currents before and after DTT, respectively.
Because the currents of untreated receptors (spontaneously oxidized) were of the same magnitude as those of receptors treated with Cu:phen, we used the initial currents in our calculation. The mutants containing a disulfide bond fall into three significantly different groups based on the extent of inhibition (Fig. 3B). Group 1 contains the mutant α20′Cβγ20′C with 49 ± 4% inhibition. Group 2 contains αβ20′Cγ and α20′Cβ20′Cγ with 68 ± 6% and 61 ± 5% inhibition, respectively. Group 3 contains αβ20′Cγ20′C and α20′Cβ20′Cγ20′C with 81 ± 3% and 89 ± 2% inhibition, respectively. From the results above we infer that 1) the two β subunits in a receptor can form a disulfide bond with one another (Fig. 3B, bar #3); 2) the disulfide bond in α20′Cβγ20′C must be between an α and a γ subunit (Fig. 3B, bar #6), because a disulfide bond does not form in either of the single mutants α20′Cβγ or αβγ20′C (Fig. 3B, bars #2 and #4); and 3) some portion of the disulfide bonds found in αβ20′Cγ20′C must be between a β and a γ subunit, because there is more inhibition in the double mutant than in the single β mutant (Fig. 3B, compare bars #3 and #7).

The Rate of Disulfide Bond Formation Is Fastest in α20′Cβ20′Cγ-To learn more about the relative proximity and mobility of the different subunits around the channel, we measured the rates of spontaneous disulfide bond formation in α20′Cβ20′Cγ, α20′Cβγ20′C, αβ20′Cγ20′C, and αβ20′Cγ. As shown for αβ20′Cγ20′C in Fig. 4A, a test pulse of GABA was followed by a 10-min application of 10 mM DTT. Pulses of GABA were then applied every 5-10 min, depending on the mutant. Over the course of several minutes the currents returned to their initial values, indicating the spontaneous reformation of the disulfide bonds. The peak currents of all the pulses were fit with the single exponential equation given above (Fig. 4B). The τ values for the mutants were as follows (n ≥ 4): α20′Cβ20′Cγ, 3 ± 1 min; α20′Cβγ20′C, 23 ± 7 min; αβ20′Cγ20′C, 15 ± 2 min; and αβ20′Cγ, 17 ± 4 min. α20′Cβ20′Cγ is the only mutant with a τ significantly different from those of the other mutants. The difference in disulfide bond formation rates between α20′Cβ20′Cγ and αβ20′Cγ implies that at least some of the disulfide bonds in α20′Cβ20′Cγ are between Cys in α and β subunits. This implies a higher collision rate between the α and β Cys than between the non-adjacent β Cys in αβ20′Cγ. This faster rate may be due to disulfide bond formation between adjacent α and β subunits.
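A quick way to compare the quoted time constants is to convert each τ into an apparent first-order rate constant k = 1/τ; a small sketch using the mean values above (SEMs omitted):

```python
# Apparent first-order rate constants k = 1/tau for spontaneous
# disulfide bond reformation, using the time constants quoted above.
taus_min = {
    "a20'C b20'C g":  3.0,
    "a20'C b g20'C": 23.0,
    "a b20'C g20'C": 15.0,
    "a b20'C g":     17.0,
}

for mutant, tau in taus_min.items():
    k_per_s = 1.0 / (tau * 60.0)  # convert minutes to seconds
    print(f"{mutant}: tau = {tau:4.0f} min -> k = {k_per_s:.2e} s^-1")

# The ~6x shorter tau for a20'C b20'C g corresponds to a proportionally
# higher apparent rate of bond formation in that mutant.
```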
αβ20′Cγ Forms a High Affinity Zn2+-binding Site-The intersubunit disulfide bond that forms between the β20′ Cys in αβ20′Cγ receptors indicates that the combined movement of the two non-adjacent β M2 segments was sufficient to traverse the channel diameter. The frequency of this event is unknown, because disulfide bonds can trap the receptor in a rare conformation. To address this issue we sought to determine whether these Cys could form a high affinity Zn2+-binding site. Due to the significantly lower energy involved in the Zn2+-Cys interaction compared with a covalent disulfide bond, Zn2+ would be unable to trap the rare conformations that could be trapped with a disulfide bond. These experiments were carried out with reduced receptors to ensure that the Cys were fully available to bind Zn2+. In addition, the buffer contained heavy metal chelators to remove any trace metals that might compete with Zn2+ for binding to the Cys. α20′Cβγ receptors also showed some increased sensitivity to Zn2+ compared with wild-type receptors (n = 4; Fig. 5B). However, because the predicted IC50 would be greater than 100 μM, a complete Zn2+ dose-response relationship was not determined. The increase over wild-type sensitivity was abolished if, in addition to adding a Cys at α20′, the glutamate at the β20′ position was replaced with an asparagine: 100 μM Zn2+ inhibited GABA-induced currents in α20′Cβγ by 32 ± 4% (n = 4), but only altered the α20′Cβ20′Nγ currents by 1 ± 2% (n = 3). Therefore, in α20′Cβγ, a low affinity, bidentate Zn2+-binding site formed at the 20′ position between the engineered cysteine in the α subunit and the native glutamate in the β subunit, but not solely between the α20′ Cys.

An Intersubunit Disulfide Bond Does Not Form in αβ17′Cγ-The 17′ position is one α-helical turn below the 20′ position. Because the distance across the channel between 17′ residues should be shorter than it is between 20′ residues (13), we tested the ability of αβ17′Cγ to form a disulfide bond. DTT and Cu:phen had no effects on maximal GABA-induced currents in αβ17′Cγ (DTT: +6 ± 2%, n = 3; Cu:phen: -6 ± 3%, n = 3). Receptors were also unaffected by DTT using an EC10 concentration of GABA, which is more sensitive to modifications that affect gating (wild-type: +46 ± 1% (n = 2); mutant: +19 ± 6% (n = 3)). Thus, it appears that β-β disulfide bonds do not form between non-adjacent β subunits at the 17′ position in αβγ receptors.

DISCUSSION

We used disulfide trapping and Zn2+ binding to study the mobility of the GABAA receptor M2 segments in αβγ receptors. In these receptors there are two α, two β, and one γ subunit. The two α subunits are in non-adjacent positions around the channel axis, as are the β subunits (7-11). Our experiments showed that disulfide bonds formed between Cys residues substituted for β1 Glu-270 (20′) but not between Cys substituted for the aligned α1 subunit residue, α1 Asn-275 (20′). Disulfide bond formation between the engineered β Cys implies that the collision frequency between the engineered β20′ Cys is significantly higher than between the engineered α20′ Cys.

[Fig. 5 legend: The engineered cysteines in αβ20′Cγ form a high affinity, bidentate Zn2+-binding site. A, receptors were exposed to increasing Zn2+ concentrations. Prior to every pulse of GABA or GABA plus Zn2+, receptors were reduced with DTT (10 mM, 5 min) and either washed or treated with Zn2+ for 1 min. A pulse of 30 μM GABA was then applied in the presence of Zn2+. All experiments were done in the presence of metal chelators (see "Experimental Procedures"). Bars above traces indicate application of reagent. Leak currents have been subtracted. Current is not shown during application of Zn2+ alone or DTT. Holding potential: -60 mV. B, Zn2+ dose-response curves for α20′Cβγ (triangles) and αβ20′Cγ (squares). Data were normalized to the current in the absence of Zn2+ and plotted against the Zn2+ concentration. The IC50 for αβ20′Cγ was determined by fitting the dose-response curve with the Hill equation (line). A fit for α20′Cβγ was performed to obtain a partial curve.]

In the ACh receptor 4-Å structure the 20′ residues have a similar orientation to, and distance from, the channel axis (13), suggesting that these are not the bases for the disparity between α and β. Thus, at this level in the channel the β subunit M2 segments must be more mobile and/or more flexible than the α subunit M2 segments in αβγ receptors. This implies that the β M2 segments are less tightly packed against the rest of the protein than the α M2 segments (43, 21).
In the ACh receptor 4-Å structure the α carbons of non-adjacent 20′ residues are ∼19 Å apart (Fig. 6) (13). Because disulfide trapping involves formation of a covalent bond, it does not provide information on the energetics of bringing the 20′ residues to within the ∼5 Å necessary to form the disulfide bond. High affinity Zn2+ binding involves a non-covalent interaction with pairs of engineered Cys residues. The α carbon separation of the two Cys residues is comparable in a bidentate Zn2+-binding site and in a disulfide bond (29, 44-47). However, unlike disulfide bonds, the energetics of apposing two Cys residues can be measured through the affinity of the resultant binding site for Zn2+. The higher the Zn2+ affinity, the lower the energy barrier to bringing the two Cys residues close enough to form a bidentate-binding site. Bound Zn2+ ions usually display tetrahedral coordination (44, 47, 48). The affinity of a site for Zn2+ depends on the number of chelating Cys residues. The Zn2+ affinity of sites containing a single Cys residue is generally in the tens of micromolar to millimolar concentration range (29, 31, 49), whereas the Zn2+ affinity of proteins containing four Cys residues chelating a Zn2+ ion ranges from 10^-18 to 10^-12 M (38, 50-52). The Zn2+ affinity for β1 E20′C-containing receptors was 34 nM. To achieve this affinity the Zn2+ must be bound by both Cys. It is difficult to know the theoretical maximum affinity of two ideally positioned Cys residues for Zn2+, in part because there are no structural Zn2+-binding sites with just two Cys ligands. If this maximal affinity were known, we could calculate the amount of energy lost to protein distortion when Zn2+ binds to the β20′ Cys receptors with the measured affinity of 34 nM. In proteins containing two Cys residues the Zn2+ affinity ranges from nanomolar to micromolar (10, 36, 38). Thus, the 34 nM affinity that we have measured is toward the higher end of measured affinities for two-Cys binding. This implies that there is a relatively small energy barrier to bringing the two β Cys from their 19-Å separation distance in the ACh receptor structure to the optimal separation for Zn2+ binding.

Disulfide Bonds between α-β, α-γ, and β-γ 20′ Cys Mutants-Although the α20′Cβγ mutant did not form an intersubunit disulfide bond, the α20′ Cys was able to form disulfide bonds with the β20′ Cys and with the γ20′ Cys. We infer the formation of these α-β and α-γ disulfide bonds because the extent of inhibition following oxidation was different in the double Cys mutant α20′Cβ20′Cγ than in the single Cys mutant αβ20′Cγ, and there was disulfide bond formation in α20′Cβγ20′C (Fig. 3). In both cases the disulfide bonds could form either between adjacent or between non-adjacent subunits. At present we cannot distinguish between these possibilities. If the disulfide bonds form between the α20′ Cys and a Cys in an adjacent β or γ subunit, the M2 segments to which the residues are attached would need to both rotate and move ∼7 Å based on the ACh receptor structure (Fig. 6). Therefore, while the α M2 20′ region, in conjunction with the β M2 or γ M2 regions, is sufficiently flexible to move the ∼7 Å necessary to form a disulfide bond with a Cys on an adjacent subunit, the 14 Å required for disulfide bonding with a non-adjacent subunit, i.e., the other α subunit, appears to be too great a distance for the α M2 segments to overcome.
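The distance bookkeeping in this argument is simple enough to check numerically; a sketch using the Cα separations quoted from the ACh receptor structure and the ∼5.6 Å Cα-Cα distance needed for a disulfide bond, assuming both helices move symmetrically toward the channel axis as the text does:

```python
# Per-helix translation needed to bring two C-alpha atoms from their
# structural separation to within disulfide-bonding distance, assuming
# both partner helices move equally toward the channel axis.
BOND_DISTANCE = 5.6   # Angstrom, C-alpha separation in a disulfide (ref. 28)
ADJACENT = 12.0       # Angstrom, adjacent 20' residues (ACh structure)
NON_ADJACENT = 19.0   # Angstrom, non-adjacent 20' residues

for label, d in [("adjacent", ADJACENT), ("non-adjacent", NON_ADJACENT)]:
    combined = d - BOND_DISTANCE   # total approach required
    per_helix = combined / 2.0     # if both partners move equally
    print(f"{label}: combined {combined:.1f} A, ~{per_helix:.1f} A per helix")

# non-adjacent: combined ~13.4 A (~14 A), ~6.7 A (~7 A) per helix,
# matching the movements discussed in the text.
```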
Gating and Spontaneous Protein Movement-The mobility of the extracellular ends of M2 that we have detected may be related to the channel gating process. In the 4-Å ACh receptor structure the channel gate is in the region between the 9′ and 14′ levels (13). Transduction of agonist binding in the extracellular domain to the gate may proceed through the extracellular end of M2. The strong inhibitory effect of a single disulfide bond at the 20′ position, ranging from 49% inhibition in α20′Cβγ20′C to 81% inhibition for αβ20′Cγ20′C, demonstrates the importance of this region in channel gating. The motion that we have detected may represent the unsynchronized fluctuation of the extracellular ends of the M2 segments between their closed and open state conformations. Channel opening would require the concerted movement of all five M2 segments away from the channel axis into their open state conformation, an event that rarely occurs in the absence of agonist. The movement of the M2 segments may be similar to the spontaneous conformational fluctuations that the voltage-sensing S4 segments undergo in voltage-dependent K+ channels as they sense the membrane potential on the two sides of the membrane (53-55). Our observations of asymmetric motion in the β and α subunits raise the question of whether channel gating involves a larger movement of the β subunit M2 segments than of the α M2 segments. The GABAA β subunits form the principal portion of the agonist-binding sites, analogous to the ACh receptor α subunits (7). A greater fraction of the agonist surface area interacts with the principal subunit-binding site (7, 56). Whether this causes a greater movement in the β M2 segment is unknown. Differences in the potential coupling of the β and α extracellular domains to the membrane-spanning domains have been observed (57, 58). Whether these differences relate to the extent of M2 segment movement during gating is unknown at present.

[Fig. 6 legend fragment: Native residues have been replaced with cysteines. The van der Waals surfaces are only shown for the cysteines (yellow, sulfur; red, oxygen; blue, nitrogen; and white, carbon). The dotted lines show the α carbon center-to-center distances between adjacent (12 Å) and non-adjacent (19 Å) subunits. To form a disulfide bond or a high affinity Zn2+-binding site the α carbons must approach to within ∼5-6 Å.]

Perhaps consistent with our finding of greater conformational change in the principal subunit, Unwin and colleagues (59), based on differences between the extracellular domain structure of the Torpedo ACh receptor, which is probably in the closed state, and the acetylcholine binding protein (AChBP), which is probably in the activated state, have suggested that agonist binding causes a larger shift in the ACh α subunit structure. In the region of the gate in the 4-Å structure, the M2 domains from the different subunits appear to make close contact with one another. Specifically, hydrophobic side chains from the 9′ to the 14′ positions interact to form "a tight hydrophobic girdle around the pore" (13). This girdle, which should impede the mobility of the individual M2 domains, may explain why, at the more proximal 17′ position, the β M2 α-helix is less mobile than at the more distal 20′ position. The constriction at the central portion of the channel may also help to explain why, in a previous study, no disulfide bonds formed between aligned residues from the 17′ to the 6′ positions. The mobility that we have demonstrated in the 20′ region with disulfide linkage is also consistent with the results of studies with unnatural amino acids.
Using α-hydroxy acids in place of amino acids, those studies converted the backbone peptide amide to an ester linkage and inferred that there were greater backbone conformational changes in the extracellular half of the ACh receptor M2 segment (20). Using linear free energy relationship analysis, it has also been shown that the extracellular half of M2 appears to move as a unit in the ACh-induced gating process (2, 60). The range of motion that we have inferred for the β subunit M2 α-helices is not without precedent. In a study of protein backbone flexibility in the Escherichia coli D-galactose chemosensory receptor, a protein of known crystal structure, Falke and colleagues found two cysteines that could traverse 15 Å to form a disulfide bond (28, 61, 62). The time constant for the formation of this disulfide bond in the chemosensory receptor mutant was 3630 s in the presence of 1.5:4.5 mM Cu:phen, more than ten times slower than the formation time constant in αβ20′Cγ, where disulfide formation went to completion within 360 s in the presence of 100:400 μM Cu:phen (data not shown). The difference in the formation rates is probably even greater, because in the chemosensory receptor experiments the Cu:phen concentration was over 10 times higher and the temperature was 12-15 degrees higher (37 °C and 25 °C for the chemosensory and GABAA receptors, respectively). Extrapolating from the results of Careaga and Falke (28), the collision rate between the engineered β20′ Cys must be at least 10^3 s^-1 and is probably higher, because their experiments were performed under significantly more oxidizing conditions than ours. In the bacterial mechanosensitive channel MscL, state-dependent disulfide bond formation occurs between engineered Cys residues that are separated by >10 Å in the crystal structure. Disulfide bond formation between the MscL V15C residues required 1 mM Cu:phen and took about 30 min to go to completion (63). This suggests a lower collision frequency for MscL V15C than we observed for the GABAA receptor β20′ Cys in the present study.

Alternative Interpretations-Although we believe that the above interpretation of our data is most likely, it rests on the structural foundation provided by the 4-Å resolution ACh receptor structure. An alternative interpretation that we believe is unlikely, but that we cannot exclude, is that the closed-state structure of the GABAA receptor differs from the published ACh receptor structure (13). The ability to form disulfide bonds and a high affinity Zn2+-binding site only between the β20′ Cys may indicate that there is a significant structural asymmetry at the 20′ level, such that the average separation of the β20′ residues is smaller than that of the α20′ residues.

CONCLUSION

We have shown that, at the 20′ level in the GABAA receptor M2 segments, a disulfide bond or a high affinity Zn2+-binding site can form between engineered Cys residues in the β subunits. In contrast, an engineered Cys at the aligned position in the α subunits forms neither. Based on the roughly symmetrical positions of the aligned residues relative to the channel axis in the 4-Å ACh receptor structure, we infer that the extracellular ends of the β M2 segments are more mobile than the α M2 segments. The increased mobility is likely due to looser protein packing around the β M2 segments. In the ACh receptor structure the α carbon atoms of the non-adjacent 20′ residues are separated by ∼19 Å.
Thus, together the two β20′ Cys must move ∼14 Å to form a disulfide bond, or each must move about 7 Å toward the central axis. A similar amount of translational movement would be necessary to bring the two Cys into close proximity to form a Zn2+-binding site. Given the high affinity with which Zn2+ was bound by the two β20′ Cys, we infer that there must be a low energy barrier to this movement. This suggests that there is a relatively flat potential energy surface for the movement of the β M2 segments. These experiments begin to provide information on the dynamic movement of the channel-lining M2 segments and complement the static picture of the channel structure obtained from the cryo-EM structure of the homologous ACh receptor.
Long non-coding RNA KCNQ1OT1/miR-19a-3p/SMAD5 axis promotes osteogenic differentiation of mouse bone mesenchymal stem cells

Background: Bone fracture is a common orthopedic disease that needs over 3 months to recover. Promoting the osteogenic differentiation of bone mesenchymal stem cells (BMSCs) is beneficial for fracture healing. Therefore, this research aimed to study the roles of long non-coding RNA (lncRNA) KCNQ1OT1 in osteogenic differentiation of BMSCs. Methods: BMSCs were treated with osteogenic medium and assessed by CCK-8 and flow cytometry assays. Alkaline phosphatase (ALP) staining, alizarin red staining (ARS), as well as concentrations of osteoblast markers, were measured to evaluate osteogenic differentiation of BMSCs. Western blot was employed to detect proteins, while qRT-PCR was used for mRNA levels. Additionally, the targeted relationships between KCNQ1OT1 and miR-19a-3p, as well as miR-19a-3p and SMAD5, were verified by dual luciferase reporter gene assay along with the RNA pull-down method. Results: Upregulation of KCNQ1OT1 promoted the ALP staining and ARS intensity, increased the cell viability and decreased the apoptosis rate of BMSCs. Besides, KCNQ1OT1 overexpression increased the ALP, OPG, OCN and OPN protein levels. KCNQ1OT1 sponges miR-19a-3p, which targets SMAD5. Upregulated miR-19a-3p reversed the effects induced by KCNQ1OT1 overexpression, and depletion of SMAD5 reversed the miR-19a-3p inhibitor-induced effects on osteogenic medium-treated BMSCs. Conclusions: Upregulation of KCNQ1OT1 promoted osteogenic differentiation of BMSCs through the miR-19a-3p/SMAD5 axis in bone fracture.

Introduction

Bone fracture is a common orthopedic disease that needs over 3 months to recover. Delayed fracture union and nonunion are common and worrisome complications in fracture treatment, placing a significant burden on individuals and society [1, 2]. Despite the increasingly comprehensive exploration of fracture healing mechanisms, these influencing factors remain a major clinical challenge for fracture treatment [3]. Mesenchymal stem cells (MSCs), which are pluripotent stromal cells, have attracted much attention as powerful tools for tissue regeneration [4]. MSCs have the ability to self-renew and differentiate into multiple lineages, such as osteoblasts (bone), chondrocytes (cartilage), muscle cells (muscle), and fat cells (adipocytes) [5]. In addition, bone MSCs (BMSCs) are widely available from bone marrow, adipose tissue, cord blood, and other tissues [6]. In particular, BMSCs tend to promote osteoblasts and stimulate bone formation. There is much convincing evidence that MSCs can repair bone and related defects in animal models [7, 8]. Systemic and local administration of allogeneic BMSCs promotes fracture healing in rats [9]. However, the ability of BMSCs to differentiate into functional osteoblasts remains limited in terms of bone regeneration in vivo. Therefore, stimulating osteogenic differentiation of BMSCs may be considered a potential therapeutic approach to promote bone regeneration.

Long non-coding RNAs (lncRNAs) are endogenous cellular RNAs with lengths of 200 nt to 100 kb [10]. Recently, several lncRNAs have been found to play an important role in the pathophysiological processes of various orthopedic diseases, such as lncRNA ROR [11], lncRNA THUMPD3-AS1 [12], and lncRNA-CRNDE [13].
LncRNA KCNQ1OT1 is located on human chromosome 11p and is a chromatin regulatory RNA [14]. KCNQ1OT1 is a well-studied lncRNA, which has a profound impact on the regulation of colon cancer [15], non-small cell lung cancer [16], ischemic stroke [17] and osteogenic differentiation [18]. In the field of orthopedics, KCNQ1OT1 has been shown to accelerate osteoblast differentiation through upregulating the Wnt/β-catenin signaling pathway [18]. In addition, KCNQ1OT1 silencing inhibited osteogenic differentiation and downregulated the expression of osteogenic differentiation-related proteins [19, 20]. However, the relationship between KCNQ1OT1 and the growth or osteogenic differentiation of BMSCs is still not well defined.

MicroRNAs (miRNAs) are short non-coding RNAs that commonly act in post-transcriptional gene regulation, mainly by binding to the 3′-untranslated region of targeted messenger RNAs to regulate cell biological processes, including BMSC differentiation [21-25]. miR-19a-3p is confirmed to be broadly conserved among vertebrates [26] and to participate in the pathogenesis of preeclampsia and atherosclerosis [27, 28]. Recent studies have demonstrated the involvement of miR-19a-3p in the progression of various cancers including glioma, lung cancer, breast cancer, osteosarcoma, gastric cancer and hepatocellular carcinoma [29-32]. However, the precise mechanism of miR-19a-3p in bone fracture treatment remains unknown. Here, through the Starbase and TargetScan online databases, we found that KCNQ1OT1 targets miR-19a-3p and that Smad5 is a target gene of miR-19a-3p. Smad5 is a receptor-regulated SMAD protein that is a key transcription factor for osteogenic differentiation. Under physiological conditions, Smad5 is mainly located in the cytoplasm. When Smad5 is phosphorylated, it is directed to the nucleus, thereby regulating the expression of osteogenic genes and inducing osteogenic differentiation [33]. Inhibiting nuclear translocation of Smad5 can inhibit osteogenic differentiation of BMSCs [34].

Therefore, our study set out to investigate the molecular mechanisms of KCNQ1OT1 in bone fracture in vitro. We hypothesized that KCNQ1OT1 sponges miR-19a-3p to regulate Smad5 expression and thereby the osteogenic differentiation of BMSCs. Our research provides a novel understanding of bone fracture treatment.

Cell culture - The mouse bone mesenchymal stem cells (BMSCs) were purchased from Beijing Baiou Bowei Biotechnology Co., Ltd (Beijing, China). The cells were cultured in complete α-MEM, with 10% fetal bovine serum and 1% penicillin-streptomycin added to the culture medium. The culture environment was 95% air and 5% CO2 at 37 °C.

Osteogenic differentiation - BMSCs were seeded into 12-well plates at 5 x 10^5 cells/well and cultured in medium containing 5 mmol/L β-glycerophosphate sodium, 50 μg/mL vitamin C, 100 mmol/L dexamethasone and 10% FBS. The supernatant was discarded after 7 days, then BMSCs were fixed with 4% paraformaldehyde for 15 min and stained with alkaline phosphatase (ALP) and Alizarin Red according to the instructions of the kits (Beyotime, Shanghai, China) for observation and semi-quantitative analysis of mineralized nodules.
Flow cytometry - BMSCs were digested with trypsin and washed with PBS, and the cell suspension was adjusted to 1 x 10^4 cells/mL. The cells were incubated and stained with Annexin V-FITC and propidium iodide in the dark for 15 min. The apoptosis rate of each group was detected by flow cytometry (BD FACSCalibur, NJ, USA).

RT-qPCR - BMSCs of each group were seeded into 6-well plates. RNA was isolated with TRIzol and reverse-transcribed into complementary DNA (cDNA) using the PrimeScript RT Kit (Takara, Japan) for RT-qPCR. The RT-qPCR was carried out using the TaKaRa Ex Taq kit (Takara, Japan). GAPDH was used as a housekeeping gene for KCNQ1OT1 as well as Smad5, while U6 was applied as an internal reference for miR-19a-3p, and the relative expression levels of target genes were calculated using the 2^-ΔΔCt method (ΔCt = Ct of target gene - Ct of reference gene; the Ct value represents the number of cycles at which the fluorescence signal in each reaction tube reaches the set threshold).

Verification of the binding relationship between mRNA and miRNA - Dual luciferase reporter assays as well as the RNA pull-down method were carried out to confirm the bioinformatic predictions concerning the interactions of miR-19a-3p with KCNQ1OT1 and Smad5. The wild-type (WT) KCNQ1OT1, mutant-type (MUT) KCNQ1OT1, WT Smad5 3′-UTR and MUT Smad5 3′-UTR sequences were synthesized and cloned into pmirGLO luciferase vectors (Promega, Beijing, China) to determine whether miR-19a-3p directly targets KCNQ1OT1 and the Smad5 3′-UTR. The miR-19a-3p mimic negative control plasmid (nc mimic) or miR-19a-3p mimic (mimic), along with a Renilla luciferase plasmid (Promega, Beijing, China), was co-transfected into BMSCs. Firefly and Renilla luciferase activities were determined with a Dual-Luciferase Reporter Assay kit (BioVision Tech, Guangdong, China). As for the RNA pull-down method, 500 μg streptavidin magnetic beads were combined with 200 pmol biotin-labeled miR-19a-3p mimic and added to RNA extracted from BMSCs. Eluting buffer was added to collect the pulled-down RNA complex after 30 min incubation at room temperature, and the KCNQ1OT1 and Smad5 levels were quantitatively analyzed by RT-qPCR.

Statistical analysis - For bioinformatic analysis, the target miRNAs of KCNQ1OT1 were predicted with the Starbase online database (http://starbase.sysu.edu.cn/), and the target genes of miR-19a-3p were predicted with the TargetScan online database (http://www.targetscan.org/). The results were statistically analyzed using GraphPad Prism 9.0 (MacKiev Software). Normal distribution (Shapiro-Wilk) and homogeneity of variance tests were carried out for multiple groups of data, and data meeting the criteria are presented as mean ± SD. One-way ANOVA and two-sample t tests were used for comparisons between groups. Each experiment was conducted independently three times (n = 3). Two-sided p < 0.05 was considered statistically significant.
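The relative-expression calculation described above is mechanical and easy to make explicit. A minimal sketch of the 2^-ΔΔCt method with hypothetical Ct values (GAPDH as the reference, as in the text):

```python
# Minimal 2^(-ddCt) calculation with hypothetical Ct values.
def relative_expression(ct_target_treated: float, ct_ref_treated: float,
                        ct_target_control: float, ct_ref_control: float) -> float:
    dct_treated = ct_target_treated - ct_ref_treated
    dct_control = ct_target_control - ct_ref_control
    ddct = dct_treated - dct_control
    return 2.0 ** (-ddct)

# Example: KCNQ1OT1 vs GAPDH, overexpression vs control BMSCs.
fold = relative_expression(ct_target_treated=22.1, ct_ref_treated=18.0,
                           ct_target_control=25.3, ct_ref_control=18.1)
print(f"fold change = {fold:.2f}")  # ~8.6-fold with these made-up Cts
```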
Discussion

In the current study, we clarified the role of KCNQ1OT1 in bone fracture and its molecular mechanism in vitro. Upregulation of KCNQ1OT1 helped to protect BMSCs from dysfunction by promoting osteogenic differentiation. Furthermore, KCNQ1OT1 played this role via the miR-19a-3p/Smad5 axis.

Accumulating evidence has shown that lncRNAs are regulators of bone fracture occurrence and development [35]. Abnormally expressed lncRNAs play different roles in osteogenic differentiation of BMSCs. For example, Zhang et al. [36] demonstrated that lncRNA-NEAT1 upregulated the expression of osteogenic differentiation proteins to improve mitochondrial function, indicating that lncRNA-NEAT1 might be a potential therapeutic target for skeletal aging. Yin et al. [37] suggested that lncRNA-Malat1 knockdown suppressed the osteogenic differentiation of BMSCs, which was reversed by decreasing the expression of miR-129-5p. In the pathogenesis of osteoporosis, high levels of lncRNA SNHG1 increased the expression of DNMT1 via interaction with PTBP1; lncRNA SNHG1 contributed to osteoporosis by leading to osteoprotegerin hypermethylation and downregulated osteoprotegerin expression [38]. Thus, focusing on the role of differentially expressed lncRNAs in osteogenic differentiation of BMSCs may be the key to treating orthopedic diseases such as fractures and osteoporosis. KCNQ1OT1, a widely studied lncRNA, has been shown to exhibit different expression levels in different diseases, for instance in osteosarcoma [39], ovarian cancer [40], and lung squamous cell carcinoma [41]. High levels of KCNQ1OT1 promoted malignant behaviors such as excessive proliferation. In other diseases, such as atherosclerosis, high levels of KCNQ1OT1 prevented cholesterol efflux and induced lipid accumulation in THP-1 macrophages; in contrast, KCNQ1OT1 silencing protected against atherosclerosis in apoE-/- mice and inhibited lipid accumulation in THP-1 macrophages [42]. However, in the process of cellular senescence, high levels of KCNQ1OT1 inhibited senescence-associated heterochromatin foci, transposon activation and retrotransposition as well as cellular senescence, suggesting that KCNQ1OT1 inhibits cellular senescence. Here, we found that KCNQ1OT1 overexpression promoted the growth and osteogenic differentiation of BMSCs. Our results are similar to a previous study, which also demonstrated that KCNQ1OT1 promoted osteogenic differentiation of BMSCs through inhibiting miR-205-5p [43].

More and more evidence suggests that lncRNAs act as competitive endogenous RNAs (ceRNAs) to sponge miRNAs [44]. As reported in previous studies, KCNQ1OT1 has been demonstrated to sponge miR-34c-5p in osteosarcoma [39], miR-125b-5p in ovarian cancer [40], and miR-26a-5p in ischemia-reperfusion [45]. Here, through the Starbase online database, we found that KCNQ1OT1 targets miR-19a-3p. miR-19a-3p has been demonstrated to participate in various diseases such as myocardial ischemia/reperfusion injury [46], sepsis-induced lung injury [47], and multiple myeloma [48]. Most studies have found that miR-19a-3p is sponged by lncRNAs, thereby participating in the progression of diseases. For example, Xiang et al. [49] found that miR-19a-3p promoted the migration and epithelial-mesenchymal transition of breast cancer cells through sponge adsorption by LINC00094. In osteoporosis, Chen et al. [50] demonstrated that lncRNA Xist is a sponge of miR-19a-3p that inhibits BMSC osteogenic differentiation. Similarly, this study found that miR-19a-3p overexpression inhibited BMSC osteogenic differentiation and reversed the role of KCNQ1OT1 in the BMSCs. However, there are contradictions with previous research: Chen et al.
[50] reported a promoting effect of miR-19a-3p on osteogenic differentiation of BMSCs, whereas we observed an inhibitory effect of miR-19a-3p on osteogenic differentiation of BMSCs. We speculate that this may be because they performed their study using BMSCs in aging cell models, while we explored the osteogenic differentiation of normal BMSCs. In addition, different lncRNAs and target genes may lead to different expression and functions of miR-19a-3p. Therefore, further research is still needed to explore the specific mechanism of miR-19a-3p in BMSCs.

Finally, we confirmed that Smad5 is a target gene of miR-19a-3p. Smad5 is a receptor-regulated Smad protein that is a key transcription factor for osteogenic differentiation [34]. Under physiological conditions, Smad5 is mainly located in the cytoplasm. When Smad5 is phosphorylated, it is directed to the nucleus, thereby regulating the expression of osteogenic genes and inducing osteogenic differentiation [51]. According to reports, inhibiting nuclear translocation of p-Smad5 can inhibit osteogenic differentiation of BMSCs [52]. Here, we found that Smad5 knockdown reversed the effects of the miR-19a-3p inhibitor on the growth and osteogenic differentiation of BMSCs. These findings suggest that miR-19a-3p targets Smad5 to influence bone fracture healing.

Taken together, our research suggests that KCNQ1OT1 overexpression promotes the osteogenic differentiation of BMSCs, and that miR-19a-3p overexpression reverses this role: KCNQ1OT1 accelerates osteogenic differentiation of BMSCs via sponging miR-19a-3p. In addition, silencing Smad5 reversed the effect of downregulated miR-19a-3p in BMSCs. KCNQ1OT1 thus acts as a ceRNA to regulate osteogenic differentiation of BMSCs via the miR-19a-3p/Smad5 axis. Overexpression of KCNQ1OT1 may be an alternative for the treatment of bone fracture. However, there is still a limitation in this study: owing to limitations in hospital research conditions, we did not conduct animal experiments to verify the role of KCNQ1OT1 in bone growth and development in vivo. In the future, we aim to establish a fracture mouse model and inject KCNQ1OT1-overexpressing lentivirus to further investigate the role of KCNQ1OT1 in vivo.

[Fig. 1 legend: KCNQ1OT1 overexpression promoted the osteogenic differentiation of BMSCs. A, overexpression efficiency of pcDNA3.1-KCNQ1OT1 was detected by RT-qPCR assay. The BMSCs were then cultured in osteogenic medium and transfected with pcDNA3.1-KCNQ1OT1. B, cell viability was detected by CCK-8 assay. Positive mineralized nodules were stained by ALP (C) as well as Alizarin Red (D). E, the apoptosis rate was measured by flow cytometry. The mRNA (F) and protein (G) expression of ALP, OPG, OCN and OPN was detected by RT-qPCR and western blot assays. **P < 0.01]

[Fig. 3 legend: Elevated KCNQ1OT1 accelerates osteogenic differentiation of BMSCs via sponging miR-19a-3p. A, overexpression efficiency of miR-19a-3p mimic was detected by RT-qPCR assay. The BMSCs were then cultured in osteogenic medium and transfected with pcDNA3.1-KCNQ1OT1 and miR-19a-3p mimic. B, cell viability was detected by CCK-8 assay. Positive mineralized nodules were stained by ALP (C) as well as Alizarin Red (D). E, the apoptosis rate was measured by flow cytometry. The mRNA (F) and protein (G) expression of ALP, OPG, OCN and OPN was detected by RT-qPCR and western blot assays. **P < 0.01]
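The group comparisons reported above were run as one-way ANOVA with Newman-Keuls post hoc tests in GraphPad Prism. A sketch of the equivalent omnibus test in SciPy, with hypothetical triplicate values standing in for the n = 3 replicates (SciPy provides no Newman-Keuls procedure, so only the omnibus F test is shown):

```python
import numpy as np
from scipy import stats

# Hypothetical normalized expression values for three triplicate groups.
control = np.array([1.00, 0.95, 1.05])
oe_kcnq1ot1 = np.array([2.10, 1.95, 2.25])
oe_plus_mimic = np.array([1.20, 1.10, 1.30])

f_stat, p_value = stats.f_oneway(control, oe_kcnq1ot1, oe_plus_mimic)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```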
Aberrant NF-KappaB Expression in Autism Spectrum Condition: A Mechanism for Neuroinflammation Autism spectrum condition (ASC) is recognized as having an inflammatory component. Post-mortem brain samples from patients with ASC display neuroglial activation and inflammatory markers in cerebrospinal fluid, although little is known about the underlying molecular mechanisms. Nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB) is a protein found in almost all cell types and mediates regulation of immune response by inducing the expression of inflammatory cytokines and chemokines, establishing a feedback mechanism that can produce chronic or excessive inflammation. This article describes immunodetection and immunofluorescence measurements of NF-κB in human post-mortem samples of orbitofrontal cortex tissue donated to two independent centers: London Brain Bank, Kings College London, UK (ASC: n = 3, controls: n = 4) and Autism Tissue Program, Harvard Brain Bank, USA (ASC: n = 6, controls: n = 5). The hypothesis was that concentrations of NF-κB would be elevated, especially in activated microglia in ASC, and pH would be concomitantly reduced (i.e., acidification). Neurons, astrocytes, and microglia all demonstrated increased extranuclear and nuclear translocated NF-κB p65 expression in brain tissue from ASC donors relative to samples from matched controls. These between-groups differences were increased in astrocytes and microglia relative to neurons, but particularly pronounced for highly mature microglia. Measurement of pH in homogenized samples demonstrated a 0.98-unit difference in means and a strong (F = 98.3; p = 0.00018) linear relationship to the expression of nuclear translocated NF-κB in mature microglia. Acridine orange staining localized pH reductions to lysosomal compartments. In summary, NF-κB is aberrantly expressed in orbitofrontal cortex in patients with ASC, as part of a putative molecular cascade leading to inflammation, especially of resident immune cells in brain regions associated with the behavioral and clinical symptoms of ASC. IntroductIon Autism spectrum condition (ASC) is a life-long neurodevelopmental condition characterized by a triad of impairments in social skills, verbal communication, and behavior (Rapin, 1997;Lord et al., 2000). Cognitively, ASC is described as a disorder involving fundamental deficits in central coherence (Frith, 1989), executive function (Ornitz et al., 1993), theory of mind (Baron-Cohen et al., 1985), and empathizing (Baron-Cohen, 2002). Continuing investigations for a neurobiological basis for ASC support the view that genetic, environmental, neurological, and immunological factors contribute to its etiology (Neuhaus et al., 2010). In particular, there is evidence to suggest an association between ASC and neuroinflammation in anterior regions of the neocortex Vargas et al., 2005;Zimmerman et al., 2005), and areas relating to cognitive function appear to be affected by inflammation resulting from activation of microglia and astrocytes (Anderson et al., 2008). In vivo measurements of structural brain changes with magnetic resonance imaging have detected gray matter loss in the orbitofrontal cortex (Hardan et al., 2006;Girgis et al., 2007) and impairment of cognitive functions mediated by the orbitofrontal-amygdala circuit (Loveland et al., 2008) in patients with ASC. Furthermore, Aberrant NF-kappaB expression in autism spectrum condition: a mechanism for neuroinflammation Young et al. 
Aberrant NF-kappaB expression in ASC NF-κB, and thus its potential for nuclear translocation. The expression of active NF-κB translocated to the cell nuclei was then measured directly, where it binds to DNA and transcribes proteins that result in the production of cytokines as part of an inflammatory response. Confirmation of the immunodetection and immunofluorescence results in neurons and mature microglia was sought from an independent source of micro-array tissue slides donated from ASC patient and control groups. Finally, pH was measured in homogenized tissue and compared to the corresponding intracellular NF-κB p65 expression from Western immunodetection. Acridine orange staining allowed measurements of pH localized to lysosomes. MaterIals and Methods tIssue saMples The UK cohort consisted of seven samples of fixed orbitofrontal cortex sections from four control and three ASC donors obtained from the Medical Research Council's London Brain Bank (Institute of Psychiatry, King's College London). Samples were age, sex, and post-mortem interval (PMI) matched (Table 1). Protein extraction from formalin fixed tissue was performed according to Shi et al. (2006), where tissue sections were placed in 50 μl of 20 mM Tris-HCl buffer pH 8 with 2% SDS plus pepstatin 10 μg/ml and heated to remove formalin cross-links. Samples of the US cohort were sections from six ASC patients and five age, sex, and PMI matched control donors obtained as tissue micro-arrays (Eberhart et al., 2006) from the Autism Tissue Program (Harvard Brain Tissue Resource Center, Boston). One patient had a diagnosis of Rett syndrome, a neurodevelopmental brain inflammation (Liu and Hong, 2003;Barger, 2005;Kim and Joh, 2006) and pro-inflammatory treatments of microglia acidify cellular lysosomes to a pH ∼5 (Majumdar et al., 2007). Preliminary characterization of this mechanism implicates activation of protein kinase A (PKA) and the activity of chloride channels (Majumdar et al., 2007). The transcriptional activity of NF-κB is stimulated upon phosphorylation of its p65 subunit on serine 276 by PKA (Zhong et al., 1998) and in turn PKA is a downstream target of the transcription factor (Kaltschmidt et al., 2006). With this in mind we postulated that an association may exist between the transcription factor and lysosomal acidity. This article describes measurement of NF-κB p65 expression levels and pH in post-mortem samples of orbitofrontal cortex from patients with a diagnosis of ASC and control samples from people healthy at the time of death. We hypothesized that concentrations of NF-κB would be elevated in patients and pH would be concomitantly reduced (i.e., acidification), providing evidence for a neuroinflammatory component to ASC. This hypothesis was initially tested by Western immunodetection of post-mortem brain tissue to measure overall, nuclear and cytosolic NF-κB expression. Investigations were then focused upon microglial cells due to their role in pro-inflammatory response, as these most strongly mediate aberrant expression of NF-κB. Antigen retrieval and immunofluorescence techniques were used to identify the differential concentrations of intracellular NF-κB in neurons, astrocytes, microglia, and highly activated (i.e., mature or functional) microglia. Immunoreactivity measurements were initially carried out to determine the concentration of NF-κB in the cytoplasm of each cell type as an indication of the availability of inactive microglia (Hughes et al., 2003). 
Tissue Separation

Tissue samples from the UK cohort were processed for nuclear and cytosolic extraction using two separation buffers. The tissue was homogenized in 10 volumes of buffer 1 (Tris 10 mM, NaH2PO4 20 mM, EDTA 1 mM, pH 7.8, PMSF 0.1 mM, pepstatin 10 μg/ml, and leupeptin 10 μg/ml). Homogenate was incubated for 20 min and osmolarity restored by adding 1/20 volume of KCl 2.4 M, 1/40 volume of NaCl 1.2 M, and 1/5 volume of sucrose 1.25 M. Samples were spun for 5 min at 3,500 rpm, the supernatant removed, and the pellet resuspended in 0.6 M sucrose and spun for a further 10 min at 10,000 rpm. Subsequently, the supernatant was diluted in buffer 2 (imidazole 30 mM, KCl 120 mM, NaCl 30 mM, NaH2PO4, sucrose 250 mM, pH 6.8, with protease inhibitors pepstatin 10 μg/ml and leupeptin 10 μg/ml) and spun again at 3,500 rpm for 15 min. The resultant pellets contained the remaining nuclear proteins, and the supernatants the cytosolic proteins.

Western Immunodetection

Protein samples run on SDS-polyacrylamide gels were electroblotted to nitrocellulose membranes (Schleicher & Schuell Bioscience GmbH, Germany), blocked with 5% non-fat dry milk in phosphate buffered saline with 0.1% Tween-20 (PBST), and probed with a 1:1,000 dilution of anti-p65 antibody (Santa Cruz Biotechnology). PBST-washed membranes were then incubated with HRP-conjugated goat anti-rabbit antisera (Sigma Ltd, UK), and developed with enhanced chemiluminescence reagents (Pierce Ltd, UK). Signal was detected using a LAS 3000 image analyzer (Fujifilm, Japan) and bands quantified using ImageJ software.

Immunofluorescence

Tissue samples were fixed in formalin and embedded in paraffin blocks. Slides were deparaffinized in xylene three times, each for 5 min, then hydrated gradually through graded alcohols: washed in 100% ethanol twice for 10 min each, then 95% ethanol twice for 10 min each, and finally washed in deionized water for 1 min with stirring. Sections were cut 5 μm thick and mounted onto these pre-treated slides. Antigen retrieval was carried out in a pressure microwave where slides were covered in 10 mM sodium citrate buffer pH 6.0. After cooling for 20 min, sections were blocked in 10% normal goat block for 15 min and placed in anti-p65 NF-κB antibody 1:200 (Santa Cruz Biotechnology Inc., Santa Cruz, CA, USA) overnight at 4°C. Sections were washed continuously for 5 min and placed in HRP-conjugated goat anti-rabbit antisera 1:200 (Molecular Probes Inc., Eugene, OR, USA) for 30 min at room temperature. This was followed by a final 5 min wash, and slides were mounted with 4′,6-diamidino-2-phenylindole (DAPI) with Vectashield (Vector Labs Ltd, UK) to identify cell nuclei.

One hundred cells of each type (neurons, astrocytes, and microglia) from each sample were selected at random, blinded from group (i.e., ASC or control). Cells were graded for immunoreactivity according to the intensity of antibody signal within the cell on an integer scale of 0-3 (Schmidt and Bankole, 1965). For each cell type, the percentage of cells with weak (scale 0-1) and strong (scale 2-3) intensity was calculated. Antigen-positive cells were also counted in each sample to quantify the intensity of anti-p65 signal in the nucleus, providing a measurement of nuclear translocation of NF-κB p65 and thus the active state of the molecule within each cell type.

Measurement of pH

Equal volumes of neural tissue were homogenized using a mortar and pestle in 10 volumes of deionized water at 4°C. The pH of the homogenate was measured using a MeterLab PHM201 pH meter (Radiometer Analytical, Villeurbanne Cedex, France) calibrated with two standards at pH 4 and 7. Measurements were made eight times over 4 days and averaged to yield a final value. Using the protocol described by Morgan and Galione (2007) and Lee et al. (1983), the pH of lysosomes was measured. Tissue sections were simultaneously loaded with 10 μM acridine orange and 1 μM Lysotracker Red DND-99 for 15-20 min at room temperature, at which time the fluorescence had reached equilibrium; the dyes were present throughout the rest of the experiment. Acridine orange responds rapidly and profoundly to changes in pH, whereas Lysotracker Red responds only relatively slowly and remains essentially fixed throughout the experiment. Results are expressed as the ratio of the acridine orange/Lysotracker Red signals, such that an increase in the ratio reflects an increase in pH.

Statistical Analysis

Small sample sizes precluded valid formal between-group (i.e., ASC vs. control) statistical tests. Nevertheless, in many cases there was no overlap in the values obtained for each group. Unless indicated, all data are expressed as mean ± standard error across the samples. Statistical testing of within-group correlation was undertaken using SPSS (v17, SPSS Inc.), with the level for significance set at p < 0.05.

Results

Expression of NF-κB p65 in Neural Tissue: UK Cohort

Western immunodetection of neural tissue samples analyzed for overall NF-κB p65 expression is shown in Figure 1A. Densitometry demonstrated a 2.9-fold increase of NF-κB p65 expression in ASC samples (Figure 1B). Nuclear translocation of NF-κB p65 is predominately associated with the activation of the transcription factor. Separated tissue lysates were used to determine the subcellular location of p65 expression. In control tissue, NF-κB p65 was mainly located within the cellular cytoplasm, whereas in tissue samples from ASC patients the expression was predominately within the nucleus. Western immunodetection of separated neural tissues analyzed for NF-κB p65 expression for the UK cohort is shown in Figure 1C.

Expression of NF-κB p65 in Neurons, Astrocytes, and Microglia: UK Cohort

The immunoreactivity intensity from neurons stained by anti-Beta III Tubulin in tissue from ASC patients predominately scored in the range 2-3 (78%), whilst neurons in tissue from controls predominately scored in the range 0-1 (Table 2). Between-group differences in the immunoreactivity of astrocytes stained by anti-GFAP were observed, with 80% of cells from ASC samples scoring 2-3 compared to 56% of cells from control samples scoring 0-1 (Table 2). Differences in nuclear NF-κB p65 expression between ASC and control tissue samples were also observed: on average, 88.00 ± 4.00% of astrocytes in tissue from ASC patients demonstrated nuclear localization of the transcription factor compared to 33.00 ± 3.16% in control samples (Figure 3A). Differences in immunoreactivity for CD11b positive microglia were also observed, with 81% of cells from ASC samples scoring 2-3 compared to 59% of cells from control samples scoring 0-1 (Table 2). The between-group difference in nuclear translocation in these cells was also more pronounced than in neurons: in CD11b positive microglia, 93.67 ± 3.00% of cells from samples from ASC patients expressed nuclear NF-κB p65 compared to 64.25 ± 1.26% from controls (Figure 3B). Similarly, CD11c positive (highly active, mature) microglia in samples from ASC donors had raised levels of immunoreactivity, with 88% of cells from ASC samples scoring in the range 2-3 compared to 58% of cells from control samples scoring in the range 0-1 (Table 2). Nuclear NF-κB p65 expression in CD11c positive microglia was seen in 89.67 ± 2.08% of cells from ASC samples and 34.00 ± 2.16% from controls (Figure 3C). Furthermore, using 20 visual fields randomly selected blind to group, the number of active CD11c positive cells present was 3.75 times greater in tissue from ASC donors (Figure 3E). These between-group differences were increased in astrocytes and microglia relative to neurons, but were particularly pronounced for highly active microglia identified by anti-CD11c staining.
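The percentages reported above are simple proportions over the 100 graded cells per sample. The following is a minimal sketch in Python (not the authors' code; the grade data are hypothetical) of how the weak (0-1) and strong (2-3) immunoreactivity percentages can be computed for one cell type:

from collections import Counter

def summarize_grades(grades):
    # grades: integer immunoreactivity scores (0-3), one per
    # randomly selected, group-blinded cell
    counts = Counter(grades)
    n = len(grades)
    weak = 100.0 * (counts[0] + counts[1]) / n
    strong = 100.0 * (counts[2] + counts[3]) / n
    return weak, strong

# Hypothetical grades for 100 neurons from one ASC sample
grades = [3] * 45 + [2] * 33 + [1] * 15 + [0] * 7
weak, strong = summarize_grades(grades)
print(f"weak (0-1): {weak:.0f}%, strong (2-3): {strong:.0f}%")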
Expression of NF-κB p65 in Neurons and Microglia: US Cohort

Immunoreactivity measurements of samples from the US cohort stained by anti-Beta III Tubulin for neuronal identification revealed that 69% of cells from ASC samples scored 2-3, whilst 63% of cells from control samples scored 0-1 (Table 2). Due to limited sample volumes, processing and analysis for astrocytes and CD11b positive microglia was not undertaken. These data confirmed results derived from the UK cohort, with differences in nuclear translocation of NF-κB p65 in neurons 6.23% points higher for ASC samples from the UK cohort and 4.67% points higher from the US cohort. In CD11c positive, highly activated microglia, between-group differences in nuclear translocation of NF-κB p65 were similarly elevated in the US cohort (68.50% points higher in ASC samples) and the UK cohort (55.67% points higher).

In summary, all cell types demonstrated increased extranuclear and nuclear translocated NF-κB p65 expression in samples of brain tissue from ASC donors relative to samples from matched controls.

Differences in pH and Relationship to NF-κB p65 Expression: UK Cohort

Measurement of homogenized tissue yielded a 0.92-unit pH between-group difference (Figure 4A), decreased in ASC samples relative to control samples from the UK cohort. The relationship between pH and NF-κB p65 expression was explored by linear regression and a highly significant effect observed [F(1,5) = 98.3; p = 0.00018; Figure 4B]. Aberrant pH was localized to subcellular compartments by immunofluorescence: the low pH observed in homogenized tissue from ASC samples appears to be a result of a reduced pH in the lysosomal compartments of cells. Tissue from ASC patients had lysosomes that fluoresced green, whereas that from controls fluoresced orange (Figures 4C,D).
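For a simple regression of homogenate pH on NF-κB p65 expression with seven samples, the residual degrees of freedom are 5, consistent with the F(1,5) statistic quoted above. A sketch of the computation in Python (the data points are hypothetical placeholders, not the study's measurements):

from scipy import stats

nfkb = [1.0, 1.2, 1.4, 2.6, 2.9, 3.1, 3.3]    # nuclear p65 expression (a.u.)
ph = [6.9, 6.8, 6.75, 6.0, 5.9, 5.85, 5.8]    # homogenate pH

res = stats.linregress(nfkb, ph)
n = len(ph)
# for simple linear regression, F = r^2 / (1 - r^2) * (n - 2) on (1, n-2) df
f_stat = res.rvalue ** 2 / (1 - res.rvalue ** 2) * (n - 2)
print(f"slope={res.slope:.3f}, r2={res.rvalue ** 2:.3f}, "
      f"F(1,{n - 2})={f_stat:.1f}, p={res.pvalue:.2g}")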
Discussion

An emerging focus of research into ASC has suggested neuroinflammation as an underlying biological model, with evidence from irregular cytokine profiles in the cerebrospinal fluid of children with ASC, together with activated astrocytes and microglia in post-mortem brain tissue. Nevertheless, the underlying molecular events remain unclear. In this article, for the first time to our knowledge, we report the aberrant expression of a pro-inflammatory transcription factor, NF-κB, in samples donated to the London Brain Bank (UK cohort) and the Harvard Brain Tissue Resource Center (US cohort). This discovery could play a major role in refining diagnostic tests and therapeutic interventions for ASC.

Excess NF-κB p65 expression was observed in cytosolic, but predominantly nuclear, compartments in ASC samples (Figure 1). These relative increases were subsequently localized to neurons, astrocytes, and microglia, but were particularly pronounced in highly activated (CD11c positive) microglia. Furthermore, nuclear translocation of NF-κB suggests activation of the molecule. NF-κB induces the expression of inflammatory cytokines and chemokines and, in turn, is induced by them (Barnes and Karin, 1997; Pahl, 1999). This establishes a positive feedback mechanism (Perkins, 2004), which has the potential, when NF-κB becomes aberrantly active, to produce the chronic or excessive inflammation associated with several inflammatory diseases (Barnes and Karin, 1997; Mattson et al., 2000; Mattson and Meffert, 2006; Memet, 2006).

Primarily in neurons, NF-κB is activated in order to provide a protective function. A small, 6% point difference between ASC and control groups suggests the presence of extensive stress on neurons in ASC is unlikely. For confirmation, the cell morphology of neurons was screened (Mpoke and Wolfe, 2003) for signs of apoptosis or necrosis to assess the relative rates of cell death. There were minimal, if any, differences between groups, an observation that concurs with work by Hausmann et al. (2004), who reported that apoptosis was not detected in non-traumatically injured brain tissue when the PMI was less than 72 h. The samples reported in this study fall into this category (Table 1). Although needing to be confirmed at the molecular level, this may well be a key finding, as it demonstrates the potential reversibility of the condition, something not commonly observed in many neurological disorders where there is high irreversible cell death.

The elevated nuclear translocation in ASC samples (Figure 3) supports previous work on astrocyte and microglia activation in the condition (Vargas et al., 2005; Zimmerman et al., 2005; Anderson et al., 2008). The activation of microglia induces an array of cellular events which accumulate to reduce neural function.

Confirmation of the immunofluorescence results was obtained from an independent set of samples from the Autism Tissue Program at the Harvard Brain Tissue Resource Center (US cohort). Close correspondence in the magnitude and direction of between-group differences with the UK cohort was observed. However, it is worthy of note that samples from the US cohort were donated by people very much younger than those of the UK cohort. Thus, as well as validating the results from the UK cohort, the observation of aberrant expression of NF-κB can be extended to cover an age range from 5 to 40 years.

While the origin of inflammatory signaling in ASC remains undetermined, genetic or epigenetic factors are mechanisms which could subsequently up-regulate the NF-κB signaling cascade. Animals subject to prenatal immunological challenges during early gestation subsequently displayed marked learning deficits (Meyer et al., 2006) and morphological brain changes post-natally (Li et al., 2009). Extracellular detection of pathogens by toll-like receptors leads to signaling pathways resulting in over-expression of NF-κB. Theoretically, this would allow the range of environmental stimuli associated with the condition to act on a central node of its inflammatory component. Supported by the increase in NF-κB expression at the protein level, an inherited component is the most likely reason why the chronic inflammatory state is maintained throughout adulthood.

The ∼1 unit pH difference observed in homogenate brain tissue from controls and ASC patients (Figure 4A) appears to be a result of increased lysosomal activity (Figures 4C,D). Coupled to the highly significant linear relationship between pH and NF-κB (Figure 4B), the inference is that acidification does not influence cognitive function directly, but is a consequence of neuroinflammation. This is potentially of interest more widely, as previous studies have identified a potential link between low pH of homogenized tissue and learning disabilities (Rae et al., 2003) as well as Alzheimer's disease (Majumdar et al., 2007).

Consistent observations of pH reduction in brain tissue from patients with schizophrenia have unclear origins, with medication and cause-of-death effects suggested in addition to it reflecting features of the disorder (Lipska et al., 2006; Halim et al., 2008). Thus, there remains the possibility that reductions in pH represent agonal artifact; indeed, ante mortem hypoxia and long terminal phases, as well as gender, are known to lead to pH reductions in post-mortem brain tissue (Monoranu et al., 2009), although there is no correlation with PMI or age at death. In this study, donors were matched on all these quantities (Table 1). The linear modeling between NF-κB concentrations and pH is highly significant and, furthermore, is located in the lysosomes (Figure 4). Nevertheless, a post-mortem change in pH from chemical cascades involving NF-κB cannot be excluded. Should further experimentation confirm the relationship between these cellular markers of inflammation and pH, then this may be a potential biomarker for diagnosis and response to therapeutic interventions. Measurements of in vivo intracellular pH can be achieved non-invasively with phosphorous-31 magnetic resonance spectroscopy (Pettegrew et al., 1988) or magnetization transfer techniques (Sun et al., 2007).

Conclusion

To summarize: NF-κB is aberrantly expressed in the orbitofrontal cortex, as indicated by measurements on post-mortem tissue from ASC patients, and particularly in highly activated microglia. This region is a locus of abnormal function in ASC that underlies the abnormal development of social and cognitive skills (Sabbagh, 2004). This is the first discovery of its kind that identifies a potential mechanism for neuroinflammation in ASC through increased expression of this pro-inflammatory molecule and the significant involvement of resident immune cells. The connection of this result to changes in intracellular acidity motivates an investigation of pH across the entire brain parenchyma in living patients. Whilst evidence of a causal link remains to be established, the observation that induction of inflammation via the NF-κB signaling cascade occurs in regions of the neocortex associated with behavioral and clinical symptoms of ASC gives credence and impetus to interventions focusing on this potential therapeutic target.

Acknowledgments

This work was supported by grants to Adam Young from Archimedes Pharmaceuticals, the University of St Andrews, and the Pathological Society of Great Britain and Ireland. We are grateful to J. Pickett and C. Eberhart at the Autism Tissue Program and Harvard University, and C. Troakes at the King's College London Brain Bank, Institute of Psychiatry, for providing samples. We thank Dr G. Cramb for his helpful comments and assistance. Finally, we extend our gratitude to the families of the donors for supporting the tissue collection programs.

References

Akiyama, H., Nishimura, T., Kondo, H., Ikeda, K., Hayashi, Y., and McGeer, P. L. (1994). Expression of the receptor for macrophage colony stimulating factor by brain microglia and its upregulation in brains of patients with Alzheimer's disease and amyotrophic lateral sclerosis. Brain Res. 639, 171-174.
Anderson, M. P., Hooker, B. S., and Herbert, M. R. (2008). Bridging from cells to cognition in autism pathophysiology: biological pathways to defective brain function and plasticity. Am. J. Biochem. Biotechnol. 4, 167-176.
Barger, S. W. (2005). Vascular consequences of passive Abeta immunization for Alzheimer's disease. Is avoidance of "malactivation" of microglia enough? J. Neuroinflammation 2, 2.
Barnes, P. J., and Karin, M. (1997). Nuclear factor-kappaB: a pivotal transcription factor in chronic inflammatory diseases. N. Engl. J. Med. 336, 1066-1071.
Baron-Cohen, S. (2002). The extreme male brain theory of autism. Trends Cogn. Sci. 6, 248-254.
Baron-Cohen, S., Leslie, A. M., and Frith, U. (1985). Does the autistic child have a "theory of mind"? Cognition 21, 37-46.
Korean TimeML and Korean TimeBank

Many emerging documents contain temporal information. Because temporal information is useful for various applications, it has become important to develop systems for extracting temporal information from documents. Before developing such a system, it is first necessary to define or design the structure of temporal information; in other words, to design a language that defines how to annotate it. There have been some studies of such annotation languages, but most of them are applicable only to a specific target language (e.g., English). Thus, it is necessary to design an individual annotation language for each language. In this paper, we propose a revised version of the Korean Time Mark-up Language (K-TimeML), and also introduce a dataset, named Korean TimeBank, that is constructed based on the K-TimeML. We believe that the new K-TimeML and Korean TimeBank will be used in much further research on the extraction of temporal information.

Introduction

Due to the exponentially increasing number of documents available on the Web and from other sources, it has become important to develop methods to automatically extract knowledge from unstructured, natural language documents. The knowledge extracted as such is useful for various applications in the areas of information retrieval (IR), trend analysis (TA), and question answering (QA) systems. Among the many aspects of extracting knowledge from documents, the extraction of temporal information has recently drawn attention. There are two well-known annotation languages for temporal information, Time Markup Language (TimeML) (Pustejovsky et al., 2003) and ISO-TimeML. Although these annotation languages define many tags and attributes for representing various types of temporal information, they do not incorporate language diversity. For example, they assume that annotation is performed at the token level. However, Korean is an agglutinative language whose words are formed by joining morphemes together, so it cannot be annotated properly at the token level. As an annotation language for Korean, the Korean TimeML (KTimeML) was proposed (Im et al., 2009), and its contributions can be summarized as follows: (1) it employs a morpheme-level standoff annotation scheme, (2) it takes a surface-based annotation scheme, (3) it suggests canceling the head-only markup policy of TimeML, (4) it addresses several Korean-specific issues (e.g., the usage of the signal tag only for temporal connectives), and (5) it introduces the TARSQI Toolkit for the annotation process following the KTimeML. In this paper, we argue that the KTimeML has some limitations, and propose a revised version of the KTimeML. For example, the previous KTimeML did not consider some characteristics of Korean (e.g., the lunar calendar), and the morpheme-level annotation of the KTimeML makes it difficult to share the dataset. Our new KTimeML overcomes such limitations, and we also introduce the Korean TimeBank constructed using the new KTimeML. The rest of this paper is organized as follows. Section 2 presents details of the Korean TimeML. Section 3 introduces the Korean TimeBank, and Section 4 concludes the paper.

Limitations of the Previous Korean TimeML

We argue that the previously proposed KTimeML has five limitations. First, although it was proposed as an annotation language for Korean, it misses some characteristics of Korean.
Temporal expressions based on the lunar calendar appear often, and the normalized value of such temporal expressions cannot be represented using the Gregorian calendar. For example, for the sentence "어머니 생신은 4월 4일이다" (Mother's birthday is on the 4th day of the 4th month in the lunar calendar), the normalized value of 'the 4th day of the 4th month' will be different in different years of the Gregorian calendar (e.g., '2015-05-21' for the year 2015, '2014-05-02' for the year 2014). Moreover, there are temporal expressions conveying vague temporal information that appear often in Korean. For example, '초중반[cho-joong-ban]' represents the beginning or middle phase of a period, and '중후반[joong-hoo-ban]' represents the middle or ending phase of a period. There is no way to annotate these expressions using the previous KTimeML.

Second, there are temporal expressions conveying periodic patterns that cannot be annotated using the previous KTimeML. For the sentence "I visit there twice every week, each of which takes one day", there is no way to annotate the expression 'every week' because the attribute freq of the timex3 tag cannot represent 'twice' and 'one day' simultaneously. The reason for this limitation is the inconsistent usage of the attribute freq: freq is used to annotate not only a periodic frequency (e.g., 'twice') but also a periodic duration (e.g., 'one day'). When these two periodic patterns appear simultaneously, the temporal expression cannot be annotated properly using the previous KTimeML.

Third, the previous KTimeML takes a morpheme-level annotation. From a linguistic point of view, morpheme-level annotation seems perfect because the smallest meaningful unit of Korean is the morpheme. However, from a practical point of view, morpheme-level annotation makes it difficult to distribute or share the dataset. The reason is that there are multiple tag-sets of morphemes, so datasets using different tag-sets will not be consistent with each other. Even if all the datasets are commonly based on a single tag-set, they will not be consistent unless they use the same morphological analyzer. Because the essential purpose of an annotation language is to help distribute or share datasets, morpheme-level annotation must be avoided.

Fourth, different attribute names are used to denote the IDs of different tags. For example, tid is used for timex3 tags, and lid is used for tlink tags. One may argue that using different attribute names to denote IDs makes the various kinds of tags easier to recognize. However, in terms of further applications that make use of temporal information, it is not necessary to use various attribute names to denote tag IDs, since the kind of tag is already known when its attributes are parsed. Rather, using different names to denote IDs makes it complex to implement programs that parse the tag attributes.

Fifth, similar to ISO-TimeML, an event tag plays two roles: the role of an event token and the role of an event instance. Given the sentence "Kevin taught English yesterday and today", there will be two event tags as follows.

<EVENT eid="e1" morph="m1" pred="TEACH" class="OCCURRENCE" tense="PAST" polarity="POS"/>
<EVENT eid="e2" pred="TEACH" class="OCCURRENCE" tense="PAST" polarity="POS"/>

The first event tag has the two roles (i.e., the role of an event token and the role of an event instance), while the second event tag has only the role of an event instance.
From a practical point of view, this inconsistent functionality of event tags may cause difficulty in parsing annotated event tags, resulting in inefficiency in further applications. In other words, it would be necessary to implement a program to recognize the role of each event tag, which would slow down the applications.

Modified Korean TimeML

To address the limitations of the previous KTimeML, we revise the KTimeML by introducing some additional attributes and modifying some existing attributes. In terms of the first limitation, we add an attribute calendar to the timex3 tag to denote the calendar type, where its value can be LUNAR for dates given in the lunar calendar.

To address the second limitation, we introduce an additional attribute prd of the timex3 tag, used to represent periodic duration based on ISO-8601. The existing attribute freq of timex3 is also modified so that it represents only the periodic frequency. This role separation between freq and prd makes it possible to annotate temporal expressions that could not be annotated using the previous KTimeML. For example, given the sentence "I visit there twice every week, each of which takes three hours", the timex3 tag of 'every week' will have freq='2X' and prd='PT3H'.

To address the third limitation, we propose taking a character-level annotation. Character-level annotation makes the dataset independent of any morpheme tag-set and of morphological analysis, which in turn makes it easy to distribute or share the dataset. To realize character-level annotation, we replace the attribute morph with five attributes: e_begin, e_end, begin, end, and text. The attributes e_begin and e_end indicate token indices of the extent, while begin and end indicate character (letter) indices of the extent. For example, the sentence "I work today" in Fig. 1 contains one timex3 tag whose text is 'today', where e_begin=2, e_end=2, begin=0, and end=4. One may argue that e_begin and e_end represent token-level information, so it seems that the proposed annotation does not take character-level annotation. It is true that these two attributes may seem unnecessary, because the other two attributes, begin and end, carry the character-level information. However, it is important to notice that the annotation language is used not only by computers but also by humans. Using only begin and end is enough for computers to work, but not for human annotators. The usage of e_begin and e_end helps humans easily annotate a new corpus or check an annotated corpus. For example, given the sentence "There were many works I had to do, but I left it yesterday", human annotators have to annotate 'yesterday' with a timex3 tag. Without e_begin and e_end, annotation becomes hard, because the annotators have to count the number of preceding characters. Furthermore, for the same reason, it is also more difficult to read or check whether an annotated timex3 tag is correct. Thus, the two attributes are necessary to help human annotators.

To address the fourth limitation, we simply use the same attribute name id for every tag, as ISO-TimeML does. To address the fifth limitation, we employ a makeinstance tag, which is also adopted by the TempEval shared tasks (Verhagen et al., 2009; Verhagen et al., 2010; UzZaman et al., 2013). The makeinstance tag takes the role of an event instance, while the event tag has only the role of an event token.
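As a concrete illustration of the character-level scheme, the sketch below (an assumed helper in Python, not code from the paper; the TIMEX3 casing and the value attributes are illustrative) reproduces the "I work today" example, and also shows the freq/prd pair for the "twice every week, each of which takes three hours" example:

def extent(sentence, target):
    # e_begin/e_end are token indices; begin/end are inclusive
    # character indices within the token span, per the scheme above
    tokens = sentence.split()
    for i, tok in enumerate(tokens):
        if tok == target:
            return {"e_begin": i, "e_end": i, "begin": 0,
                    "end": len(tok) - 1, "text": tok}
    raise ValueError(f"{target!r} not found")

a = extent("I work today", "today")
print('<TIMEX3 id="t1" type="DATE" value="2016-05-01" '
      'e_begin="{e_begin}" e_end="{e_end}" begin="{begin}" '
      'end="{end}" text="{text}"/>'.format(**a))
# -> e_begin="2" e_end="2" begin="0" end="4" text="today"

# periodic pattern: frequency and duration annotated separately
print('<TIMEX3 id="t2" type="SET" value="P1W" freq="2X" prd="PT3H" '
      'text="every week"/>')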
This clear separation of the two roles will help further applications easily analyze event tags. As there is at least one instance for each event token, the number of event tags is always smaller than or equal to the number of makeinstance tags.

Korean TimeBank

There are some existing Korean datasets of temporal information. A Korean dataset constructed using timex2 was introduced (Jang et al., 2004), where timex2 is the former version of timex3. The first Korean dataset using timex3 appeared in TempEval-2, which provides datasets for six languages: Chinese, English, French, Italian, Spanish, and Korean. However, the Korean dataset of TempEval-2 is small in size (26 documents in total) and has many annotation errors. There are some missing values of timex3 tags, and there are some tags that must be merged into one. An example of these errors can be found in the 11th sentence of the 2nd training document within the TempEval-2 Korean dataset. Moreover, it is annotated at the morpheme level, which implies that it will not be consistent with other datasets. Thus, we introduce a new Korean dataset, namely the Korean TimeBank, which is annotated at the character level. The sources of the Korean TimeBank include Wikipedia documents and hundreds of manually generated question-answer pairs. The domains of the Wikipedia documents are personage, music, university, and history. The annotation is performed by two well-trained annotators majoring in computer science and examined by a supervisor. The statistics of the Korean TimeBank are summarized in Table 1, and the Korean TimeBank will be extended regularly. The Kappa coefficient κ is described in Table 2. Similar to the TempEval tasks, it adopts four tags: timex3, event, makeinstance, and tlink. The main target application of the Korean TimeBank is question answering (QA) systems, so a part of the new KTimeML is adopted with consideration of the target application. The adopted attributes of the timex3 tag are id, type, value, beginPoint, endPoint, e_begin, e_end, begin, end, text, freq, prd, quant, mod, calendar, and comment. The adopted attributes of the event tag are id, class, e_begin, e_end, begin, end, text, and comment. The adopted attributes of the makeinstance tag are id, eventID, polarity, tense, POS, modality, cardinality, and comment. The adopted attributes of the tlink tag are id, eventInstanceID, timeID, relatedToEventInstance, relatedToTime, relType, and comment. The annotated tags of each document are saved as a separate file in XML format, and a sample file is shown in Fig. 2. The id of the document in Fig. 2 is a file name, and a url indicates the source of the document. The category is the category of the document (e.g., the category of the Wikipedia document), and date represents the Document Creation Time (DCT). The contents element contains the original sentences, while timeAnnotation contains pairs of an original sentence and its annotated tags. This stand-off scheme allows the original sentences to be kept unharmed. Each annotationInfo within timeAnnotation contains the pair of an original sentence and the tags within that sentence, where sentence_id is the index of the sentence. The text of annotationInfo is the original sentence, and tag contains the annotated tags.

Conclusion

As there are several limitations of the previous Korean TimeML (KTimeML), we proposed a new modified version of KTimeML and introduced a Korean TimeBank constructed using a part of the new KTimeML.
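The stand-off layout described above can be consumed straightforwardly. Below is a minimal parsing sketch in Python; the element and attribute spellings are assumptions inferred from the description, not the official schema:

import xml.etree.ElementTree as ET

sample = '''<document id="doc001" url="https://en.wikipedia.org/..."
          category="personage" date="2016-05-01">
  <contents>I work today</contents>
  <timeAnnotation>
    <annotationInfo sentence_id="0">
      <text>I work today</text>
      <tag>&lt;TIMEX3 id="t1" type="DATE" value="2016-05-01" text="today"/&gt;</tag>
    </annotationInfo>
  </timeAnnotation>
</document>'''

root = ET.fromstring(sample)
print("DCT:", root.get("date"))
for info in root.find("timeAnnotation"):
    print("sentence", info.get("sentence_id"), ":", info.findtext("text"))
    print("  tags:", info.findtext("tag"))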
We believe that the Korean TimeBank will be widely used in many Korean-based studies and applications related to temporal information, as it is the first high-quality Korean dataset that, being annotated at the character level, is independent of any morpheme tag-set or morphological analysis tool.
The Most Considered Type of Student Characteristics by Primary School Teacher

This study aims to determine which type of student characteristic is most considered by teachers in primary school, and its influence on student learning achievement. For this purpose, 37 class teachers of grades 4 to 6 from seven primary schools were the respondents. Teachers' levels of identification of student characteristics were categorized using a questionnaire on the application of student characteristics, deepened by a study of learning planning documents, to provide an overview of the types of student characteristics that primary school teachers consider most important. The results show that the level of student intelligence is the type of student characteristic most considered by primary school teachers. By comparing student achievement scores, it can be seen that learning developed with attention to students' intelligence levels produces better results than learning that does not pay attention to them.

INTRODUCTION

Student characteristics are one of the instructional condition variables, occupying an important position in leading to learning achievement. The kinds of student characteristics are intelligence levels, prior knowledge, cognitive styles, learning styles, motivation, and socio-culture [1]-[3]. Within the instructional structure, they connect with the instructional management strategy. Management strategy comprises elemental methods for making decisions about which organizational and delivery strategy components to use during the instructional process. These include such considerations as how to individualize instruction and when to schedule instructional resources [3], such as selecting the instructional components, organizing the delivery of content, developing instructional strategies and media, delivering material and managing motivation, and managing instructional activities [4]. All of this must be considered by teachers before they make instructional plans that suit student characteristics, for equitable and meaningful learning. Because there are many types of student characteristics, teachers, especially in primary schools, often raise questions such as: Which one is more important? Should everything be identified? Can I choose one or more that are more urgent to pay attention to? Which is most influential in improving learning outcomes? Ideally, all types of student characteristics must be considered by teachers, because the variety of student characteristics influences the selection of the other instructional method variables [5]. However, various limitations prevent teachers from studying them in their entirety and depth. As a result, an effort is needed to guide primary school teachers in knowing which criteria are most appropriate for their class conditions. This paper begins with a literature review in Section 2, followed by a review of the methodology used for conducting this study in Section 3. Section 4 presents the results of the study, and the last part of the paper includes the conclusion and recommendations of the study.

STUDENT CHARACTERISTICS

Student characteristics are personal qualities of students that characterize and indicate their condition.
These individual characteristics are believed to be special abilities that influence the degree of success in following a program [6]. Various information about student characteristics is needed by other learning components such as material, goals, media, learning strategies, and evaluation. As described first, the kinds of student characteristics are intelligence levels, prior knowledge, cognitive styles, learning styles, motivation, and socio-culture [1]-[3]. Each type of student characteristic has a different influence on determining the right instructional method, as clearly described in Figure 1.

Figure 1. The influence of student characteristics on instructional method.

Analyzing the characteristics of students can help in setting appropriate learning management strategies. Analysis of student characteristics can be done based on the dimensions of student characteristics, first adjusted to the objectives and characteristics of the learning content. For example, an approach can be based on the basic characteristics of students (applied skills and abilities, experience in using technology) and the learning context (personal identity, learning style, cognitive style, motivation) in formal and informal learning situations [2]. Student characteristics are a key interaction between students and learning that affects the effectiveness of learning [7]. Therefore, the teacher must recognize and consider the characteristics possessed by students proportionally so that learning can succeed well.

Intelligence levels can be described as the ability to solve problems, or to create products, that are valued within one or more cultural settings [8]. These are biopsychological potentials. Whether and in what respects an individual may be deemed intelligent is a product of his genetic heritage and his psychological properties, ranging from his cognitive powers to his personality dispositions [1]. The level of intelligence of students can affect students' confidence in facing difficult material [7]. All students have various intelligences, or ways of receiving and expressing their knowledge. These intelligences can serve as a powerful springboard for creative curricular decision making and instructional planning [9].

The prior knowledge or mental models that learners bring to any information, therefore, can be vital to providing them with strategies and heuristics for managing the processing event [2].
For example, the approach based on the basic characteristics of students (applying and abilities, experience in using technology), and learning context (personal identity, learning style, cognitive style, motivation) in formal and informal learning situations characteristics as a key interaction between students and learning that will affect the effectiveness herefore the teacher must recognize and consider the characteristics possessed by students proportionally so that learning can succeed well. can be described as the ability to solve problems, or to create products, that are valued within one or more cultural settings [8]. These are biopsychological potential. in what respects an individual may be deemed intelligent is a product of his genetic heritage and his psychological properties, ranging from his cognitive powers to his personality dispositions The level of intelligence of students can affect students' confidence in facing difficult material intelligence, or ways of receiving and expressing their knowledge. can serve as a powerful springboard for creative curricular decision mak The prior knowledge or mental models that learners bring to any information, therefore, can be vital to providing them with strategies and heuristics for managing the processing event International Journal on Integrating Technology in Education (IJITE) Vol.7, No.3, September 2018 30 erature review in Section 2, then followed by a review of the methodology used for conducting this study in Section 3. Section 4 presents the result of the study and the last part of the paper includes the conclusion and recommendation of the study. Student characteristics is a personal quality of students who become characteristic and indicate the condition of students. This individual characteristic is believed to be a special ability that Various information about student ls, media, learning As describe first, the kind of student characteristics are intelligence levels, prior knowledge, ach type of student method as clearly Analyzing the characteristics of students can help in setting appropriate learning management student characteristics can be done based on the dimensions of the characteristics of students who first adjusted to the objectives and characteristics of learning content. For example, the approach based on the basic characteristics of students (applying skills and abilities, experience in using technology), and learning context (personal identity, learning [2]. Student characteristics as a key interaction between students and learning that will affect the effectiveness herefore the teacher must recognize and consider the characteristics possessed by to create products, that are biopsychological potential. Whether and genetic heritage and his psychological properties, ranging from his cognitive powers to his personality dispositions [1]. difficult material [7]. ng and expressing their knowledge. reative curricular decision making learners bring to any information, therefore, can be vital to providing them with strategies and heuristics for managing the processing event [2]. It's important in increasing the meaningfulness of learning because it facilitates internal processes that take place within the students. Teachers often do not pay attention to how new knowledge is treated not related to prior knowledge. 
For example dividing facts and procedures as a separate material, emphasizing memorization and not understanding the material, and teaching managing new knowledge without understanding how the material reflects the student's personal goals or strategies in learning. Whereas linking with different types of abilities can make new knowledge meaningful to students to facilitate acquisition, organization, and re-disclosure of new knowledge that has been obtained. For the primary students, strengthening prior knowledge such as language, literacy, and self-regulation in various contexts proved to be most effective in increasing student success [10]. There are seven types of prior knowledge that could be used to facilitate the acquisition, organization, and re-disclosure of new knowledge [11]. Arbitrarily meaningful knowledge as a place to link memorized knowledge to facilitate retention. Analogic knowledge that associates new knowledge with other similar knowledge, but is beyond the content being discussed. Superordinate knowledge that can serve as a cantaloupe for new knowledge. Coordinate knowledge that can fulfill its function as associative and/or comparative knowledge. Subordinate knowledge that functions to concretize new knowledge or also provide examples. Experiential knowledge which also functions to concretize and provide examples for new knowledge. Cognitive strategy that provides ways of processing new knowledge, ranging from encoding, storage, to re-disclosure of knowledge that has been stored in memory. Cognitive style refers to an individual's way of processing information [12] and reflect human attributes or processes that tend to be fixed over time [2]. It relates to things such as attention span, memory, mental procedures, and intellectual skills that determine how students feel, remember, think, solve problems, organize and represent information in their brain. Examples of cognitive styles include focussing and scanning, reflective and impulsive, serialist and wholist, and so on. Teachers must pay attention to the cognitive style of students in terms of choosing the right instructional strategy. The information of cognitive styles helping teachers to support media or learning experiences that match with the type of students cognitive styles so that student feel comfortable and equal with instructional activities on progress [5]. Learning style is defined as the characteristics, strengths, and tendencies that a person has towards how to receive and process information [13], and is related to the way a person begins to concentrate on processing and storing new and difficult information [14]. Learning styles are easier to alter over time with instruction, motivation, trial and error, and experience [2]. There are several models of learning style. Some of them are Kolb learning styles, Honey & Mumford models, V-A-R-K models, Myers-Briggs Type Indicator (MBTI) models, and Felder-Silverman models. Knowing student learning styles useful not only for the teachers in addition to design instructional with excellent quality but also for the students to more understanding their strength for learning optimally [5]. The same material can be accessed or understood in a different way by some people. So this means that it requires many ways so that the same information can be conveyed and understood by various people with different learning styles. 
Many of the methods referred to here are certainly related to the media in which the information is published and delivered through the right channels, as well as the choice of instructional strategies as a place to manage it. As an example of material regarding a procedure that contains steps for operating a tool. Students with visual learning styles will more easily understand the material if it is presented through component drawings on the tool and the flow that must be done in its operation. Conversely, for students with verbal learning styles, it will be easier to understand the material in the form of oral descriptions or written texts that describe in detail the operating procedures of the tool. So that teachers must match learning strategies and learning tasks that are appropriate to the learning styles of students [15]. Student motivation on instructional is vital in ensuring that the learner persists adequately to successfully complete the task and acquire skills or content knowledge. Motivation to learn is identified by the student's choice of behavior, latency of the behavior, and the intensity and persistence of engagement in the learning task [16]. Motivation is not only limited to learning competence but also depends on the interaction of students with the social environment. This interaction is internalized in students from time to time and serves as a guide to behaving [1]. Therefore, identifying motivation should not only be related to the learning material but more to the overall attitudes and behavior of students towards their readiness to understand themselves and their environment in real terms. Motivation has an impact on students' self-efficacy to succeed in learning. This gives a strong and positive influence on academic achievement [17]. When motivated enough, students tend to be more eager to face tasks, survive in difficult situations, and enjoy their achievements. There is a strong relationship between intrinsic motivation and student learning achievement. In addition, the learning context greatly influences student motivation. Organizing challenging learning materials, giving choices and learning autonomy have a positive impact on student motivation [16]. Socio-cultural conditions relate to groups or individuals associated with society. Examples of social characteristics are group structure, individual position in a society, social skills, and so on. Research indicates that the effect of socio-cultural conditions such as poverty, race/ethnicity, and community are distal factors associated with children's school success [10]. Students with good socio-cultural conditions can influence the success of learning in the long term [18]. A sociocultural perspective must link theory with practice in instructional design that rests on the involvement of teachers and students in the real world environment with authentic practice communities [2]. Sociocultural instructional design views learning as collaborative and generative, involving enculturation into groups of students who are driven by authentic, ongoing, mediated, and built problems. Learning that takes into account sociocultural conditions emphasizes student-teacher collaboration so as to enable differences in individual learners, and the role of students' privileges in developing learning processes that are participatory, interactive and sustainable. 
Learning is not de-contextual, so tools are not seen as a delivery mechanism or as a tool that must be learned separately from the content of the material, but as a technology to add meaning to the process of learning and community development. PRIMARY SCHOOL PROGRAMS Primary school is the most basic level of formal education in Indonesia. Primary school is part of the basic education program launched by the government covering 6 years of primary school and 3 years of junior high school with a total of 9 years [19]. Primary schools are taken within 6 years, ranging from grades 1 to 6 with students ranging in age from 7 to 12 years. The programs of primary school are focused on knowledge acquisition and learning process in general. Primary schools curriculum adheres to: (1) learning done by the teacher (taught curriculum) in the process developed in the form of learning activities in school, class, and community; and (2) students' direct learning experience (learned-curriculum) in accordance with the background, characteristics, and initial abilities of students. Individual direct learning experiences of students become learning outcomes for themselves while learning outcomes of all students become curriculum results [20]. Teachers who teach in primary school consist of class teachers and subject teachers. Class teachers teach language, math, natural science, social science, and cultural arts and crafts. The subject teachers teach religious education and physical education, sports and healthy subjects. Primary school teachers must have the ability to working with people, organizing the classroom, planning the curriculum, managing behavior, assessing and record keeping, thinking about education, and becoming a professional [21]. At the same time must be able to manage the tension created by the need to support children's learning in the right way while responding to demands for accountability [22]. With the many demands that must be met by primary school teachers often cause stress and boredom that can affect the effectiveness of classroom learning. The teacher can no longer apply professionally, and just do the task as a teacher without feeling it. Therefore the teacher must be aware of his role which is vital to the student's success in learning. RELATED WORKS Recent research on student characteristics that influence learning outcomes has been widely discussed. Some of them are presented in Table 1. The research conducted focused only on some types of student characteristics, even though there were many types of student characteristics that could affect student achievement. The results of the study to provide good advice for teachers in addressing various types of characteristics of students they face. However, no one has shown the tendency of teachers to pay more attention to one type of characteristic of students who are considered the most important to be identified considering the various limitations they have, especially for primary school teachers. It would be helpful if there were studies that discussed the trends of types of student characteristics that were most considered by primary school teachers. So that need more study to find out the type of student characteristics that are most considered by primary school teachers and to know their effects on student learning achievement. In this sense, this work is new and significantly different from the previous study done in the field. 
CONTEXT AND PARTICIPANTS Student characteristics is a personal quality of students who become characteristic and indicate the condition of students, which is believed to be a special ability that influences the degree of success in following a program [6]. Each type of student characteristics has a different influence on determining the right instructional method. Analyzing the characteristics of students can help in setting appropriate learning management strategies. Ideally, all types of student characteristics must be considered by the teachers because the variety of student characteristics influence the selection of other instructional method variables [5]. However various limitations causing them not to study in its entirety and depth. This study aims to find out the characteristics of students that most influence learning achievement so that it can help primary school teachers determine priorities according to class conditions. The population used in this study included 37 class teachers in grade 4 -6 from seven primary schools. PROCEDURE This research was conducted from January 2018 -as the beginning of the semester -to March 2018 -as the middle of the semester -to get data about the planning carried out by the teacher and the learning outcomes that have been obtained by students. The teachers were given a questionnaire about the application of student characteristics in learning. Categorize the level of identification of student characteristics conducted by the teacher. Observe instructional planning documents from teachers that identify student characteristics. Categorize the level of application of information on the characteristics of students in the learning activities carried out by the teacher. Comparing students midterm exam to get a comparison of the value of learning outcomes between classes whose learning was developed based on students' characteristic information with classes that were not developed based on student characteristics information. RESEARCH DESIGN Completion of questionnaires on the application of student characteristics in learning must be ascertained the truth by observing instructional planning documents developed by the teacher. Categorizing the level of identification of characteristics of students by the teacher is intended to provide an overview of the types of student characteristics that are considered most important to be done by primary school teachers. Comparison of learning outcomes is intended to show improvement in learning outcomes that are developed based on information on student characteristics. The design of the study as shown in Figure 2. A questionnaire which was adapted from [21], [24] used to find out the application of information on student characteristics in learning done by the teacher. The questionnaire, which was completed by each participant, consisted of four parts: (i) demographic information, (ii) knowledge of the types of students characteristics, (iii) how to identify student characteristics, (iv) development of appropriate strategies according to student characteristics, based on 5-point Likert scale (ranging from very disagree '1' to very agree '5') [25]. Data is presented in the tables form or frequency distributions. With this analysis will be known the tendency of the application of information characteristics of students at very low, low, medium, high, or very high levels. The conversion table of scale 5 can be seen in Table 2. 
Data on students' learning achievement were obtained by questioning the teachers and from the average scores on the midterm exam. Test scores are separated into several categories, as can be seen in Table 3. RESULT OF THE QUESTIONNAIRE Before filling out the questionnaire, the teachers were asked about the types of student characteristics they identify in learning. The questions were as follows: Item 1: Which types of student characteristics do you identify in learning? Write down two or three types that you usually identify. Item 2: Based on your answer to Item 1, choose the one student characteristic that you consider most important to identify. The results for Items 1 and 2 are shown in Tables 4 and 5. The results show that the types of student characteristics identified by primary teachers are level of intelligence, motivation, cognitive style, learning style, prior knowledge, and socio-cultural background, and that the type most considered in learning is the level of intelligence. Next, each teacher was given a questionnaire matching the type of characteristic they considered most, used to gather data on how they identify that characteristic. The 37 respondents comprised 20 teachers whose main consideration was students' level of intelligence, 6 whose main consideration was motivation, 4 cognitive style, 4 learning style, 2 prior knowledge, and 1 socio-cultural background. The results are described as follows: CATEGORIZING THE LEVEL OF IDENTIFICATION OF STUDENT CHARACTERISTICS To complete the data provided by the teachers, the instructional planning documents they designed were examined. The results were converted using a Likert scale; the conversion of Likert scores on a scale of 5 determines the level of identification of student characteristics by the teachers, as can be seen in Table 12. As Table 12 shows, level of intelligence occupies the highest level with a score of 3.65, indicating that primary school teachers pay close attention to students' intelligence levels in the implementation of learning. Motivation ranks second, together with cognitive style, prior knowledge, and learning style, all at the medium level; this means that teachers pay some attention to these types of characteristics in learning, but not in depth. Socio-cultural background ranks last, at a low level, meaning that teachers pay little attention to students' socio-cultural conditions in learning. RESULT OF THE MIDTERM EXAM Although level of intelligence ranks first among the types of student characteristics that teachers consider most important to identify, its effect on student achievement is unknown. To find out whether the learning outcomes of students whose learning takes students' intelligence levels into account differ from those of students whose learning does not, the learning outcomes achieved by the two groups must be compared. For this purpose, a document study was conducted to obtain the scores students achieved on the midterm exam. The results of the comparison are presented in Table 13, which shows that learning outcomes developed from the identification of students' levels of intelligence can be better.
This can be seen from the better scores in all subjects; in Indonesian language and mathematics, the difference in learning outcomes is especially prominent. DISCUSSION AND CONCLUSION Student characteristics are believed to be special abilities that influence the degree of success in following a program [6], and they are a key factor in the interaction between students and learning that affects its effectiveness [7]. Teachers must therefore recognize and weigh the characteristics possessed by students proportionally so that learning can succeed. Ideally, all types of student characteristics should be considered, because their variety influences the selection of other instructional method variables [5]; however, various limitations prevent teachers from studying them fully and in depth. This study aimed to find out which type of student characteristic is most considered by primary school teachers and to determine its effect on student learning achievement. The findings are expected to help resolve primary school teachers' confusion about which types of student characteristics matter most in designing and implementing learning. According to the results, the type of student characteristic most considered by teachers is the student's level of intelligence. This is evident both from the majority of primary teachers identifying this type of characteristic and from the use of student-characteristic information in instructional planning at a high level. A study of the learning-planning documents developed by the teachers found that information on students' levels of intelligence was obtained from in-depth observation of students' behavior in solving learning problems and of their speed in understanding the material presented. The teachers use information on students' intelligence levels to classify students into fast, medium, and slow categories, and each category is treated differently so that students feel comfortable in understanding the material. Differential treatment can take the form of enrichment material and additional tasks for students in the fast category, or appropriately organized material for students in the slow category. Fast-category students are also given assignments as peer tutors for their classmates; an explanation given by a friend is sometimes easier to understand, which fosters good learning motivation. The comparison of learning outcomes shows that the learning achievement of students whose learning was developed from the identification of their intelligence levels can be better than that of students whose learning was not. Future studies should focus on the specific learning strategies suited to students' intelligence levels, to help teachers choose the right strategy for their class conditions.
INTESTINAL AND EXTRAINTESTINAL NEOPLASIA IN PATIENTS WITH INFLAMMATORY BOWEL DISEASE IN A TERTIARY CARE HOSPITAL Context The development of neoplasia is an important concern associated with inflammatory bowel disease (IBD), especially colorectal cancer (CRC). Objectives Our aim was to determine the incidence of intestinal and extraintestinal neoplasias among patients with inflammatory bowel disease. Methods Information was retrieved from 1607 patients regarding demographics, disease duration and extent, the temporal relationship between IBD diagnosis and neoplasia, clinical outcomes, and risk factors for neoplasia. Results Crohn's disease (CD) was more frequent among women (P = 0.0018). The incidence of neoplasia was higher in ulcerative colitis (UC) than in CD (P = 0.0003). Eight (0.99%) of the 804 patients with CD developed neoplasia: 4 colorectal cancers, 2 lymphomas, 1 appendiceal carcinoid, and 1 breast cancer. Thirty (3.7%) of the 803 UC patients developed neoplasia: 13 CRC, 2 lymphomas, and 15 extraintestinal tumors. While CRC incidence did not differ between UC and CD (1.7% vs 0.5%; P = 0.2953), the incidence of extraintestinal neoplasias was higher in UC (2.1% vs 0.5%, P = 0.0009). Ten (26.3%) of the 38 patients with neoplasia died. Conclusions CRC incidence was low and similar in both diseases. There was a higher incidence of extraintestinal neoplasia in UC than in CD. Neoplasias in IBD developed at a younger age than expected for the general population. Mortality associated with malignancy is significant, affecting one quarter of the patients with neoplasia. HEADINGS Intestinal neoplasms. Colorectal neoplasms. Inflammatory bowel diseases. Colitis, ulcerative. Crohn's disease. Declared conflict of interest of all authors: none. Colorectal Surgery Division, Gastroenterology Department, Hospital das Clínicas, University of São Paulo Medical School, Brazil. Correspondence: Prof. Fábio Guilherme Campos, Rua Padre João Manoel, 222 - Cj. 120, 01411-001 São Paulo, SP, Brazil. E-mail: fgmcampos@terra.com.br Although the reported risk of CRC is higher when colitis is long-standing and more extensive, a less dramatic figure is extracted from population-based studies. Moreover, it is possible that the biologic therapy and immunosuppressants in current use may affect this incidence(24). When compared to sporadic CRC, IBD-associated tumors usually occur earlier in life and develop as flat lesions in regions exhibiting dysplasia. Mucinous and synchronous lesions are also important characteristics of this population(23). Factors associated with an increased risk of CRC are colitis duration and extension, early onset, family history of CRC, primary sclerosing cholangitis, and histological features such as dysplasia or severity of inflammation. Conversely, surgical treatment, surveillance, and chemoprevention may help reduce this risk(4). The present study was designed to evaluate and compare the risk of intestinal and extraintestinal cancer in patients with UC and Crohn's disease. The characterization of this risk may help identify features relevant to prevention and to the selection of patients at greater risk.
METHODS This study was approved by the scientific-ethics committee of our hospital and was therefore performed in accordance with the 1964 Declaration of Helsinki. Patient data were prospectively collected from the first medical visit onward and constantly updated during follow-up. Between September 1984 and September 2007, 1607 patients were diagnosed with IBD (804 CD and 803 UC). Data regarding diagnosis, treatment, and clinical outcomes were prospectively registered in a protocol containing all information since the first appointment. IBD diagnosis was based on clinical, radiological, endoscopic, and histological exams. Extent of CD was classified as colitis, enteritis, enterocolitis, or exclusively perineal disease. Extent of UC was classified as extensive, left-sided, or proctitis. Information collected from medical records included demographics, IBD duration, extent of disease, the temporal relationship between the diagnosis of IBD and intestinal/extraintestinal neoplasia, clinical outcomes, and potential risk factors for neoplasia (drugs, family history of cancer, smoking, and association with sclerosing cholangitis). Follow-up data were obtained through regular outpatient visits, scheduled every 6 months or more frequently if patients needed guidance regarding medications, symptom control, or clinical or radiological evaluation. During follow-up, patients were routinely evaluated through biochemical tests and colonoscopy according to symptoms (and every year after 10 years from diagnosis). Radiological investigation (ultrasonography, computed tomography, and magnetic resonance) was indicated as necessary. Neoplasia incidence was determined for patients with UC and CD. The comparison of IBD patients with and without neoplasia was established through chi-square and Fisher's exact tests. The test of proportions was used to compare gender prevalence among the groups. All analyses were performed using the Statistical Package for the Social Sciences (SPSS) 13.0 (Chicago, IL). The threshold for statistical significance was predefined as P<0.05.
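As a rough cross-check of the headline comparison in the abstract (neoplasia in 30/803 UC vs 8/804 CD patients), the contingency-table tests named above can be reproduced from the published counts. The following Python sketch is an illustration only, not the authors' SPSS analysis:

```python
# Illustrative recalculation of the UC-vs-CD neoplasia comparison from the
# published counts (30/803 UC vs 8/804 CD); not the authors' SPSS analysis.
from scipy.stats import chi2_contingency, fisher_exact

table = [[30, 803 - 30],   # UC: neoplasia, no neoplasia
         [8, 804 - 8]]     # CD: neoplasia, no neoplasia

chi2, p_chi2, dof, _ = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)

print(f"UC incidence {30/803:.1%} vs CD incidence {8/804:.1%}")
print(f"chi-square P = {p_chi2:.4f}; Fisher exact P = {p_fisher:.4f}")
# Both tests give P < 0.001, in line with the reported P = 0.0003.
```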
RESULTS Most (76.2%) of the 1607 IBD patients were Caucasian. As seen in Tables 1 and 2, the gender distribution revealed a predominance of women (P = 0.001) among both CD (55.2% vs 44.8%) and UC patients (59.5% vs 40.5%), but CRC incidence was similar in both sexes (P = 0.4). Thirty-eight (2.4%) patients developed some type of neoplasia during follow-up. Within this group, only two were receiving immunosuppressants and none had received anti-TNF therapy; the two patients receiving azathioprine were diagnosed with an eye tumor and a right colon cancer, respectively. No anal or small intestine carcinoma was diagnosed in any IBD patient. Two (11.7%) of the 17 patients who developed CRC had multiple lesions along the colon. Patients with CD Eight (0.99%) of the 804 CD patients developed some kind of neoplasia (Table 1). Among the five patients with CD colitis, three had CRC (one of them diagnosed with five synchronous primary colon adenocarcinomas), one had a lymphoma, and one had breast cancer. The three patients with CD enteritis developed an appendiceal carcinoid (one), a sigmoid adenocarcinoma (one), and a lymphoma (one). Data regarding tumor location and intestinal inflammation are presented in Figure 1. The patient with the appendiceal carcinoid was diagnosed simultaneously with CD, just 1 year after the onset of IBD symptoms; the diagnosis was established incidentally during surgery. With the exception of the patient with five tumors, the remaining adenocarcinomas were localized in the distal colon and rectum.
The interval between the onset of symptoms and CRC diagnosis varied from less than 1 year in two patients to more than 11 and 14 years in the others, respectively. The patient diagnosed with CRC after 11 years had initially refused to undergo exams because of pain, so the diagnosis was only established during a laparotomy for intestinal obstruction, at which a simple colostomy was performed for advanced disease. Regarding the lymphomas, one was diagnosed simultaneously with CD and the other developed 2 years after the onset of CD symptoms. The patient with breast cancer developed it 10 years after CD had manifested. The average age at symptom onset in CD patients without neoplasia was 29.5 years, and 30.3 years in the group with neoplasia. When the neoplasia was diagnosed, the patients' mean age was 35.2 years. The average interval between the onset of symptoms and CD diagnosis was 3.0 years (without neoplasia) and 1.6 years (with neoplasia). Follow-up periods for these groups were 10.5 and 6.7 years, respectively. Four (0.5%) patients had sclerosing cholangitis, and one of them developed lymphoma. Smoking was reported by 150 (18.8%) of the 796 patients without neoplasia and by 4 (50%) of the 8 with neoplasia (P = 0.04), suggesting that smoking affects the risk of neoplasia in CD (RR = 2.65; 95% CI 1.31-5.39; see the sketch below). Furthermore, among the group without neoplasia, 128 (16.1%) patients reported other types of neoplasia within their families; the same was true of two (25%) of the eight patients who developed neoplasia. None of the families fulfilled the criteria for any form of hereditary CRC syndrome. In the CD-with-CRC group, one patient died in the same year the diagnosis was made and another was lost to follow-up; the remaining two patients are alive 3 and 7 years after diagnosis, respectively, both with no evidence of metastasis. Clinical features of the CD and UC patients are presented in Table 4. Patients with UC Within this group, 30 (3.7%) patients developed neoplasia, and the majority of tumors (63.3%) occurred in those with extensive colitis (Table 5). Moreover, 11 of the 13 CRC patients (84.6%) presented left colon tumors. Interestingly, one patient developed a rectal tumor 11 years after an ileorectal anastomosis performed for synchronous right and left colon carcinomas. The average age at symptom onset was 34.1 years. The length of symptomatology was 11.4 years (range 0-26 years) to CRC diagnosis and 3.5 years to the diagnosis of other neoplasias (excluding the two cases of lymphoma); the lymphomas were diagnosed 15 years before the UC and 3 years after it, respectively. The 15 extraintestinal tumors comprised 6 breast cancers (one diagnosed 4 years before the UC symptoms), 2 cholangiocarcinomas, and one each at the following sites: brain, lung, prostate, kidney, thyroid, and eye, plus one melanoma. Follow-up and mortality Follow-up varied from 1-22 years for both IBDs, with median periods of 10.5 and 9.6 years for CD and UC patients, respectively. During this period, 13 (1.6%) patients with CD died, but only 1 (12.5%) of the 8 with neoplasia died of it, from a rectal cancer. Among the UC patients, another 13 (1.6%) died, and mortality related to neoplasia occurred in 9/30 (30%).
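The smoking relative risk and confidence interval quoted above can be recomputed directly from the reported counts. Here is a minimal Python sketch (an illustration, not the authors' SPSS output):

```python
# Recomputes the smoking association reported above from the published
# counts: 4/8 smokers among CD patients with neoplasia vs 150/796 among
# those without. Illustrative only.
from math import exp, log, sqrt

a, n1 = 4, 8      # smokers / total, CD patients with neoplasia
c, n2 = 150, 796  # smokers / total, CD patients without neoplasia

rr = (a / n1) / (c / n2)            # ratio of smoking frequencies
se = sqrt(1/a - 1/n1 + 1/c - 1/n2)  # standard error of ln(RR)
lo, hi = (exp(log(rr) + z * se) for z in (-1.96, 1.96))

print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
# Prints RR = 2.65 (95% CI 1.31-5.39), matching the reported values.
```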
From the total group, 10 (0.6%) of the 1607 patients died of intestinal or extraintestinal neoplasia. Among the 26 deaths, 7 (26.9%) were due to colorectal cancer. The patient with CD and rectal cancer died at age 21. The six UC patients with CRC died at an average age of 41.4 years, and the three UC patients with extraintestinal neoplasia died at an average age of 48.7 years. The other patients died of IBD complications (2), surgical complications (13), and causes not related to IBD (1). DISCUSSION As in other countries, the frequency of IBD is increasing in Brazil, especially CD. For this reason, we decided to evaluate the incidence of intestinal and extraintestinal neoplasia in our series of patients from a single institution, the biggest public hospital in South America. Data retrieved from 1607 patients treated between 1984 and 2007 showed a predominance of women (55.2% in CD and 59.2% in UC), although the incidence of neoplasia did not differ between genders. One interesting feature is that patients developed neoplasia earlier than the expected average age for the general population (mean age was 35.2 years for CD). The diagnosis of CD was made earlier (1.6 vs 3.0 years) in the group that developed cancer, probably because their disease was more severe, which could lead them to seek medical attention sooner. On the other hand, the average interval between the onset of CD symptoms and the diagnosis of CRC was small. This may be explained by a possible lack of relation between CD and CRC in these patients. Moreover, CD could be present for a longer time without expressing symptoms, making it difficult to establish the disease's start. Another explanation is that, in these patients, the same genetic alterations that induced CD could enhance the risk of CRC, as an association between familial IBD and genes encoding susceptibility to CRC has been suggested previously (33,35). As stated before, CD patients may be asymptomatic, while in UC there is a correlation between macroscopic colorectal lesions and symptoms. Because of that, the interval between the onset of UC symptoms and the diagnosis of CRC is much longer than that seen in CD patients. Although controversial, cancer risk may vary according to many factors, such as sex, disease location, degeneration in inflamed areas, age at onset of CD, and disease behavior (23). The finding that CRC was more frequently found in the distal colorectum and in diseased areas among our patients is supported by a meta-analysis showing that CRC risk was higher in Crohn's colitis than in ileitis (7). In another study, the relative risk of CRC in patients with CD affecting only the small bowel was similar to that of the general population, while the relative risk increased to 4.6 and 13.3 in patients with ileocolic and colonic disease, respectively (40). We also found that CRC incidence was similar in CD (0.5%) and UC (1.7%) patients. Similarly, Gillen et al. (15) reported an increased risk (18- to 19-fold) in patients with extensive unresected Crohn's colitis and UC, with an absolute cumulative frequency of 7% and 8% after 20 years of symptoms for UC and CD, respectively. In our study, it is not possible to evaluate a potential influence of biological therapy on CRC risk, as patients in the present series did not receive anti-TNF agents.
There is evidence suggesting that colonoscopic surveillance in long-standing disease may reduce CRC risk and its associated mortality (21,39). On the other hand, one should note that at least 13 of our patients developed neoplasia less than 2 years after the onset of IBD symptoms, and 2 other patients presented it even before IBD had manifested clinically. This suggests that inflammation may not always be responsible for the malignization process in these cases, nor in others with extraintestinal neoplasia. In the present study, patients who had been operated on for IBD were not dropped from the cancer risk analysis, for several reasons. Regarding Crohn's disease, even operated patients still carry the risk of developing neoplasia at another location (colon, small bowel, or elsewhere). And although it is obvious that UC patients who have undergone total proctocolectomy will not develop colorectal cancer in the future, there are reported cases of cancer appearing within the ileal pouch or at the anal transitional zone (17). Furthermore, we evaluated the incidence of both intestinal and extraintestinal neoplasia, and the latter is not associated with cancer developing in the colon or small bowel during follow-up. After a median follow-up of 9.6 years, the incidence of CRC among our UC patients (13 out of 803 = 1.6%) was similar to that found by others (10). In a meta-analysis of 194 studies, Eaden et al. (10) found that the cumulative probability of cancer for all patients with UC was 2% at 10 years, 8% at 20, and 18% at 30 years, regardless of disease extent; the overall prevalence of CRC in any patient was 3.7%. Over the last 30 years, IBD treatment has changed considerably. Before 1980, UC used to be treated clinically for as long as possible, and surgical resection almost always led to a terminal and definitive ileostomy. This decision was frequently postponed because of the inherent risks of major abdominal surgery and its long-term morbidity affecting quality of life. But the introduction of restorative ileal pouch-anal anastomosis changed the perspective on surgical indication in UC patients: patients refractory to medical treatment and with extensive colitis were operated on sooner, and consequently this population should not develop CRC in the future. In this context, it is easy to see that the available management options may also influence CRC risk in this population. Meanwhile, the initial enthusiasm for restorative proctocolectomy decreased over time because of the incidence of pouchitis and other complications during long-term follow-up (38). Once again, a shift back toward medical treatment has increased the use of immunosuppressants and biologic therapy in recent years. Theoretically, this decrease in surgical treatment may be responsible for a future increase in neoplasia incidence over the next 10 years. On the other hand, the use of salicylates has been associated with a reduced risk of colonic cancer (5). Askling et al. (1) demonstrated that IBD patients with a family history of colon cancer have a higher risk of CRC, and nearly 10% of all cases occurred in patients with affected relatives. However, this correlation was not significant among our IBD patients. Even so, we believe they deserve close follow-up to verify whether they develop CRC in the future. Sclerosing cholangitis is associated with a higher number of hepatic and extrahepatic neoplasias (3,26,22), and it has been suggested that these patients should undergo surveillance
colonoscopy every year regardless of disease duration (37). In our series, the numbers are too small to draw any conclusion. The same difficulty was faced by Bernstein et al. (4), who reported this diagnosis in only 5 (6.4%) of 78 IBD patients. The findings in our series do not support an association of IBD with lymphoma. One patient developed lymphoma 15 years before the onset of IBD symptoms, one simultaneously, and two others were diagnosed 2 and 3 years later. Two of the largest population-based studies did not show a significantly elevated risk (2,27). In a retrospective study, Bernstein et al. (4) demonstrated an increased risk of lymphoma only in male patients with CD, and it was not related to immunotherapy; the fact that our four patients were all males is in accordance with their results. There is a theoretical fear that prolonged use of azathioprine or its derivatives could induce cancer or lymphoma. In this context, Fraser et al. (12) observed patients receiving azathioprine for 27 months and, after a follow-up of 6.9 years, concluded that the cancer incidence was similar to that observed in patients who did not receive the drug. Loftus et al. (28) also demonstrated that the absolute risk of lymphoma remains quite small (0.01% per person-year). None of our lymphoma patients received immunosuppressive drugs or biologic therapy. Although the same suspicion has been raised about biological therapy, this issue is difficult to evaluate because the frequency of lymphoma is very low. It is possible that the increased severity or activity of IBD in referral-based studies may increase this risk regardless of the treatment option (19). In any case, an increased number of lymphomas was reported by Brown et al. (6) in patients with rheumatoid arthritis. But so far it is impossible to assign a causative effect to infliximab therapy, mainly because the potential confounding effects of IBD are difficult to separate and may be synergistic (19). The association of carcinoid tumors with IBD is rare and controversial (25). There have been reports of carcinoid tumors in areas without inflammation in CD patients, leading the authors to suggest that the development of carcinoid tumors may be secondary to distant proinflammatory mediators (42). In an interesting review, Freeman (14) found 3 cases of appendiceal carcinomas among 441 CD patients who underwent surgery, all in female patients. Others believe there is no evidence to substantiate a direct association between IBD and carcinoid tumors, since the frequency was similar to that of patients without IBD (17). Some data regarding life expectancy in IBD were provided by Jess et al. (18), who accumulated 6,569 person-years of observation over an average interval of 17 years. They noted 84 deaths among 374 patients against an expected number of 67, and reported that mortality was higher among women, mainly after 21 years from diagnosis. Twenty-two deaths were related to IBD and seven to gastrointestinal cancer in a nationwide population-based study in Denmark (24). A study of the impact of CD on CRC prognosis revealed that 100 patients with CRC-associated CD had a poorer prognosis than 71,438 CRC patients without CD. The effect of CD on CRC survival was more pronounced during the first year after cancer diagnosis, in younger patients, in men, and in patients whose tumors had regional spread.
Poor survival in patients with CD and gastrointestinal cancer might be explained by delayed diagnosis, as CD symptoms may mask those of the neoplasia (24). This is probably what happened to the patient who died in the present study: as she had extensive and painful disease, she refused to undergo examination; despite being adequately treated for IBD, her disease worsened and the diagnosis of an advanced cancer was established. Thus, an unfavorable outcome under treatment should alert the physician to the need to rule out cancer. Our mortality due to IBD and gastrointestinal cancer was higher than in other studies, as we detected mortality rates of 12.5% and 30% among CD and UC patients with associated neoplasia, respectively. Among the 26 deaths, 7 (26.9%) were due to CRC. In other series, Persson et al. (32) observed 15 (5.9%) deaths out of 255 due to CRC. Similarly, Ekbom et al. (11) found mortality due to CRC in 14% (6/43) and due to extraintestinal cancer in 7% (3/43) of IBD patients. On the other hand, others have found similar survival rates for IBD-associated and sporadic CRC (9). Thus, considering the high mortality due to neoplasia, it is important to establish a surveillance program for IBD. In this population, screening at younger ages is necessary, since these patients develop neoplasia earlier than the general population. In this regard, colonoscopy has been shown to decrease mortality in both UC and Crohn's colitis (21,36). In the near future, the potential use of chemopreventive agents and biomarkers of malignancy in IBD patients may improve the effectiveness of cancer detection during long-term follow-up. At this point, it is important to state that our study has some biases. First, our hospital is an important tertiary IBD center in Brazil, so patients with more severe and complicated disease are usually referred here; this group of patients may therefore overestimate the cancer risk. Furthermore, it is well recognized that the medical treatment of IBD has changed over the last two decades, with the use of immunosuppressants and biologic therapy being recent. As aminosalicylates have been suggested to protect IBD patients from cancer (34), our patients have not been exposed to the same drugs for the same length of time, and this condition is probably associated with distinct individual risks for cancer. Additionally, different operative techniques may influence the probability of developing cancer, and surgically treated patients may lead to an underestimate of the true risk of malignancy. At the same time, improvements in early diagnosis and oncological treatment over time may influence comparative survival studies spanning long periods, as these do not reflect the natural evolution of the disease. Concerning the risk of extraintestinal neoplasia in our study, it was greater in UC (2.1% vs 0.5%, P = 0.0009). Among the reports focusing on this specific issue, Bernstein et al. (9) observed an increased rate only for liver and biliary tract malignancies in CD and UC patients; they found that incidence rates of breast, prostate, and respiratory carcinomas were not significantly different from those of the general population.
In a very interesting meta-analysis of 34 studies from the literature (40), the authors evaluated the risk of neoplasia in 60,122 patients with CD and compared the relative risks of small bowel cancer (RR 28.4), colorectal cancer (RR 2.4), extraintestinal cancers (RR 1.27), and lymphoma (RR 1.42) with the baseline population. They also recommended that patients with extensive colonic disease presenting from a young age undergo endoscopic surveillance. Moreover, their study found that small bowel and CRC risks were higher in North America and the United Kingdom than in Scandinavian countries. Similarly, Fornaro et al. (13) identified a greater risk of gastrointestinal, extraintestinal, and hemopoietic cancers across 35 publications, reinforcing the importance of identifying groups of CD patients exposed to risk factors in order to establish correct diagnostic and therapeutic approaches. The role of environmental factors such as smoking in IBD pathogenesis has attracted some attention recently. Compared with non-smoking, this habit has been associated with deleterious effects such as a worse clinical course, higher therapeutic requirements, and disease-related complications, mainly in patients with ileal disease (30). Tobacco consumption may even influence surgical recurrence and disease progression toward fistulizing or stricturing forms. However, the underlying mechanisms are certainly complex and still under investigation; smoking may alter gene expression and inflammatory profiles in the colonic mucosa and other organs (20). Within a disease that may exhibit different extraintestinal manifestations and may involve distinct parts of the upper gastrointestinal tract, smoking habits may also influence the risk of extraintestinal cancer. In a systematic literature review (1966-2009), Pedersen et al. (31) CONCLUSIONS CRC incidence was low and similar in both inflammatory bowel diseases. There was a higher incidence of extraintestinal neoplasia in UC than in CD. Neoplasias in IBD developed at a younger age than expected for the general population. Mortality associated with malignancy is significant, affecting one quarter of the patients with neoplasia. TABLE 1. Descriptive data by gender for patients with CD and CD-associated neoplasia (colorectal cancer, lymphoma, and other tumors). TABLE 5. Extent of UC in patients with and without neoplasia (colorectal cancer, lymphoma, and other tumors). UC = ulcerative colitis; NEO = neoplasia; CRC = colorectal cancer; LY = lymphoma; OT = other tumors.
Incorporating Inductions and Game Semantics into Logic Programming Inductions and game semantics are two useful extensions to traditional logic programming. To be specific, inductions can capture a wider class of provable formulas in logic programming, and adopting game semantics can make logic programming more interactive. In this paper, we propose an execution model for a logic language with these features. This execution model follows closely the reasoning process in real life. Introduction Fixed-point definitions, inductions, and game semantics are all useful extensions to the theory of logic programming. In this paper, we propose an execution model that combines these three concepts. First, logic programming with fixed-point definitions has been studied by several researchers [6,10]. In this setting, clauses of the form A △= B, called definition clauses, are used to provide least fixed-point definitions of atoms. We assume that a set D of such definition clauses, which we call a program, has been fixed. The following definition-right rule, a variant of the one used in LINC [10], is used in this paper as an inference rule that introduces atomic formulas on the right. This rule is similar to backchaining in Prolog, with the difference that a current answer substitution σ (also called a run) is maintained and applied to formulas lazily here. The definition-left rule represents a case analysis in reasoning. The nat-left rule corresponds to an induction in reasoning. This rule is a well-known induction rule [6] and is used to prove a goal G for all natural numbers using only trivial inductions. As we shall see later, even simple inductions make their implementation difficult. The operational semantics of these languages [6] is typically based on intuitionistic provability. In an operational semantics based on provability, solving the universally quantified goal ∀xD from a definition D simply terminates with a success if it is provable. In this paper, we make the above operational semantics more "interactive" by adopting the game semantics in [2,3]. That is, our approach involves a modification of the operational semantics to allow for more active participation from the user. Solving ∀xD from a program D now has the following two-step operational semantics: As a particular example, consider a goal task ∀x(nat(x) ⊃ ∃y fact(x, y)). To prove that this goal is valid, we need to use induction. Most theorem provers simply terminate with a success, as it is solvable. However, in our context, execution requires more. To be specific, execution proceeds as follows: the system requests the user to select a particular number for x. After the number, say 5, is selected, the system returns y = 120. As seen from the example above, universally quantified goals in intuitionistic logic can be used to model the read predicate in Prolog. In this paper we present the syntax and semantics of this language, called Prolog^{Ind,G}. The remainder of this paper is structured as follows. We describe Prolog^{Ind,G} in the next section. Section 3 describes the new semantics. Section 4 concludes the paper. An Overview of Prolog^{Ind,G} Our language is a variant of the level 0/1 prover in [10], extended with simple inductions; we therefore closely follow the presentation in [10]. We assume that a program, a set of definition clauses D, is given. We have two kinds of goals, given by the G- and D-formulas below, in which A represents an atomic formula.
The formulas in this language are divided into level-0 goals, given by G above, and level-1 goals, given by D. We assume that atoms are partitioned into level-0 atoms and level-1 atoms. Goal formulas can be level-0 or level-1 formulas, and in a definition A △= B, A and B can be level-0 or level-1 formulas, provided that level(A) ≥ level(B). Proving level-0 and level-1 formulas is similar to proving goal formulas in Prolog. However, there are some major differences: • When the level-1 prover meets an implication G ⊃ D where G is not nat(x), it attempts to solve G (in level-0 mode). If G is solvable with all the possible answer substitutions Σ_1, . . . , Σ_n, then the level-1 prover checks that, for every substitution Σ_i, DΣ_i holds. If level-0 proving finitely fails, the implication is proved. • When the level-1 prover meets an implication nat(x) ⊃ G, the choices for x can be infinite. Therefore the machine needs to prove G using induction (in induction mode). In induction mode, the machine attempts to decompose the induction hypothesis G(x/n) (in level-0 submode) into a set of atomic formulas A. Then it attempts to solve G(x/n + 1) (in level-1 submode) relative to A. If G(x/n + 1) is solvable with respect to G(x/n) with a (partial) answer substitution ∆_n, then the machine concludes that G(x/k) holds with a (total) answer substitution ∆_k . . . ∆_0 (i.e., obtained by composing answer substitutions) for each natural number k. We will present the standard operational semantics for this language as inference rules [1]. Below, the notation G : 𝒢 denotes {G} ∪ 𝒢, where 𝒢 is a set of G-formulas. Note that execution alternates between two phases: the left-rules phase and the right-rules phase. In this fragment, all the left rules (excluding defL) are invertible, and therefore the left rules (excluding defL) take precedence over the right rules. Note that our semantics is a lazy version of the semantics of the level 0/1 prover, in the sense that an answer substitution is applied as lazily as possible. Below, the proof procedure for some formula returns a final run Σ in normal mode and a final run ∆ in induction mode. It is not always possible to obtain the final run due to the presence of induction; in such a case, we assume that the machine returns a Failure. Definition 1. Let σ, δ be answer substitutions, let G, D be goals, and let 𝒢 be a set of G-formulas. Then the task of • proving D from ∅ (empty premise) with respect to σ, D, returning a total run Σ - pv(l_1, σ, ∅, D, Σ) - % in level 1, • proving D from G : 𝒢 with respect to σ, D, returning a total run Σ - pv(l_0, σ, G : 𝒢, D, Σ) - % in level 0, • proving G from G : 𝒢 with respect to σ, δ, D, returning a partial run ∆ - pv(i_0, σ, δ, G : 𝒢, G, ∆) - % induction mode, level 0 • proving G from G : 𝒢 with respect to σ, δ, D, returning a partial run ∆ - pv(i_1, σ, δ, G : 𝒢, G, ∆) - % induction mode, level 1 - is defined as follows: (1) pv(l_0, σ, ⊥ : 𝒢 ⊢ D, σ). % This is a success. Here, the answer substitution ∆′_0 is identical to ∆_0, but locations of the form loc(x) in ∆′_0 are adjusted to new locations properly. Similarly for {(x, t)}{(y, t)}, where t is a term. Note that we assume that loc(x) represents a unique location in the sequent. % Below is the description of the level-1 prover (14) pv(l_1, σ, ∅ ⊢ ⊤, σ). % solving a true goal The following is a proof tree (from the bottom up) of the example given in Section 1. Note that a proof tree is represented as a list.
Now, a proof tree of a proof formula is a list of tuples of the form (E, Σ, Ch), where E is a proof formula, Σ is a final run for E, and Ch is a list of the form i_1 :: . . . :: i_n :: nil, where each i_k is the address of E's kth child (actually, the distance to E's kth child in the proof tree). % base case An Alternative Operational Semantics Adding game semantics requires some changes to the previous execution model. To be precise, our new execution model, adapted from [2], solves the goal relative to the program using the proof tree built during proof search. Execution proceeds in two different phases: the normal phase and the induction phase. In the normal phase, execution simply follows the proof tree, because the proof tree encodes all the possible total runs. In the induction phase, things are more complicated. Note that the proof tree in induction mode encodes only a partial run (from the i-th inductive step to the (i+1)-th inductive step). Therefore, a total run must be obtained by composing all the partial runs, not read off from the proof tree alone. In addition, to deal with universally quantified goals properly, the execution needs to maintain an input substitution F of the form {y_0/c_0, . . . , y_n/c_n}, where each y_i is a variable introduced by a universally quantified goal in the proof phase and each c_i is a user input during the execution phase. Definition 2. Let L be a fixed proof tree, let i be an index into the proof tree, and let F be an input substitution. In addition, let σ be an answer substitution and let ∆ be an answer substitution (obtained by composing induction steps). Then executing L_i (the i-th element of L) with F in the normal phase, written ex(i, F), and executing G with σ, ∆, F in the induction phase, written ex(ind, σ, ∆, ∅ ⊢ G, F), are defined as follows: nil). % no child. This is a success. Initially, σ and F are empty substitutions. In the above, ∆_total = ((∆|(j, k−1) . . . ∆|(j, 0) Σ_B))|(−k+1) is used to correctly obtain a total run for G. To be precise, the notation ∆|(j, i) is used • to rename each variable w_r to w_{r+im}, and • to replace j with i, where m is the number of existentially quantified variables in G. Thus the composition ∆|(j, k−1) . . . ∆|(j, 0) Σ_B contains all the answer substitutions obtained in inductive steps up to the number k; it therefore contains the answer substitutions for km variables. Then, to produce correct answers in solving G, we must undo the renaming via |(−k+1), deleting unnecessary answer substitutions. Note that each ∆|(j, i) may contain location variables of the form loc(x), and we assume that loc(x) is adjusted properly in obtaining ∆_total. The following is an execution sequence of the goal ∀x(nat(x) ⊃ ∃y fib(x, y)) using the proof tree above. We assume that the user chooses 3 for x. Note that the last component represents F.
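The execution sequence itself is not reproduced above, but the intended user interaction can be sketched concretely. The following Python sketch models only the game-semantic reading of ∀x(nat(x) ⊃ ∃y fib(x, y)), in which the user instantiates the universally quantified x and the system exhibits a witness y; it is not an implementation of the pv/ex machinery, and the 0-indexed Fibonacci convention is an assumption.

```python
# Minimal sketch of the interaction for forall x (nat(x) -> exists y fib(x, y)).
# Models the game-semantic reading only (user picks x, system returns a
# witness y), not the pv/ex proof-search machinery defined above.
def fib(n: int) -> int:
    """Witness function for 'exists y. fib(n, y)' (0-indexed convention)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def execute_goal() -> None:
    # The universally quantified goal triggers a request to the user,
    # mirroring the read-predicate behaviour noted in the introduction.
    x = int(input("choose a natural number x: "))
    if x < 0:
        print("nat(x) fails, so the implication holds vacuously")
        return
    print(f"y = {fib(x)}")

if __name__ == "__main__":
    execute_goal()  # choosing x = 3 yields y = 2 under this convention
```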
Identification of a DNA methylation-independent imprinting control region at the Arabidopsis MEDEA locus Genomic imprinting is exclusive to mammals and seed plants and refers to parent-of-origin-dependent, differential transcription. As previously shown in mammals, studies in Arabidopsis have implicated DNA methylation as an important hallmark of imprinting. The current model suggests that maternally expressed imprinted genes, such as MEDEA (MEA), are activated by the DNA glycosylase DEMETER (DME), which removes DNA methylation established by the DNA methyltransferase MET1. We report the systematic functional dissection of the MEA cis-regulatory region, resulting in the identification of a 200-bp fragment that is necessary and sufficient to mediate MEA activation and imprinted expression, thus containing the imprinting control region (ICR). Notably, imprinted MEA expression mediated by this ICR is independent of DME and MET1, consistent with the lack of any significant DNA methylation in this region. This is the first example of an ICR without differential DNA methylation, suggesting that factors other than DME and MET1 are required for imprinting at the MEA locus. Genomic imprinting is a form of epigenetic gene regulation that leads to the differential expression of an allele according to its parent of origin. Its discovery dates back to 1970, when Kermicle (1970) described the maternal effect of the R gene, which controls maize kernel coloration. Later, an analogous phenomenon was identified in mice, when pronuclear transplantation experiments revealed that both maternal and paternal genomes are required to achieve normal development (McGrath and Solter 1984; Surani et al. 1984). Imprinted genes encode diverse proteins that function in growth and cellular proliferation, typically in extraembryonic tissues involved in nourishing the newly developing organism, i.e., the placenta in mammals and the endosperm in plants (Grossniklaus 2005; Feil and Berger 2007). The endosperm results from double fertilization in angiosperms: while one sperm cell fertilizes the egg cell, giving rise to the embryo, the second sperm cell fuses with the central cell, leading to the development of the endosperm (Maheshwari 1950). Genomic imprinting in mammals and seed plants evolved independently, but likely in response to similar selective pressures that maintain a fine balance between the competing interests of the maternal and paternal genomes in resource allocation (Haig and Westoby 1989; Moore and Reik 1996; Messing and Grossniklaus 1999). Although some imprinted plant genes are also expressed in the embryo, most show preferential expression in the triploid endosperm, and some of them are essential for seed development (for review, see Raissig et al. 2011). MEDEA (MEA) and FERTILIZATION-INDEPENDENT SEED2 (FIS2) are maternally expressed genes encoding evolutionarily conserved Polycomb group (PcG) proteins (Grossniklaus et al. 1998; Luo et al. 1999). Plant PcG proteins form several variants of multiprotein complexes that maintain a silenced state of gene expression over many cell divisions through histone modifications. The MEA-FIE (FERTILIZATION-INDEPENDENT ENDOSPERM) complex, which regulates cell proliferation in the endosperm and embryo, contains the PcG proteins MEA, FIS2, FIE, and MULTICOPY SUPPRESSOR OF IRA 1 (MSI1) (Ohad et al. 1999; Luo et al. 2000; Spillane et al. 2000; Köhler et al. 2003a).
Mutations in any of these FIS class genes (mea, fie, fis2, and msi1) lead to maternal-effect seed abortion (for review, see Grossniklaus et al. 2001), which, in the case of MEA and FIS2, is due to their maternal-specific expression (Kinoshita et al. 1999; Vielle-Calzada et al. 1999; Jullien et al. 2006b). To date, PHERES1 (PHE1), which is directly regulated by MEA, represents the only well-studied paternally expressed imprinted gene in plants (Köhler et al. 2003b, 2005). While MEA and FIS2 are required for normal seed development (Grossniklaus et al. 1998; Luo et al. 1999), and PHE1 plays a role in seed abortion in hybrids (Josefsson et al. 2006), two other maternally expressed genes that were reported to be imprinted, FLOWERING WAGENINGEN (FWA) and AGAMOUS-LIKE 36 (AGL36), are not essential for seed development (Kinoshita et al. 2004; Shirzadi et al. 2011). Recently, several studies using allele-specific RNA profiling of the seed transcriptome have described many novel candidate imprinted genes in Arabidopsis (Hsieh et al. 2011; Wolff et al. 2011), rice (Luo et al. 2011), and maize (Waters et al. 2011; Zhang et al. 2011). Yet little is known concerning their role during seed development or their allele-specific regulation. In contrast, the molecular mechanism underlying the maternal monoallelic expression of MEA, FIS2, and FWA, which results from genomic imprinting (for review, see Grossniklaus 2005), has been studied in some detail. Imprinting of all three loci results from a combination of maternal allele activation and paternal allele silencing. DNA and histone methylation function as epigenetic marks to distinguish maternal and paternal alleles, with DNA methylation playing a critical role in the regulation of all three loci (Vielle-Calzada et al. 1999; Luo et al. 2000; Kinoshita et al. 2004; Jullien et al. 2006b). The current model for imprinting control of FIS2 and FWA involves repressive DNA methylation of both parental alleles by the maintenance DNA methyltransferase MET1 throughout vegetative development. The silencing of the paternal MEA allele, however, depends on repressive histone H3 Lys 27 methylation (H3K27me) mediated by a vegetatively acting PcG complex (Jullien et al. 2006a). During male gametogenesis, paternal allele silencing is maintained by MET1 for FIS2 and FWA, but by the PcG protein FIE at the paternal MEA allele, since in MET1-deficient pollen the paternal MEA allele is not derepressed (Gehring et al. 2006; Jullien et al. 2006a,b). In contrast, during female gametogenesis, the DNA glycosylase DEMETER (DME) removes maternal DNA methylation at all three loci, which results in expression of the maternal allele in the central cell and, subsequently, during seed development (Choi et al. 2002; Kinoshita et al. 2004; Gehring et al. 2006; Jullien et al. 2006b). This demethylation process also involves a histone chaperone, illustrating the interplay of DNA methylation and chromatin-level regulation (Ikeda et al. 2011). In addition to the regulation of imprinting shared with the FIS2 and FWA loci, additional mechanisms appear to operate at the MEA locus: MEA is expressed in both the embryo and endosperm, and paternal MEA allele expression has not been detected during early seed development, suggesting that it is imprinted in both fertilization products at these stages, at least in some accessions (Vielle-Calzada et al. 1999; Luo et al. 2000; Spillane et al. 2007; Raissig et al. 2011).
Thus, it is currently unknown how the maternal MEA allele is activated in the embryo in the absence of DME activity, which is thought to be restricted to the central cell (Choi et al. 2002). Nevertheless, maternal MEA allele activation in the central cell by DME has been the main focus of imprinting regulation in Arabidopsis, and possible DME target regions at the MEA locus have been identified: the AtREP2 helitron, CG sites 3 kb and 500 bp upstream of the MEA coding region, and the MEA-intergenic subtelomeric repeat (ISR) (Cao and Jacobsen 2002) downstream from the MEA coding region were shown to be methylated (Xiao et al. 2003). Indeed, DME establishes allele-specific hypomethylation of the maternal MEA allele at the −500-bp region and the MEA-ISR, suggesting that these regions control MEA-imprinted expression via their methylation status (Gehring et al. 2006). However, Arabidopsis accessions lacking the MEA-ISR remain imprinted at the MEA locus (Spillane et al. 2004), and the methylation status of the −500-bp region is not only controlled by DME but varies depending on the accession; i.e., this region is unmethylated in the Landsberg erecta (Ler) accession, despite MEA being imprinted in Ler (Spillane et al. 2004; Gehring et al. 2006; Schoft et al. 2011). Taken together, this challenges DME as the regulator of imprinted MEA expression and raises the question of the actual cis-regulatory element for MEA imprinting. Here we report on a minimal 200-base-pair (bp) fragment from the MEA cis-regulatory region that faithfully recapitulates MEA-like expression and functionally complements the mea mutation. Hence, it contains all of the necessary elements for transcriptional activation and imprinting control. We show that activation by DME is not mediated by this 200-bp fragment, thereby uncoupling maternal activation by DME from the imprinting control region (ICR). Genetic analysis of seed abortion indicated that DME and MET1 are only indirectly involved in the regulation of MEA imprinting. Maternally, dme-induced seed abortion could not be rescued by a functional MEA transgene; paternally, rescue of mea-induced seed abortion by met1 mutant pollen was not linked to a functional paternal MEA allele. As suggested previously (Gehring et al. 2006), allele-specific expression analysis showed that paternal MEA silencing is independent of MET1, consistent with the lack of significant methylation in the MEA-ICR. We propose a new model of MEA imprinting, in which DME and MET1 affect higher-order chromatin structure through the targeting of transposon-related sequences but are not directly involved in the regulation of MEA imprinting. The region previously shown to confer imprinted expression (Spillane et al. 2004) contains the previously unidentified gene At1g02570, which was recently annotated based on expressed sequence tags found in transcription profiling studies and encodes a protein of unknown function (Schmid et al. 2003; Castelli et al. 2004). As it resides between regions implicated in MEA regulation, we analyzed the expression of At1g02570 by RT-PCR before and after fertilization. We found no expression during early seed development, when MEA is expressed (Supplemental Fig. S1), suggesting that this gene does not share regulatory cis-elements with MEA. Using the previously described 4.8pMEATGUS reporter construct, which comprises 3.8 kb of upstream and 1 kb of coding region of the MEA gene (Spillane et al.
2004), successive 5′ deletions were introduced, leading to fragment lengths ranging from 1330 bp to 150 bp of MEA cis-regulatory sequence. We constructed transcriptional fusions to the Escherichia coli uidA gene (pMEA::GUS), encoding β-glucuronidase (GUS), and to MEA genomic DNA (pMEA::MEA) for expression and functional analyses, respectively (Fig. 1B). Several independent primary transformants for each transgene were recovered and scored for MEA-like expression (Supplemental Tables S1, S2). Only transgenic lines containing a single copy of the insertion, as determined by Southern blot analysis, were chosen for experiments investigating MEA imprinting regulation (data not shown). We studied maternal GUS expression from before fertilization until 4 d after pollination (DAP), corresponding to the globular stage of embryo development (Fig. 1C; Supplemental Fig. S2A,C,D). The plant line harboring the 4.8pMEA::GUS transgene was used as a reference, its GUS-staining pattern reflecting MEA expression (Supplemental Fig. S2A). pMEA::GUS transgenes with 1330 bp to 250 bp of MEA cis-regulatory sequences resulted in GUS-staining patterns that were indistinguishable from the MEA-like reference pattern: GUS activity was first detected in gynoecia before fertilization, with the entire embryo sac displaying a strong blue staining. After fertilization, GUS activity was found in the embryo, in the free nuclei in the peripheral endosperm, and at the chalazal cyst region of the endosperm. At 4 DAP, weak GUS activity was detected in globular-stage embryos and at the chalazal pole in the endosperm. Therefore, the minimum element necessary to confer MEA-like expression resides in the 250-bp MEA upstream sequence. Reducing the fragment length further to only 200 bp of the upstream sequence resulted in a slightly different GUS-staining pattern, which extended into the surrounding sporophytic endothelium (Fig. 1D; Supplemental Fig. S2F,G). The altered expression observed with the 200pMEA::GUS indicates that a sporophytic repressor-binding site is located between −250 and −200 bp.

[Figure 1 legend (partial): (A) … (Xiao et al. 2003; Gehring et al. 2006); (stars) hypomethylation of maternal MEA endosperm alleles at 7-9 d after pollination (DAP). (B) The 4.8pMEA::MEA transgene contains 3.8 kb of MEA upstream sequence fused to MEA cDNA and was shown to complement the mea-induced seed abortion phenotype (Makarevich et al. 2006). The 4.8pMEA::GUS transgene was previously described (Spillane et al. 2004) and contains 3.8 kb of MEA upstream sequence plus 1 kb of MEA coding region. The other transgenes consist of 1330-bp to 150-bp MEA promoter sequence fused to MEA genomic DNA (pMEA::MEA) or the bacterial uidA reporter gene (pMEA::GUS). In the Δ1330pMEA::GUS transgene, the region between the −200-bp and −150-bp MEA upstream sequence is deleted. Plus signs [+] indicate positively tested for rescue, staining, or imprinting; minus signs [−] indicate negatively tested for rescue, staining, or imprinting; the plus sign in parentheses [(+)] indicates deviation from MEA-like GUS staining; and empty fields indicate that the corresponding promoter fusion was not tested. (C,D) Expression of a 250pMEA::GUS transgene (C) and a 200pMEA::GUS transgene (D). The transgenes were reciprocally crossed to Ler wild-type plants. Maternal GUS activity is detected with both transgenes before fertilization (BF) and 2 DAP. No paternal GUS activity is detected. For detailed GUS expression analysis, see Supplemental Figure S2. Bar, 50 µm.]
We observed no GUS activity in plants with the 150pMEA::GUS transgene and therefore proposed that the 50-bp fragment, which extends from −200 bp to −150 bp, is required for cis-activation of MEA expression. Indeed, deletion of these 50 bp in the context of the 1330pMEA::GUS transgene resulted in a loss of expression in all independent primary transformants analyzed (Fig. 1B; Supplemental Table S1). The 50-bp fragment alone did not result in any detectable expression when fused to a min35S::GUS transgene (data not shown), indicating that this fragment is necessary but not sufficient for cis-activation of MEA. To test for potential loss of imprinting of the reporter transgenes, we reciprocally crossed plants containing the different pMEA::GUS transgenes and looked for possible paternal pMEA::GUS expression. All reporter transgenes showing MEA-like expression were active only when inherited from the mother.

In order to functionally test the pMEA fragments, we investigated seed abortion in mea/MEA plants transformed with pMEA::MEA transgenes. Heterozygous mea/MEA mutant plants show 50% seed abortion, and all seeds carrying a maternally inherited mea mutation abort irrespective of the paternal contribution (Grossniklaus et al. 1998). We scored seed abortion in transgenic mea/MEA plants to look for complementation of the mea-induced 50% seed abortion phenotype. In all primary transformants except the ones carrying the 150pMEA::MEA transgene, we found rescue of the mea mutant phenotype, illustrated by reduced seed abortion frequencies (Supplemental Table S2). Thus, the 200-bp cis-regulatory fragment is necessary and sufficient for functional expression of pMEA::MEA transgenes, recapitulating the results with the pMEA::GUS transgenes at the functional level. Taken together, our systematic analysis has uncovered a 200-bp minimal fragment of the MEA cis-regulatory region that contains the elements necessary and sufficient for transcriptional activation and imprinting control. An additional element between −250 bp and −200 bp is needed to repress sporophytic expression in the ovule. Thus, we used the 250pMEA::GUS transgene, reflecting MEA-like expression, to investigate MEA imprinting control in combination with allele-specific expression analyses of the endogenous MEA locus.

MEA-ICR sequence elements are found upstream of or downstream from other potentially imprinted loci

We investigated whether sequence elements from the MEA-ICR were also present at other potentially imprinted loci. To this aim, we performed a WU-BLAST analysis (http://www.arabidopsis.org/wublast/index2.jsp) of the entire 250pMEA promoter sequence and of the promoter sequence required for proper MEA-like expression (the 100-bp element between −250 and −150 from the MEA start codon) against 3 kb of upstream and downstream sequences of all TAIR10 loci. We then compared the output (684 loci) with all potentially imprinted genes that were recently reported (Hsieh et al. 2011; McKeown et al. 2011; Wolff et al. 2011). Interestingly, we found that 15 of these recently published imprinted candidate genes do have conserved sequences upstream of or downstream from the respective gene, suggesting that some MEA-ICR sequence elements might be conserved between genes regulated by genomic imprinting (see Supplemental Table S3). A permutation test using 1000 randomized gene samples (n = 684) showed that the probability of finding >14 of the recently described imprinted candidate genes by chance is only P = 0.051.
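A permutation test of this kind is easy to make concrete. The sketch below is a minimal illustration, assuming a list of all TAIR10 gene identifiers and the set of published candidate imprinted genes as inputs; the loader names in the commented usage are hypothetical, and only the counts quoted above (684 hits, an overlap of 15, 1000 permutations) come from the text.

```python
import random

def permutation_p(all_genes, n_hits, candidates, observed_overlap, n_perm=1000, seed=1):
    """Estimate the chance of drawing >= observed_overlap candidate imprinted
    genes when n_hits genes are sampled at random from the whole gene set."""
    rng = random.Random(seed)
    extreme = 0
    for _ in range(n_perm):
        sample = rng.sample(all_genes, n_hits)      # random gene set of the same size
        overlap = sum(1 for gene in sample if gene in candidates)
        if overlap >= observed_overlap:             # ">14" means an overlap of 15 or more
            extreme += 1
    return extreme / n_perm

# Hypothetical usage; gene lists would come from TAIR10 and the cited studies:
# p = permutation_p(tair10_ids, n_hits=684, candidates=set(candidate_ids), observed_overlap=15)
```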
In addition, we performed a motif analysis of the MEA-ICR and the putative regulatory sequences of the six imprinted candidate genes with the highest similarity scores (i.e., the smallest P-values) using the PLACE database (Higo et al. 1999). Interestingly, we found that GT1-binding sites and DOF-binding elements, both of which are abundant in the MEA-ICR (nine and five sites, respectively), were also present in the putative regulatory sequences of all six imprinted candidate genes analyzed (Supplemental Table S4). Surprisingly, a pollen-associated binding element, which we speculate might be involved in recruiting repressors to the paternal allele in the male gametophyte, was also found in all of these sequences. An overview of the identified motifs, including other expected cis-regulatory elements such as TATABOX5, GATABOX, and a poly-A signal box, is shown in Supplemental Table S4. However, none of these six candidate imprinted genes was analyzed for regulation by MET1 or DME, such that we have no information on their dependence on DNA methylation. Expression of three of the candidates was analyzed in a fie mutant background (At3g19160, At2g18880, and At4g29650), but disruption of PRC2 (Polycomb-repressive complex 2) had no effect on their expression. Taken together, these bioinformatic analyses showed that some sequence elements of the MEA-ICR are conserved in putative regulatory sequences of other imprinted loci. Yet these motifs constitute only a small part of the conserved region, as most of the similarity is based on the high A+T content of the MEA-ICR (70%). Nevertheless, the imprinted candidate genes with the highest similarity do share common motifs, such as GT1-binding sites and DOF-binding elements, possibly reflecting conserved regulatory mechanisms.

The MEA-ICR mediates activation of maternal MEA expression independent of DME

Allele-specific demethylation of the maternal MEA allele by DME in the central cell was proposed to selectively activate the maternal MEA allele, whereas the paternal MEA allele remains silenced (Gehring et al. 2006). However, the 250pMEA::GUS transgenes are maternally active and paternally silent even though they lack the −500-bp region targeted by DME-dependent demethylation. To elucidate the impact of DME on MEA-imprinted expression, we analyzed the maternal activity of two pMEA::GUS transgenes in the dme-4 mutant background (Guitton et al. 2004). We crossed plants homozygous for a single locus of either the 4.8pMEA::GUS or 250pMEA::GUS transgene to dme-4/DME plants and analyzed the progeny for maternal GUS activity. All F1 plants are hemizygous for the pMEA::GUS transgene, and half of them are dme-4/DME or DME/DME, respectively. F1 plants segregating the dme-4 mutation were emasculated and analyzed for their GUS-staining pattern before fertilization. In DME wild-type plants hemizygous for either 250pMEA::GUS or 4.8pMEA::GUS, we observed 50% and 47% GUS staining in unfertilized ovules, respectively, consistent with Mendelian inheritance of the pMEA::GUS transgenes by one-half of the female gametophytes (Fig. 2A,B). In plants hemizygous for the pMEA::GUS transgene and heterozygous dme-4/DME, one-fourth of the ovules are predicted to inherit both the wild-type DME allele and the pMEA::GUS transgene, whereas one-fourth will inherit the mutant dme-4 allele along with the pMEA::GUS transgene. If DME is a direct activator of maternal MEA allele expression, we would expect to see only 25% GUS-staining ovules in dme-4/DME plants.
Indeed, we found a significant reduction (P = 0.0003) from 47% to 34% GUS-staining ovules in dme-4/DME plants with the 4.8pMEA::GUS transgene (Fig. 2A,B), suggesting that the 4.8pMEA::GUS transgene was partly dependent on DME-mediated activation. In plants hemizygous for 250pMEA::GUS and dme-4/DME, we obtained 46% GUS-staining ovules (Fig. 2A,B). This is not significantly different (P = 0.9667) from the 50% GUS staining found in the DME wild-type background, suggesting that all of the ovules inheriting the dme-4 mutation expressed 250pMEA::GUS. Thus, DME activation of maternal transgene expression before fertilization is dependent on the MEA promoter length, which is likely due to the presence of the AtREP2 helitron 4 kb upstream of MEA. This is supported by a previous study, which demonstrated that 4.2pMEA::GUS and 4.2pMEA::GFP transgenes containing 450 bp of AtREP2 are only active when a maternal wild-type DME copy is provided (Choi et al. 2002). Our 4.8pMEA::GUS transgene, containing 3.8 kb of MEA upstream sequence with only 100 bp of AtREP2, is partially dependent on DME activation, whereas maternal activation of the 250pMEA::GUS transgene is completely independent of DME function. As the 250pMEA::GUS transgene shows exclusive maternal expression, we conclude that DME is not required for imprinting control beyond the native genomic context; i.e., DME is not targeted to the MEA-ICR for activation of maternal MEA expression.

The MEA-ICR mediates paternal transgene silencing by maternal MEA

The MEA promoter analysis revealed the existence of a MEA-ICR in the 200-bp fragment. Subsequently, we could show that maternal MEA allele activation by DME is not targeted to the MEA-ICR on the maternal allele. Therefore, we sought to test whether the previously suggested mechanism for paternal MEA allele silencing, involving DNA and histone methylation (Gehring et al. 2006; Jullien et al. 2006a,b), is mediated by the MEA-ICR. The MEA-FIE complex represses the MEA paternal allele via deposition of repressive H3K27 dimethylation (H3K27me2), which has been found in a region close to the MEA transcriptional start site (Gehring et al. 2006). We asked whether the MEA-ICR still responds to repression by the MEA-FIE complex. Therefore, we analyzed GUS expression in reciprocal crosses of plants homozygous for the 250pMEA::GUS transgene and plants homozygous for mea-1. Pollination of female plants homozygous for the 250pMEA::GUS transgene with mea-1 mutant pollen (Fig. 3A,B) resulted in the same maternal GUS-staining pattern as in females pollinated with wild-type pollen (Supplemental Fig. S2C). Although the 250pMEA::GUS transgene is imprinted and not paternally expressed after fertilization of a wild-type ovule (Supplemental Fig. S2E), we found paternal GUS expression starting from 3 DAP in maternal mea-1 mutant plants (Fig. 3A,B). The number of seeds expressing paternal 250pMEA::GUS in the endosperm increased during development and peaked at 4 DAP, with 31% of the seeds showing paternal 250pMEA::GUS expression.

[Figure 2 legend: (A) Percentage of ovules expressing the 250pMEA::GUS and 4.8pMEA::GUS reporter transgenes in DME/DME and dme-4/DME plants before fertilization. At least four independent DME/DME and four independent dme-4/DME segregants were analyzed for each transgene. Error bars indicate SEM. (n) Total number of ovules analyzed for each genotype; (p) level of significance relative to the difference between the two segregants (t-test). (B) Maternal pMEA::GUS expression of unfertilized ovules in the dme-4/DME mutant background. Bar, 50 µm.]
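The segregation logic behind these crosses, 50% GUS-staining ovules expected if maternal activation is DME-independent versus 25% if a wild-type DME allele is also required, can be checked numerically. The study compared segregants with a t-test; the sketch below instead uses a simple binomial test on pooled ovule counts, and the example counts are hypothetical rather than taken from the figures.

```python
from scipy.stats import binomtest

def gus_fraction_test(stained, total, expected):
    """Compare an observed GUS-staining fraction with a Mendelian expectation:
    0.50 if activation is DME-independent, 0.25 if DME is also required."""
    result = binomtest(stained, total, p=expected)
    return stained / total, result.pvalue

# Hypothetical ovule counts for illustration only:
frac, p = gus_fraction_test(stained=230, total=500, expected=0.50)
print(f"observed fraction {frac:.2f}, P = {p:.3f} against the 50% expectation")
```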
Derepression of the paternally inherited 250pMEA::GUS transgene in maternal mea mutant seeds during 3-4 DAP suggests that the MEA-ICR mediates the repressive function of the MEA-FIE complex.

MET1 is not involved in paternal transgene silencing

We found that the maternal MEA protein is required for repression of the paternal 250pMEA::GUS transgene 3-4 DAP, which is transmitted by the pollen in a transcriptionally silent state (Gehring et al. 2006). As the paternal MEA allele provided by met1 pollen showed no expression in wild-type endosperm 7 DAP, it was concluded that the methylation status of the paternal MEA allele is irrelevant for its transcriptional state (Gehring et al. 2006). However, derepression of the paternal MEA allele in a met1 mutant background might only be visible during early seed development, when the MEA-FIE complex is not yet functionally targeting the MEA locus. We tested the impact of MET1 loss of function during male and female gametogenesis on the expression of the 250pMEA::GUS transgene. In contrast to previous studies that did not distinguish between indirect effects of met1-3 due to global DNA hypomethylation (Saze et al. 2003; Mathieu et al. 2007) and direct effects of met1-3 due to MEA hypomethylation, we isolated met1-3/MET1 plants from a segregating population of wild-type plants pollinated with met1-3/MET1 pollen. In these plants, MET1 activity is missing only in the gametophytes, and thus pre-existing epigenetic misregulation by hypomethylation of genes other than MEA can be excluded. We used only met1-3/MET1 plants that showed full methylation at the 180-bp centromeric repeat (Martinez-Zapater et al. 1986) as an indication of wild-type methylation levels in those plants. We crossed wild-type pollen to females heterozygous for met1-3 and hemizygous for 250pMEA::GUS and investigated maternal GUS activity (Fig. 3C,D). We observed GUS staining in almost all prefertilization ovules and developing seeds after fertilization inheriting the reporter construct (maximum 50%), as in wild-type females (Supplemental Fig. S2C). In contrast, when we used pollen from plants heterozygous for met1-3 and hemizygous for the 250pMEA::GUS transgene to fertilize wild-type females, we found no (1 DAP) or only very few (2, 3, and 4 DAP) seeds with paternal GUS activity (Fig. 3C,D). Thus, a lack of MET1 activity during female or male gametogenesis has no effect on imprinted expression of the 250pMEA::GUS transgene.

Silencing of the endogenous paternal MEA allele is controlled by maternal MEA

The MEA-ICR in the paternally inherited 250pMEA::GUS transgene responds to repression by maternal MEA but not by MET1. In order to correlate the control of paternal transgene silencing with the control of endogenous paternal MEA allele silencing, we quantified MEA allele-specific transcripts in mea mutants and in combinations of mea mutants with met1-3 mutants. We first investigated the role of maternal MEA in paternal MEA allele silencing during early seed development. Therefore, we reciprocally crossed MEA wild-type plants with mea homozygous plants and quantified MEA allele-specific transcripts from 1-4 DAP (Fig. 4). Maternal transcripts in reciprocal crosses of MEA/MEA and mea/mea plants accumulated to their highest level before fertilization and decreased afterward (Fig. 4B). No paternal transcripts were detectable in a maternal MEA wild-type background, whereas in a maternal mea mutant background, paternal MEA allele silencing was released already at 1 DAP (Fig. 4C).
Derepression of the paternal MEA allele continued until 4 DAP and resulted in more or less constant levels of paternal MEA transcripts. Remarkably, the level of derepressed paternal MEA transcripts in maternal mea/mea mutants represented only 19.5% (0.1563 of 0.8008) of the amount of maternal MEA transcripts in the maternal wild-type background (Fig. 4B,C; Supplemental Table S5). However, maternal transcription is no longer autorepressed and is highly up-regulated in mea/mea mutant plants (Baroux et al. 2006), so the paternal MEA transcripts represented only 1.8% (0.1563 of 8.6833) of the amount of maternal mea transcripts in mea/mea plants (Supplemental Table S5). Thus, derepression of the paternal MEA allele in a maternal mea mutant background does not result in equivalent expression levels of the two parental alleles. The low level of derepressed paternal MEA expression indicates weak paternal MEA promoter activity, which might explain why paternal 250pMEA::GUS expression is only detected 3-4 DAP (Fig. 3A,B). Taken together, we observed derepression of a paternally inherited 250pMEA::GUS transgene and derepression of the endogenous paternal MEA allele in maternal mea mutants. This suggests that the MEA-ICR is the target of the MEA-FIE complex at the endogenous MEA locus.

[Figure 4 legend (partial): (A) … qualitatively whether there is maternal and/or paternal MEA expression, but are unsuitable to infer quantitative differences. The paternal (♂) and maternal (♀) RT-PCR products are indicated. Actin 11 (Act11) was used as loading control. (B,C) Quantification of maternal (B) and paternal (C) transcripts by RT-qPCR. Transcript levels were normalized to Act11. No significant differences in transcript levels were found between crosses with and without met1-3 (braces below the X-axis indicate pairwise t-tests). Note the different scales for maternal and paternal transcripts. Error bars indicate SEM of three biological replicates.]

Since a lack of maternal MEA-FIE PcG activity only leads to very weak derepression of the endogenous paternal MEA allele, we wondered whether DNA methylation might be involved in keeping it largely silenced. Thus, we asked again whether MET1 has any residual role in paternal MEA allele silencing and crossed mea/mea mother plants with either wild-type or met1-3 mutant pollen and analyzed allele-specific MEA expression levels (Fig. 4A,C). In mea/mea mutant mothers, the paternal MEA allele was derepressed when transmitted by both wild-type and met1-3 pollen, with no significant change in the level of derepression (Fig. 4C). This shows that MEA, presumably as part of the maternal MEA-FIE complex, represses the paternal MEA allele independent of its methylation status maintained by MET1 during male gametogenesis. Thus, even after removal of both known repressing factors, the maternal MEA-FIE complex and MET1, the paternal MEA allele is still expressed at extremely low levels compared with the maternal MEA allele; in other words, it is still imprinted. Furthermore, we detected no paternal MEA transcripts in the reciprocal cross, when mea homozygous mutant pollen was crossed to met1-3 heterozygous females. We conclude that paternal MET1 during male gametogenesis and maternal MET1 during early seed development are not required for paternal MEA silencing and thus play no significant role in imprinting at the MEA locus.

The MEA-ICR is unmethylated

Our comparative analysis of MEA transgene and endogene regulation revealed that the MEA-ICR is not targeted by DME and MET1.
Thus, contrary to what was previously suggested (Gehring et al. 2006; Jullien et al. 2006a), MEA imprinting regulation is not primarily controlled by differential DNA methylation. Therefore, we speculated that there is either no DNA methylation at all at the MEA-ICR or no differential DNA methylation between active and silent MEA alleles. We analyzed MEA promoter methylation in isolated central cells and sperm cells as well as in isolated two-cell stage embryos, where the maternal MEA allele is expressed (Vielle-Calzada et al. 1999; Spillane et al. 2007). In parallel, we monitored FWA promoter methylation, which exhibits imprinting control through a differentially methylated SINE-related element in its promoter (Kinoshita et al. 2004, 2007). In sperm cells, we found high levels of FWA promoter methylation in the CG context, consistent with previously reported methylation levels in pollen (Fig. 5E,G; Kinoshita et al. 2004). Surprisingly, we found only a small reduction of CG methylation at the FWA locus in the central cell, suggesting that DNA methylation is fully removed only after fertilization (Fig. 5E,F). Contrary to this, we detected almost no methylation in the 250-bp MEA promoter from sperm cells and central cells in any sequence context (Fig. 5A-C). In addition, we analyzed methylation in two-cell stage embryos early after fertilization. We detected high methylation levels of FWA in the CG context in the embryo (Fig. 5E,H), consistent with MET1-dependent silencing of the parental FWA alleles. However, we found no methylation in the 250-bp MEA promoter in the embryo, where the maternal MEA allele is expressed (Fig. 5A,D). In summary, MET1-dependent FWA silencing in sperm cells, central cells, and the embryo correlates with DNA methylation in the SINE-related repeat region of its promoter. However, the MEA-ICR in the 250-bp MEA promoter carries no DNA methylation in any reproductive cell. This confirms our finding that DME is not targeted to the MEA-ICR for maternal MEA allele activation and that MET1 is not involved in paternal MEA allele silencing. Thus, MEA is regulated differently from FIS2 and FWA, and presently unknown factors, together with the MEA-FIE complex, must be responsible for the imprinted expression of MEA.

Discussion

The MEA-ICR maps to a 200-bp region and displays no differential DNA methylation

In plants, the primary DNA sequences responsible for genomic imprinting have remained elusive. Studies involving transgenes to identify the cis-determinants of imprinted expression in Arabidopsis and maize indicated that plant ICRs are located close to the imprinted loci (Luo et al. 2000; Kinoshita et al. 2004; Gehring et al. 2006; Gutierrez-Marcos et al. 2006; Makarevich et al. 2008). We identified the 200-bp upstream region adjacent to the MEA translational start site as the minimal sequence necessary to confer cis-activation and imprinted expression of a GUS transgene. The proximity of the MEA-ICR and the MEA locus is in contrast to mammalian ICRs, which can be located >100 kb distal from the imprinted loci (Ferguson-Smith and Surani 2001). Mammalian ICRs are typically a few kilobases in length and exhibit parental allele-specific DNA methylation (Bartolomei 2009). However, the MEA-ICR maps to a 200-bp fragment and is essentially unmethylated, excluding DNA methylation as the epigenetic mark distinguishing maternal and paternal MEA alleles. This is in contrast to the cis-elements involved in imprinting at the FWA and PHE1 loci.
Maternal-specific expression of FWA in the endosperm is due to differential methylation of a SINE-related element located in the FWA promoter (Kinoshita et al. 2007). Yet our analysis of DNA methylation in gametes shows that differential methylation at FWA is only established after fertilization. This suggests that the primary germline imprint at the FWA locus is not the DNA methylation mark itself. Imprinting of PHE1 results in preferential paternal expression in the endosperm and correlates with differential methylation of tandemly repeated motifs located 3 kb downstream from the PHE1 gene (Makarevich et al. 2008). Furthermore, differential DNA methylation between the parental alleles has been described for the maize imprinted genes ZmFie1 and ZmFie2 (Gutierrez-Marcos et al. 2006). Interestingly, ZmFie2 is unmethylated in both central cells and sperm cells prior to fertilization, and the differential methylation pattern is only established after fertilization, also indicating that the primary germline imprint is not a DNA methylation mark. In addition, several of the potentially imprinted genes recently identified by transcriptome profiling are unaffected by mutations in one or even all of the known imprinting factors (i.e., DME, MET1, and FIE) (Hsieh et al. 2011; Wolff et al. 2011), suggesting additional, yet-undiscovered imprinting regulators. MEA is an imprinted gene that is not controlled by differential DNA methylation at the ICR. A related situation may occur in the mouse Prader-Willi/Angelman region, which shows complex imprinting control involving several cis-acting elements, one of which is not differentially methylated but is required to establish parental imprints at other sites (Kaufman et al. 2009). Moreover, it was recently shown that in macaques, some ICRs that acquire a germline DNA methylation imprint in mice are not methylated in the germline and acquire a differential methylation mark only post-fertilization (A. Ferguson-Smith, pers. comm.). Thus, primary imprints that do not involve germline DNA methylation appear to exist in both plants and mammals. Future studies will show whether common regulatory mechanisms indeed exist between nonmethylated ICRs in mammals and plants.

Imprinting control at the MEA-ICR is independent of DME and MET1

Maternal allele expression of MEA and other maternally expressed imprinted genes depends on the removal of MET1-dependent DNA methylation (Choi et al. 2002; Kinoshita et al. 2004; Jullien et al. 2006b). Consistent with the lack of significant DNA methylation at the MEA-ICR, the imprinted 250pMEA::GUS transgene is maternally activated independent of DME, suggesting that DME is only required in the endogenous context, probably targeting a region different from the MEA-ICR. Although involved in imprinting, DNA methylation in flowering plants primarily silences transposons and repeat elements (Henderson and Jacobsen 2007). Thus, the 590-bp AtREP2 transposon element located 4 kb upstream of the MEA start codon represents a likely DME target. Indeed, the previously described 4.2pMEA::GUS transgene containing 450 bp of the AtREP2 is fully dependent on DME for activation (Choi et al. 2002), whereas the 4.8pMEA::GUS transgene containing 3.8 kb of MEA upstream sequence with 100 bp of the AtREP2 is only partially dependent on DME (this study).
Therefore, we hypothesize that DME is only indirectly involved in the activation of endogenous maternal MEA transcription, by demethylation of the AtREP2. Based on our results, we propose a new model of MEA imprinting regulation (Fig. 6). The methylated AtREP2 would interact with an unidentified region of the MEA locus to establish a silent higher-order chromatin structure, e.g., a repressive chromatin loop. This prevents the MEA promoter from being accessed by an unknown transcriptional activator binding the MEA-ICR. Demethylation of AtREP2 by DME in the central cell resolves the repressive chromatin loop and allows the transcriptional activator to access the MEA-ICR. The repressive chromatin loop is not resolved in the male gametophyte, where DME is not expressed, resulting in exclusive maternal MEA allele expression. Since the paternal MEA allele is not fully activated even if both known repressing activities, MET1 and the MEA-FIE complex, are removed, additional paternal repressors involved in imprinting control have to be postulated, possibly including a PcG complex with a histone methyltransferase other than MEA. In mammals, chromosome conformation capture experiments revealed that chromosome looping is involved in imprinting control (Lopes et al. 2003; Kurukuti et al. 2006; Yoon et al. 2007; Engel et al. 2008). More specifically, interactions of differentially methylated regions (DMRs) at the mouse H19/Igf2 locus were shown to partition maternal and paternal chromatin into distinct loops, generating an epigenetic switch to control allele-specific expression (Murrell et al. 2004). Our findings raise the possibility that MEA imprinting control might depend on a similar mechanism involving higher-order chromatin structure controlled by DME and MET1. This hypothesis is consistent with recent reports that DME is involved in genome-wide demethylation of the maternal genome in the endosperm, especially of transposons and repeat elements (Gehring et al. 2009; Hsieh et al. 2009). Intriguingly, all characterized imprinted genes in plants are hypomethylated on the maternal allele, regardless of which allele is expressed. This suggests that DME-dependent demethylation in the endosperm does not specifically target imprinted genes, but rather is a nearly universal process that reshapes DNA methylation of the entire maternal genome in the endosperm.

The imprinting factors required for paternal MEA silencing remain unknown

Two epigenetic silencing marks were found at specific sites of the MEA locus: DNA methylation and histone H3K27 di- and trimethylation (H3K27me) (Xiao et al. 2003; Gehring et al. 2006; Jullien et al. 2006a). We report that lack of MET1 during male gametogenesis does not derepress the paternal MEA allele 1-4 DAP. This complements previous studies with met1 mutant pollen that showed no paternal MEA allele expression 7-9 DAP (Gehring et al. 2006). Whereas DNA methylation is irrelevant for paternal MEA allele silencing, PcG-mediated histone methylation is necessary for it (Gehring et al. 2006; Jullien et al. 2006a). Maternal MEA is involved in the deposition of repressive H3K27me at the paternal MEA allele close to the translational start site (Gehring et al. 2006). We found derepression of a paternally inherited 250pMEA::GUS transgene in the maternal mea mutant background, suggesting that the MEA-ICR in the 250-bp MEA promoter is targeted by the maternal MEA-FIE complex.
However, it is unclear how the MEA-FIE complex gains access to the silent chromatin loop of the paternal allele to maintain silencing after fertilization. Possibly, the repressive machinery, including the MEA-FIE complex and other proposed repressors, has access to cis-regulatory elements in repressive chromatin loops, whereas the activating machinery is efficiently prevented from binding to the MEA-ICR. We found derepression of the paternal MEA allele in the maternal mea mutant background already at 1 DAP. This contradicts recent findings of delayed paternal derepression, which were explained by the need for passive loss of repressive H3K27me on the paternal MEA allele (Jullien et al. 2006a). Surprisingly, derepressed paternal MEA transcripts in maternal mea mutant plants represent only 14% of maternal MEA transcripts in maternal wild-type plants. This resembles the observed residual transcriptional activity of the silent maternal PHE1 allele (Köhler et al. 2005). Similarly, in mice, paternal alleles of several imprinted genes in the IC2-imprinted domain are not completely silent (Lewis et al. 2004). Even though the silent paternal MEA allele is derepressed in mea mutant plants, parental transcript levels are clearly not equivalent and still show parent-of-origin-dependent differences. Assuming equivalent parental expression levels in a background of fully compromised imprinting, the main components involved in paternal MEA allele silencing remain to be identified, because the paternal MEA allele is still imprinted when MET1 and the MEA-FIE complex are missing. As the MEA-ICR confers paternal MEA silencing beyond the native genomic context, loop formation is not sufficient to explain paternal MEA silencing. Thus, another unknown repressor binding to the MEA-ICR, along with the proposed PcG complex (Jullien et al. 2006a), may be required for paternal MEA repression (Fig. 6). In summary, our promoter dissection identified the MEA-ICR in the 200-bp MEA upstream sequence. The MEA-ICR carries no significant methylation in sperm cells, central cells, and two-cell stage embryos, which to our knowledge is the first example of an ICR without differential DNA methylation. DME, the key factor necessary for specific activation of maternally expressed imprinted genes in Arabidopsis, is dispensable for the activation of maternal MEA allele transcription. Instead, DME and MET1 may be involved in the regulation of a higher-order chromatin structure at the MEA locus, thereby only indirectly controlling the specific marking and activation of the maternal MEA allele by unknown factors. However, a repressive chromatin structure at the paternal MEA locus alone cannot explain paternal MEA silencing, which is mediated through the MEA-ICR beyond the native genomic context by still unknown MEA imprinting factors.

Materials and methods

Generation of pMEA::MEA and pMEA::GUS constructs

All pMEA::MEA constructs were cloned into pCAMBIA3300 containing the corresponding MEA promoter sequence and the entire MEA ORF amplified from genomic Ler DNA. All pMEA::GUS constructs contain the corresponding MEA promoter sequence amplified from genomic Ler DNA, cloned in-frame to the GUS reporter gene in pCAMBIA1381Z. Promoter deletions were made using different primer pairs amplifying differently sized amplicons, which were subsequently cloned into the above-mentioned vectors. For a detailed cloning procedure, see the Supplemental Material.
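The promoter deletion series described here amounts to simple coordinate arithmetic on the upstream sequence. The sketch below is illustrative only: the function names are ours, and it assumes the promoter string ends immediately before the ATG, with positions counted back from the translational start.

```python
# Truncation lengths used for the pMEA constructs (bp upstream of the MEA ATG).
TRUNCATIONS = [1330, 250, 200, 150]

def upstream_fragment(promoter, length):
    """Return the `length` bp immediately upstream of the start codon."""
    if length > len(promoter):
        raise ValueError("requested fragment is longer than the available sequence")
    return promoter[-length:]

def internal_deletion(promoter, start, end):
    """Remove the region between -start and -end bp (e.g., -200 to -150),
    as in the Delta-1330 construct."""
    return promoter[:-start] + promoter[-end:]

# fragments = {n: upstream_fragment(mea_promoter, n) for n in TRUNCATIONS}
```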
Microscopy and GUS staining

Histochemical analysis of GUS reporter gene expression was essentially done as described in Baroux et al. (2006). Microscopic inspection was carried out under differential interference contrast (DIC) optics using a Leica DMR microscope (Leica Microsystems). A detailed description can be found in the Supplemental Material.

RT-PCR analyses

Reverse transcription was performed as previously published (Baroux et al. 2006) on 20 gynoecia before fertilization or on 10-15 siliques at 1-4 DAP, depending on the stage indicated in the corresponding figure. In all experiments, transcript levels were normalized to the level of ACTIN11 (Huang et al. 1997). For the detailed protocol and primers used, see the Supplemental Material.

Bisulfite DNA sequencing of isolated reproductive cells

Central cells were isolated using laser capture microscopy, sperm cells were isolated using a Percoll density gradient (M. Schauer and U. Grossniklaus, unpubl.), and embryos were isolated as previously described (Autran et al. 2011). DNA isolation and bisulfite conversion were essentially performed as described in the Epigenetics Protocols Database "Bisulphite sequencing of small DNA/cell samples" (PROT35; http://www.epigenome-noe.net/researchtools/protocols.php). Subsequently, regions of interest (the 250-bp MEA promoter and the SINE-related tandem repeat in the FWA promoter) were amplified. Purified bisulfite PCR products were cloned into the pGEM-T vector (Promega) and several independent clones were sequenced (for the sperm cell and embryo samples), or purified PCR products were directly sequenced with the 454 sequencer according to the standard protocol (central cell samples). All sequences were analyzed with the BiQ Analyzer software (Bock et al. 2005) for quality control and removal of identical clones in a standardized manner. For a more detailed description, see the Supplemental Material.
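The clone handling attributed to BiQ Analyzer (quality control and removal of identical clones) and the per-context methylation calls reported in the Results can be approximated as sketched below. This is not the BiQ Analyzer algorithm: it assumes clones are already aligned to the reference, that bisulfite conversion turns every unmethylated C into T, and it ignores gaps and sequencing errors.

```python
def cytosine_context(ref, i):
    """Classify a reference cytosine at position i as CG, CHG, or CHH."""
    if ref[i] != "C":
        return None
    if i + 1 < len(ref) and ref[i + 1] == "G":
        return "CG"
    if i + 2 < len(ref) and ref[i + 1] != "G" and ref[i + 2] == "G":
        return "CHG"
    return "CHH"

def methylation_by_context(ref, clones):
    """Per-context methylation: a C in a clone at a reference-C position is
    read as methylated; a converted (unmethylated) C appears as T."""
    unique = set(clones)                                    # drop identical clones
    counts = {ctx: [0, 0] for ctx in ("CG", "CHG", "CHH")}  # [methylated, total]
    for clone in unique:
        for i, base in enumerate(clone):
            ctx = cytosine_context(ref, i)
            if ctx is None:
                continue
            counts[ctx][1] += 1
            if base == "C":
                counts[ctx][0] += 1
    return {ctx: (m / t if t else 0.0) for ctx, (m, t) in counts.items()}
```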
2018-04-03T04:05:45.639Z
2012-08-15T00:00:00.000
{ "year": 2012, "sha1": "5fd8f7349000ef202cc58d1e18e52b1f524ffe1c", "oa_license": null, "oa_url": "http://genesdev.cshlp.org/content/26/16/1837.full.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "c4724fcf30769a2216b6e60f71df26185b32fac3", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
55667953
pes2o/s2orc
v3-fos-license
Element Concentration in the Prepared and Commercial Feed as Well as Their Status in the Breast Muscle of Chicken after Prolonged Feeding

The quality of poultry meat depends upon the feed, and many feeds are commercially available. However, their composition and standard may not remain the same throughout the year, and there is little information on locally produced feed, particularly in the north-eastern part of India. The major objective of this study was to prepare a mash feed (E1) from locally available ingredients and to compare its effects with those of two commercially available feeds, Amrit (E2) and Godrej (E3), in terms of crude protein, fat and element composition. The findings showed that protein (240 g/kg) and fat (105 g/kg) contents in the breast muscle of females were higher in birds that received E3 than in broiler chickens that received the local feed. Element analysis of E1, E2 and E3 depicted significant differences in Ca, K, Cu, Zn and Se between the prepared and commercial feeds, while other elements, such as Mg, Na, Fe, P and Mn, showed no variation when E1, E2 and E3 were compared. The present findings thus suggest that the local feed E1 could be accepted at par with the commercial feeds for poultry.

Introduction

Poultry is a rapidly growing industry and, being a rich source of protein and micronutrients, contributes a useful component to the human diet. The modern broiler has generally been selected for rapid gains and efficient utilization of nutrients. Broilers are capable of thriving on widely varied types of diets, but do best on diets composed of low-fiber grains and highly digestible protein sources. The nutritional quality of broiler meat always depends on proteins, free amino acids, minerals and vitamins. Lazar [1] was of the view that the ideal quantum of certain minerals in poultry meat should be K (0.4%), P (0.2%), Na (0.09%), etc. Significantly, the quality of the meat depends mostly upon the type of feed. Generally, proteins are the major constituents of poultry meat [2], along with the mineral constituents. In broiler nutrition, the concept of ideal protein has been widely accepted, with growth rate and breast meat yield increasing with balanced protein intake. It has also consistently been shown that, if an adequate quantity of essential nutrients is maintained in relation to metabolic energy (ME), increasing concentrations of energy for broilers result in more rapid weight gain and an improvement in feed conversion [3]. On the other hand, differences in nutrient content between males and females, particularly in the breast muscle of the female, which is finer and more tender [4], are of significance. In addition, many stress stimuli, including dietary stress, can act on homeostasis, which in turn may cause loss of macro- and microelements, whose deficiency in meat may affect its quality. Good broiler growth rates will be achieved if the daily nutritional requirement of the bird is met. The ability of the bird to achieve its daily nutritional requirement will, in part, depend upon the nutrient composition of the diet, since what the bird actually responds to in feed is nutrient intake. For good broiler growth and efficient nutrient utilization it is, therefore, vital to ensure that a good feed intake is achieved. Feed intake can be significantly affected by feed form. A poor feed form will inhibit feed intake and have a negative impact on growth rate [5].
The cost, easy availability and nutritional quality of feed, coupled with genetically potent, feed-efficient stocks to produce safe food from poultry, are the need of the hour [6]. The relationship between feed ingredients and animal product output is both direct and obvious, and conventional feedstuffs are very expensive and scarce [7]. The feedstuffs commonly used in poultry diets, i.e., maize, soybean meal, groundnut meal, fish meal, etc., are chosen mainly for their nutritive value as well as the absence of any incriminating factor. Moreover, data on the available quantum of macro- and microelements, whether in commercial feed or in indigenously prepared feed, are deficient. Hence, an attempt has been made to determine the contents of Ca, Mg, P, Na, K, Fe, Cu, Zn and Mn in the commercially available feeds and the indigenously prepared feed, as well as their accumulation in the breast muscle of broiler chicken.

Experimental Birds

300 VENCOBB commercial broiler chicks of uniform weight (day-old) were procured from a commercial hatchery. Chicks were weighed individually and wing-banded. Chicks were randomly allotted, according to nearest body weight, to three experimental treatment groups. Each group was subdivided into two replicates of 50 chicks.

Experimental Ration

Three experimental rations were considered: a ration prepared with conventional ingredients (E1) and two rations procured from the market, Amrit (E2) and Godrej (E3), respectively. Eighty samples were collected from sales and display centres of feed mills, mainly from the Jorhat, Golaghat, Sonitpur and Kamrup (urban) districts of Assam, India. These compounded feeds available in Assam were collected randomly for chemical analysis. While collecting the samples, at least three samples from each company were collected and a pooled sample was analyzed. These two compounded feeds were in the form of crumbles and pellets. The control diet (Ration-1) was formulated for pre-starter (0-1 week), starter (1-4 weeks) and finisher (4-6 weeks) separately with conventional ingredients. Pre-starter, starter and finisher rations were calculated for crude protein (CP) and metabolic energy (ME) values. All the ingredients were obtained from the local market.

Physical and Chemical Composition of the Prepared Rations of Broiler

As per standard protocol, the control (Ration 1) was prepared as pre-starter, starter and broiler finisher for the experimental birds [8]. The prepared rations and their analyses are presented in Tables 1, 2 and 3. Feed toxicity, mainly aflatoxin, was tested for the three rations by the ELISA method. In the present trial, the three experimental rations and the feed ingredients taken for preparation of experimental Ration-1 were found aflatoxin negative and below the toxic level.

Housing and Management

The chicks were housed in a clean, well-ventilated room, previously disinfected with formalin. Each group was represented by two replicates of 50 birds. The experimental house was divided into six pens of equal size by using bamboo materials and wire netting. The doors, windows, wire netting, etc.,
of the house were painted before starting the experiment. Fresh dried rice husk litter was spread on the floor of the pens at a depth of about 0.04 m before placing the chicks in the pens, to maintain brooding temperature. The room was provided with electric heaters to adjust the environmental temperature and to provide light as per requirement. Each pen was equipped with two 100 W electric bulbs suspended 0.4 m above the litter. The feeders and drinkers were fixed in such a way that the birds could eat and drink conveniently. The chicks were vaccinated against Newcastle disease and infectious bursal disease and provided with required veterinary care. Each group of chicks received F-strain RD virus vaccine intra-ocularly at the dose rate of 0.05 ml/bird on the 5th day and IBD vaccine intra-ocularly at the dose rate of 0.05 ml/bird at 14 days of age. A booster dose of RD vaccine F-strain was given at 21 days of age.

Feeding of Birds: Chicks were reared from 1 day old to 42 days on the experimental diets and were allowed ad libitum access to feed and water throughout the study. On the first day the chicks were provided with only crushed maize and then given the pre-starter diet along with drinking water. They were fed the pre-starter, starter and finisher rations from the day of assigning the treatment; at the beginning of the starter period, the required amount of broiler starter feed was procured. The same procedure was followed in the procurement of the broiler grower and broiler finisher diets.

Meat samples of breast muscle were taken from the carcass of each group with scissors and a sharp knife, wrapped in polythene bags and kept in a deep freezer for proximate analysis of meat as outlined by AOAC [9].

Crude Protein: Nitrogen in the sample was determined by the Kjeldahl method and multiplied by the factor 6.25 to determine the crude protein content of the feed. The representative feed sample was first weighed and digested with concentrated H2SO4 in the presence of 10 g anhydrous sodium sulphate and 0.5 g copper sulphate. The digested material was dissolved in distilled water and collected in a 250 ml volumetric flask. Then, a known volume of aliquot was distilled after being made alkaline with 45% sodium hydroxide solution, and the liberated ammonia was trapped in 2% boric acid solution. The same was titrated against N/10 H2SO4 (standard). The percentage of nitrogen was calculated from the titre value.

Experimental Design: The feeding trial was conducted for a period of 42 days; A1 and A2 were simply the two replicates of each of E1, E2 and E3, respectively.

Crude Fats: The wet tissue was homogenized with a mixture of chloroform and methanol (3:1 v/v) in such proportions that a miscible system was formed with the water in the tissue. Dilution with chloroform and water separated the homogenate into two layers, the chloroform layer containing all the lipids and the methanolic layer containing all the non-lipids. A purified lipid extract was obtained merely by isolating the chloroform layer [10] and weighing it.
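The nitrogen formula referred to above did not survive extraction. The standard Kjeldahl calculation implied by the described procedure (digest made up to 250 ml, an aliquot distilled, liberated ammonia titrated against N/10 H2SO4) has the following form; the symbols and the blank correction are a reconstruction, not a quotation from the paper:

\[
\%N = \frac{(V_s - V_b)\times N_{acid}\times 0.014 \times (250/V_a)}{W}\times 100,
\qquad
CP\,(\%) = \%N \times 6.25
\]

where V_s and V_b are the titre volumes (ml) for the sample and blank, N_acid is the normality of the standard H2SO4 (0.1 N), V_a is the volume of the aliquot distilled (ml), 250 is the digest volume (ml), W is the sample weight (g), and 0.014 is the milliequivalent weight of nitrogen. The factor 6.25 converts nitrogen to crude protein, as stated above.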
Element Analysis: Samples from the finisher feeds E1 (indigenous feed), E2 (Amrit) and E3 (Godrej), as well as from the breast muscle of the 42-day-old chickens, were considered for crude protein (CP) and elemental assessment. Breast muscles obtained from the chickens were kept in 10% formalin for 24 h, and thereafter 4-5 g of them were subjected to acid digestion [11]. In brief, the weighed samples were digested with HNO3-HClO4 (3:1) until a clear solution was obtained. The sample was diluted to 50 ml with deionized water and subjected to element analysis in an atomic absorption spectrophotometer (Perkin-Elmer 3110) at SAIF, NEHU, Shillong, India. The contents of Na and K were determined by the emission technique (acetylene-air flame). P was determined by a colorimetric method [12]. The data collected were subjected to ANOVA followed by Fisher's test of significance.
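To make the workflow concrete, the sketch below back-calculates a tissue concentration from an AAS reading on the 50-ml digest and runs the one-way ANOVA mentioned above. The replicate readings are hypothetical placeholders, the 4.5-g sample mass is assumed from the 4-5 g stated above, and scipy's f_oneway stands in for the ANOVA; the follow-up Fisher's test of significance is not implemented here.

```python
from scipy.stats import f_oneway

def tissue_concentration(reading_mg_per_L, digest_volume_mL=50.0, sample_mass_g=4.5):
    """Convert an AAS reading on the diluted digest (mg/L) to the element
    content of the original tissue (mg/kg)."""
    total_mg = reading_mg_per_L * digest_volume_mL / 1000.0   # mg of element in the digest
    return total_mg / (sample_mass_g / 1000.0)                # mg per kg of tissue

# Hypothetical replicate readings (mg/L) for one element in the three feed groups:
e1 = [0.42, 0.45, 0.40, 0.44]
e2 = [0.31, 0.29, 0.33, 0.30]
e3 = [0.28, 0.30, 0.27, 0.29]
f_stat, p = f_oneway(e1, e2, e3)
print(f"one-way ANOVA: F = {f_stat:.2f}, P = {p:.4f}")
```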
Analysis of breast muscle after 42 day of feeding and that too with finisher demonstrated significantly higher value of crude protein (240 g/kg) and fats(105.5mg/kg) in the E3 feed female breast muscle against E2 (231.2 and 70.1 g/kg proteins and fats respectively) and E1 (210.0 and 80.0 g/kg proteins and fats respectively) feed.The quality of feature of the meat depends upon the protein and fats.The variation recorded in this investigation might be an attribution of the feed content and there has been insignificant variation of protein between E1 (Indigenous) and E2 (Amrit).However, less amount of fat accumulation could be noticed in male under the feeding of E1 while it is higher in E3 female, which might be related to the egg production.The higher quantum of protein (22.26%) in the breast muscle of male and female and higher value of fats in female (2.78%) in broiler chicken have already been highlighted [14].Further a higher fat content in female broiler [15] and its difference is associated with the metabolic differences, higher competitiveness, and variation in fat deposition capabilities, nutritional requirement and higher hormonal influences [16].The muscles are finer and tender in female and the female accumulate more fats compared to male [4].Earlier, it was suggested low fat quantity in male abdomen [17].In fact, breast muscle contain approximately 24% crude protein against 22% in broiler chicken [18], while it was recorded as 219 g/kg of protein and 16.7 g/kg of fats respectively in male and female broiler chicken [19]. The present finding is very much suggestive of the negative correlation between the fats and protein content and in conformity with the work of various workers [19,20].However, it is true that the fat content in meat largely depends upon the factors like animal species, breed, gender and anatomic origin of the muscles.The important feature of broiler chicken meat from dietary aspects might be from dietetic aspects and excessive accumulation of fat lies in an imbalance between feed intake and consumption of energy [21].The present feeding suggested higher quantum of protein in breast muscle in the finished product and it could reasonably be argued that the indigenously prepared feed might be considered for quality meat production. Evaluation of elements analysis showed presence of higher Ca (43.64 mg/g) in the E1 fed female broiler, significantly different from that of E2 and E3 fed chicken breast muscles.However, the present findings could not support the data of some workers for Calcium [19].This might be due to the difference in breed or strain, however, demands further characterization. 
The Mg quantum in the breast muscle was evaluated and ranged between 310 to 450 mg/g of breast tissue and significantly higher Mg was noted in male and female E3 fed feed.The variation might, probably be due to the feed composition and notably the higher Mg is related to the cardiovascular disease [22].Moreover, higher Mg in the diet has been linked to a 22% lower risk of Ischemic heart disease [23].Yet the breast muscle of the E1 and E2 group has had higher range of Mg against 130 mg/g in Ross 308 broiler chicken [19] The macro element P was in higher direction in the range of 266 to 411 mg/g, while the P in E1 fed female breast muscle showed significantly higher quantity (P < 0.01; 411 mg/g).The possible reason might be due to the variation of feed intake and therefore, it is suggested that the male and female should distinctly be segregated.On the other hand, the importance of P in various activities have already been suggested and it was stated that P influences on the release of energy from protein, carbohydrate and fats while the body was exposed to stress bearing factors [24]. Trance quantities of minerals present in the tissue serve a variety of function in their bodies.The present findings for Mn and Cu showed their identical presence amongst the E1, E2 and E3 feed categories, while the Fe quantum of female fed with E1 feed depicted significantly higher values (4.7 µg/g) and the Se and Zn in E3 feed breast muscle exhibited higher values (2.9 µg/g).Significantly higher quantity of K was noted in male, fed with E1 feed.Presence of identical quantity of Na and Se was recorded amongst the three groups (Table 2).As mentioned elsewhere, Na, K, Zn, Cu, Mn, Se, Fe occurring tissue affects the osmotic balance in the body [25].The median value of potassium in BBQ breast meat has been estimated at 284 mg/100g.The average amount of potassium in 100 g of chicken breast is 273.77mg against the 450.0 µg/g in the breast tissue fed with local [25].The concentration of selenium (Se) in chicken breast meat in Scandinavia is, however, only about 0.01 mg/100g.Thigh meat from broilers raised on a feed supplemented with 40 g rapeseed oil, 10 g linseed oil per kg diet and 0.27, 0.44, 0.78 or 1.16 mg Se per kg diet could be described as a functional food.This broiler meat is a good contribution to a better strategy for increasing the food content of Se and very long chain omega-3 fatty acids [26].Presence of optional quantum of elements in feed mixtures enable the proper functioning of an organism and good production performance [27] and thus the prepared feed E1 could be of ideal with that of the commercially available feeds. Element Concentration in the Prepared and Commercial Feed as Well Their Status in the Breast Muscle of Chicken after Prolonged Feeding 1185 Table 2 . Physical and calculated chemical composition of starter feed. ME: Metabolic energy, CP: Crude proteins. Ingredients Parts/100kg ME/kg ME (Kcal/kg) CP% CP Element Concentration in the Prepared and Commercial Feed as Well Their Status in the Breast Muscle of Chicken after Prolonged Feeding 1186 Table 5 . Evaluation of protein (g/kg), fats (g/kg) and elements in the breast muscle of male (M) and female (F) Broiler chicken fed with indigenous feed (E1), and two commercially available feed E2 (Amrit) and E3 (Godrej) for a period of 42 days. 
Concentration in the Prepared and Commercial Feed as Well Their Status in the Breast Muscle of Chicken after Prolonged Feeding 1188 Element Element Concentration in the Prepared and Commercial Feed as Well Their Status in the Breast Muscle of Chicken after Prolonged Feeding 1189 feed E1 in this investigation.It is evident that Cu regulates cholesterol biosynthesis and the distribution of fatty acids in poultry
2018-12-12T02:46:07.280Z
2013-10-07T00:00:00.000
{ "year": 2013, "sha1": "694549a37e728ba712c89de7856bc60fad2f5f74", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=37666", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "694549a37e728ba712c89de7856bc60fad2f5f74", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
211268949
pes2o/s2orc
v3-fos-license
Mitigating water stress on wheat through foliar application of silicon

Climate change emerges in different forms, such as drought, which is prevalent all over the world, especially in semi-arid and arid regions. Crop production, especially wheat (Triticum aestivum L.) yield, is affected by water shortage at critical growth stages in Pakistan. A greenhouse experiment using plastic trays was conducted at the College of Agriculture, University of Sargodha, Pakistan, to assess the response of wheat to exogenous silicon (Si) application under water stress, which was imposed by skipping irrigations at critical stages. The experiment included irrigation levels (I1: irrigation at crown root stage + booting stage; I2: irrigation at crown root stage + anthesis stage; I3: crown root stage + grain development stage; I4: crown root stage + booting stage + anthesis stage + grain development stage; I5: crown root stage + tillering stage + booting stage + earing stage + milking stage + dough stage) and foliar application of Si, viz. Si0: 0% (control), Si1: 0.25%, Si2: 0.50%, and Si3: 1% (w/v). The treatment combination I1 + Si0 significantly reduced yield and yield attributes, net assimilation rate, Si content in plants, leaf water potential, chlorophyll content, root length and water use efficiency, and furthermore increased evapotranspiration efficiency. In contrast, the treatment combination I5 + Si3 significantly increased these parameters and reduced evapotranspiration efficiency. Moreover, the treatment combinations I4 + Si3 and I3 + Si3 were statistically at par with I5 + Si3, indicating the role of Si in mitigating the negative impact of water shortage and improving these parameters. It is concluded that plants exhibited a positive response at irrigation levels I3 and I4 in combination with foliar-applied Si3, while irrigation levels lower than I3 with Si3 did not show positive improvement in crop productivity.

Introduction

Wheat has a leading role in human nutrition and animal feed in the world, with the largest harvested area (220.1 million hectares) and the second largest production (749.5 million tons) among cereals. In Pakistan, the area under wheat cultivation is 8.7 million hectares, which gives 25.4 million tonnes of grain production, and its contributions to value addition and gross domestic product are 9.1% and 1.7%, respectively (Economic Survey of Pakistan, 2018). Twenty-two percent of the people of Pakistan suffer from food shortage due to insufficient wheat production. There are several reasons that restrict wheat production; however, in the recent scenario, climate change has emerged as a big threat to wheat production in the world (FAO, 2011d). Abnormalities in the environment, like unpredictable and low rainfall especially at maturity and temperature severity at the tillering and milking stages, result in abiotic stress for wheat production. Drought stress, along with salinity, is one of the two main threats that have degraded about 20% of the arable land of the world, and especially of Pakistan (Munns and Tester, 2008). It has been reported that more than 40% of yield fluctuation of wheat can be attributed to climate change (heat waves and drought) at the global, national and subnational scales (Zampieri et al., 2017). Worldwide climate change is creating a series of problems that disturb crop production and has also affected the 58% of the population which is engaged directly or indirectly in agriculture (Anonymous, 2012).
Water stress at critical growth stages is a major hindrance to plant growth and development under different climatic conditions (Wang et al., 2014). Wheat is not equally susceptible to water stress throughout its life cycle; stress at stages such as flowering and grain development severely affects the crop and ultimately reduces yield (Bukhat, 2005; Sinclair, 2011). Water stress at early critical growth stages, such as crown root initiation and tillering, rapidly inhibits root and shoot growth, which further leads to stomatal closure, reducing the transpiration rate and CO2 assimilation during photosynthesis (Neumann, 2008). Results from different studies indicate that water stress around flowering mainly affects grain number-related traits, whereas water stress during grain filling impacts grain weight. This is consistent with other studies on wheat, where drought during stem extension caused floret and whole-spikelet death, and drought during grain filling reduced grain size and weight (Oosterhuis and Cartwright, 1983; Dorion et al., 1996; Ji et al., 2010; Pradhan et al., 2012). Water stress at the anthesis stage impairs pollination and ultimately reduces the number of grains (Showemimo and Olarewaju, 2007). Anther development and pollen production are particularly sensitive to abiotic stresses such as heat, leading to severe reductions in crop yield (Hedhly et al., 2009). Drought during the grain-filling stage influences assimilate partitioning and the activation of enzymes involved in starch and sucrose production (Sinclair, 2011), and also disturbs nutrient uptake and accumulation in wheat (Hattori et al., 2005). Water stress and high temperature also damage the female reproductive organs, leading to losses of fertility and production; pollen sterility is therefore not the sole determinant of fertility loss under high temperature and water scarcity (Fábián et al., 2019). In wheat, high temperature and water stress together reduced the photosynthetic rate by 66% to 93% compared with non-stress conditions; moderate water stress increased water use efficiency (WUE) by 24%, whereas high temperature decreased WUE by 34% (Hassan, 2006; Prasad et al., 2011). A sufficient supply of water at the anthesis stage enhances the photosynthetic rate and provides an additional period for carbohydrate translocation towards the grains, which increases grain size (Zhang and Oweis, 1999). Crop yield improvement depends upon water availability at the booting, flowering, grain-filling, and milking stages (Ashraf and Harris, 2004). Water scarcity at later stages decreases grain weight and number (Gupta et al., 2001). Chlorophyll content is enhanced during the vegetative stage under unlimited water supply (Lawson and Blatt, 2014). Water stress during early vegetative growth affects phenological stages such as stem elongation, leaf area expansion, and tillering, probably through declining CO2 assimilation in the leaf and slow nutrient mobilization to developing tissues caused by lower stomatal conductance, transpiration rate, and relative water content (Barnabas et al., 2008; Lipiec et al., 2013). Silicon (Si) availability to wheat is usually low owing to its immobility in plants (Hattori et al., 2005), and its accumulation depends on crop species and growth stage (Mecfel et al., 2007).
Most researchers have concluded that Si lowers the risk of diseases and improves photosynthetic rate and grain production in cereals under water stress (Shashidhar et al., 2008). During vegetative growth Si seems nonessential, but its application under stress conditions improves the growth of the family Gramineae, especially wheat. Studies of plant structure have shown that an adequate amount of Si is essential for cell differentiation and development (Liang et al., 2005). Wheat is also a known Si accumulator, which benefits its development under abiotic and biotic (disease and insect) stresses (Liang et al., 2003); under favourable conditions, however, the role of Si is minimal (Epstein, 2009; Gong et al., 2005). Foliar application of Si under drought develops tolerance in wheat by improving plant water use efficiency through reduced transpiration from the plant surface (Gao et al., 2006). Foliar-applied Si at tillering and anthesis enhances biomass and grain yield, respectively (Al-aghabary et al., 2005). Exogenous Si applied at 0.40–0.50% enhanced grain yield by about 50% under water stress (Harter and Barros, 2011). Plant tissue analysis showed that Si induces tolerance to water stress through improvement of the antioxidant defence system, which diminishes oxidative stress on cell metabolites (Gong and Chen, 2012). In the light of these studies, it was hypothesized that exogenous silicon application could reduce the adverse effects of water stress on wheat development and physiological traits. Accordingly, this study was planned to investigate the effect of exogenous Si on the yield and physiological traits of the wheat cultivar Punjab-2011 under planned water stress.

Experimental site and soil

The role of Si levels in the yield and physiological traits of wheat (cv. Punjab-2011) under water stress, imposed through combinations of irrigation scheduling, was studied in a greenhouse experiment at the College of Agriculture, University of Sargodha, Pakistan, during 2016–17 under the semi-arid climate (32.08°N latitude, 72.67°E longitude) of the Punjab, Pakistan. The soil was a sandy clay loam with the physico-chemical characteristics given in Table 1. Greenhouse temperature and humidity at noontime were maintained in the ranges 26–30°C and 60–65%, respectively.

Table 1. Physico-chemical characteristics of the experimental soil
Total soil N (mg kg-1): 4.12 ± 9.17 and 4.20 ± 8.34 — modified Kjeldahl method (Piper, 1966)
Available phosphorus (mg kg-1): Olsen's method (Jackson, 1973)
Available potassium (mg kg-1): 267 ± 10.12 and 269 ± 12.10 — flame photometric method (Jackson, 1973)
Values are means of four replicates followed by (±) standard error of the means.

Crop husbandry

Wheat seeds were sown in plastic trays of 27.94 cm width, 54.28 cm length, and 35.86 cm depth; each tray was filled with 12 kg of soil. Forty seeds of the wheat cultivar Punjab-2011 were sown in each tray. Fertilizer was applied as recommended for wheat, i.e. 120:100:60 kg ha-1 N:P:K as urea, DAP, and K2SO4, respectively. Nitrogen was applied in three splits: one third, along with the whole of the phosphorus and potassium fertilizer, was incorporated into the soil before sowing, and the remaining nitrogen was top-dressed in two equal splits at the tillering and anthesis stages. At the completion of germination, 20 seedlings per tray were maintained and irrigated according to the water stress treatments.

Experimental treatments and design

For treatment allocation, a completely randomized design (CRD) with factorial arrangement and four replications was adopted.
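To make the design concrete, the following minimal sketch (an illustration, not part of the original study) randomizes the 5 × 4 factorial treatment set over trays in a CRD with four replications. The tray numbering and random seed are hypothetical choices.

```python
# Sketch of a CRD randomization for the 5 x 4 factorial with 4 replications.
# Treatment codes follow the abstract: irrigation levels I1-I5, Si levels Si0-Si3.
import itertools
import random

irrigation = ["I1", "I2", "I3", "I4", "I5"]
silicon = ["Si0", "Si1", "Si2", "Si3"]
replications = 4

# 20 treatment combinations x 4 replications = 80 experimental units (trays).
units = [(i, s, r)
         for i, s in itertools.product(irrigation, silicon)
         for r in range(1, replications + 1)]

random.seed(1)          # fixed seed so the layout is reproducible
random.shuffle(units)   # CRD: every unit receives a tray position at random

for tray, (i, s, r) in enumerate(units[:5], start=1):  # show the first 5 trays
    print(f"tray {tray:2d}: {i} + {s} (rep {r})")
```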
Study treatments included water stress imposed at different critical growth stages (irrigation levels I1–I5) and foliar application of Si (Si0–Si3), as specified above.

For the determination of Si content, dried samples were ground to a fine powder. A 0.20 g plant sample was digested in a 50% NaOH and 50% H2O2 solution; the beakers were then placed on a hot plate at 150°C for 2 h to complete the digestion. The colorimetric molybdenum blue method was used to estimate Si in the digested samples (Elliot and Snyder, 1991). In a 50 ml volumetric flask, a solution was prepared containing 1 ml of the filtrate, 25 ml of acetic acid (20%), and 10 ml of ammonium molybdate. After five minutes, 5 ml of tartaric acid (20%) and 1 ml of reducing solution (1 g Na2SO3, 0.5 g 1-amino-2-naphthol-4-sulfonic acid, and 30 g NaHSO3 in 200 ml water) were added, and 20% citric acid was used to make the solution up to volume. Absorbance was read on a spectrophotometer (Shimadzu, Japan) at 650 nm.

The method of Barrs and Weatherley (1962) was used for the determination of relative water content. For this purpose, weighed fresh leaflets were immersed in distilled water for four hours in a petri plate. After four hours, the turgid weight of the leaflets was recorded, and the leaflets were then dried in an oven at 80°C for 48 hours to record their dry weight. The following formula was used for the calculation of relative water content (RWC):

RWC (%) = [(fresh weight − dry weight) / (turgid weight − dry weight)] × 100

Chlorophyll was measured with a chlorophyll meter (SPAD-502 Plus) between 10:00 AM and 02:00 PM. Each measurement was repeated three times and the average was used for analysis. The method of Tennant (1975) was used for the measurement of root length. For this purpose, five randomly selected plants were dug out with a sampling tool having a 7 cm sharp cutting tip. Soil and other residues were then carefully separated from the roots by gentle washing, and root length was recorded in centimeters from ground level to the root tips. The formulae of Ehdaie and Waines (1994) were used for the calculation of water use efficiency (WUE) and evapotranspiration efficiency (ETE), respectively.

Statistical analysis

Analysis of variance (ANOVA) was applied to the collected data using the statistical program SAS 9.1 (SAS Institute, 2008), and Duncan's Multiple Range test at P ≤ 0.05 was used for comparison of the means (Steel et al., 1997).

Results and Discussion

Foliar application of Si had a significant effect on wheat yield and yield components across irrigation levels (Table 2). Full irrigation (I5) produced 7% taller wheat plants than irrigation level I1 (Table 2). Moreover, Si3 under irrigation level I5 produced 15% taller plants than Si0 at the I1 level. Plant heights under I4 and I3 were statistically similar to I5 in combination with Si3. Among Si concentrations, Si3 gave 6% taller plants than Si0 (Table 2). Likewise, Si3 produced 14% more productive tillers than Si0 (Table 2), while I5 produced 9% more productive tillers than the water stress treatment I1. Moreover, Si3 combined with irrigation level I5 produced 24% more productive tillers than Si0 with irrigation level I1 (Table 2). The number of grains per spike under Si3 was 8% higher than under Si0 (Table 2). Irrigation level I5 produced 13% more grains per spike than irrigation level I1. Among irrigation × silicon interactions, I5 and I3 in combination with Si3 produced 18% more grains per spike than the I1 × Si0 interaction. Regarding 1000-grain weight, I5 produced 8% higher 1000-grain weight than I1, while I4 and I3 were statistically similar to I5. Among Si concentrations, Si3 produced 14% higher 1000-grain weight than Si0.
Irrigation level I5 in interaction with 1% Si produced 25% higher 1000-grain weight than Si0 with irrigation level I1. Irrigation levels I4 and I3 with Si0 were statistically similar to I5 with Si3 (Table 2). Biological and grain yields were 37% and 43% higher, respectively, in I5 than in I1 (Table 2). Irrigation levels I4 and I3 were statistically similar to I5. Exogenous application of 1% Si under irrigation level I5 produced 61% and 63% higher biological and grain yields, respectively, than water stress treatment I1 with Si0. Among Si concentrations, Si3 produced 44% and 48% higher biological and grain yields, respectively. Irrigation levels I4 and I3 with Si3 were statistically similar to I5 with Si3 (Table 2). Silicon concentration Si3 produced a 70% higher net assimilation rate (NAR) than Si0 (Table 2). The trend of Si application across irrigation levels showed that full irrigation with 1% Si produced 47% higher NAR than irrigation level I1 with Si0. Moreover, irrigation levels I4 and I3 with foliar-applied 1% Si were statistically similar to I5 with Si3. Likewise, irrigation level I5 produced 33% higher NAR than irrigation level I1.

In the present study, foliar-applied Si at the tillering and anthesis stages significantly improved the growth, yield, and physiological attributes of wheat grown under the different irrigation levels. Under water stress, plants enhance the activities of superoxide dismutase and peroxidase, which favours plant growth and yield (Noman et al., 2015). Water shortage at critical growth stages of wheat significantly reduces crop production by disturbing nutrient uptake and movement and the rates of respiration and photosynthesis (Gupta and Huang, 2014; Cattivelli et al., 2008). Nawaz et al. (2012) reported that moisture stress at an early growth stage, i.e. crown root initiation, markedly retarded phenological development owing to a lack of nutrient uptake, which resulted in lower crop production. Gupta et al. (2001) verified that water stress at the anthesis and booting stages resulted in a lower number of productive tillers due to a poor fertilization process; moreover, productive tillers were positively correlated with grain and biological yield when irrigation was applied at the anthesis stage. Consequently, water stress at either the reproductive or vegetative stage can affect physiological maturity through slow growth and development of productive tillers (Dhaka, 2003). It has been shown that the availability of irrigation at each critical growth stage results in the healthy development of physiological attributes owing to the timely release of essential amino acids (Zhang et al., 2017). Silicon enhances water uptake in plants by improving osmotic potential and aquaporin activity (Chen et al., 2011). Root hydraulic conductance reflects the water uptake capacity of roots (Steudle, 2000; Hattori et al., 2008). Different studies have verified that the negative effects of water stress on crops are reduced by the application of Si (Gong et al., 2005; Hattori et al., 2005). Silicon is remarkably effective in maintaining relative water content under water stress (Lux et al., 2002). Iannucci et al. (2002) reported that water stress lowers relative water content and nutrient uptake in the plant, whereas cultivars treated with Si performed excellently and maintained turgor pressure, through which plant growth and yield improved.
Data on Si concentration in wheat at the anthesis and grain formation stages (Table 2) revealed that irrigation levels and silicon concentrations had significant (p ≤ 0.05) effects at both stages. Among Si treatments, foliar-applied Si (Si3) gave the maximum plant Si concentration at the anthesis and grain formation stages, 40% and 42% higher, respectively, than Si0. Foliar-applied 1% Si with irrigation level I5 gave 49% and 60% higher Si contents in wheat at the anthesis and grain formation stages, respectively, compared with Si0 under I1. Irrigation level I5 performed better (33% at anthesis and 42% at the grain formation stage) than I1, while I4 and I3 also had high plant Si contents and were statistically similar to I5. The results thus show that Si concentration in plants at the anthesis and grain formation stages was significantly (P ≤ 0.05) increased by the application of Si3 (Table 2).

The relative water content (RWC) of leaves as influenced by irrigation levels and foliar-applied Si is presented in Table 3. The data indicated that I5 with Si3 had 20%, 7%, and 12% higher RWC than I1 with Si0 at 70, 95, and 120 DAS, respectively. Significantly (p ≤ 0.05) higher RWC, by 8%, 5%, and 3% at 70, 95, and 120 DAS, was recorded in Si3 than in Si0. Among irrigation levels, the I5 treatment showed 10%, 4%, and 4% higher relative water content than I1 at 70, 95, and 120 DAS (Table 3). Data on chlorophyll content (SPAD value) indicated that irrigation levels and foliar-applied Si significantly influenced chlorophyll content (Table 3). Significantly (p ≤ 0.05) greater root length, by 8%, was recorded in I5 than in I1 (Table 3). Moreover, irrigation levels I4 and I3 were statistically similar to I5 with respect to root length. Among Si concentrations, Si3 produced significantly (p ≤ 0.05) greater root length, by 8%, than the control Si0, while Si0 and Si1 were statistically similar to Si2. The interactive effect of irrigation levels and foliar-applied Si was non-significant (Table 3). Significantly (46%) higher WUE was recorded under I5 than I1 (Table 3). Application of Si3 under irrigation level I5 gave 72% higher WUE than I1 with Si0; moreover, Si3 showed 48% higher WUE than Si0. Regarding the evapotranspiration efficiency (ETE) of wheat, irrigation level I5 with Si3 showed 42% lower ETE than I1 with Si0 (Table 3). Irrigation level I5 showed 22% lower ETE than I1, while I5, I4, I3, and I2 were statistically similar to each other. Foliar-applied Si3 gave 24% lower ETE than Si0.

The above findings show a significant reduction in chlorophyll content under stress. Chlorophyll content is an important indicator of plant productivity in terms of biomass production (Wang and Huang, 2004). Roy Deluca (2013) stated that Si application to wheat under water stress significantly affected photosynthetic rate and root length compared with the control. With Si application, wheat was up to 60% and other crops up to 20% less affected by drought; Si increased root surface area, volume, activity, and total length, so that more water was absorbed from the soil and plant growth improved (Chen et al., 2011). Studies have revealed that Si causes a significant enhancement in water use efficiency by stimulating enzymatic, non-enzymatic, and antioxidative defence systems (Liang et al., 2003; Hattori et al., 2005).
The evapotranspiration demand of the wheat crop increases as a result of evaporation when a minimal and sparse plant population leaves more area exposed to sunlight, which results in reduced grain yield and water use efficiency. An optimum level of Si decreased leaf conductance and transpiration rate, which was associated with plant growth under drought conditions, but Si supply had no significant effect on cuticular conductance and transpiration, owing to the formation of Si polymers on the root (Gao et al., 2006).

Conclusion

It can be concluded that Si application significantly increased the biomass and grain yield of the wheat cultivar under the different irrigation levels, which was found to be associated with Si-induced increases in relative water content, chlorophyll content, root length, and water use efficiency. Therefore, application of Si is an effective way of increasing wheat production in arid or semi-arid areas.
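The derived indices reported above can be computed directly from the raw measurements. The sketch below implements the RWC formula of Barrs and Weatherley (1962) given in the methods; the WUE and ETE expressions are illustrative assumptions (output per unit of water consumed), not the exact definitions of Ehdaie and Waines (1994).

```python
# Minimal sketch of the derived indices used in this study.
def relative_water_content(fresh_g: float, turgid_g: float, dry_g: float) -> float:
    """RWC (%) = (fresh - dry) / (turgid - dry) * 100 (Barrs and Weatherley, 1962)."""
    return (fresh_g - dry_g) / (turgid_g - dry_g) * 100.0

def water_use_efficiency(grain_yield_g: float, water_used_l: float) -> float:
    """Assumed form: grain yield produced per litre of water consumed."""
    return grain_yield_g / water_used_l

def evapotranspiration_efficiency(biomass_g: float, et_l: float) -> float:
    """Assumed form: biomass produced per litre of evapotranspired water."""
    return biomass_g / et_l

# Example with made-up leaflet weights:
print(relative_water_content(fresh_g=1.20, turgid_g=1.45, dry_g=0.35))  # ~77.3 %
```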
2082 Profile of pediatric potentially avoidable transfers

OBJECTIVES/SPECIFIC AIMS: While hospital-to-hospital transfers of pediatric patients are often necessary, some pediatric transfers are potentially avoidable. Pediatric potentially avoidable transfers (PAT) represent a process with high costs and safety risks but few, if any, benefits. To better understand this issue, we described pediatric inter-facility transfers with early discharges. METHODS/STUDY POPULATION: We conducted a descriptive study using electronic medical record data at a single center over a 12-month period to examine characteristics of pediatric patients with a transfer admission source and early discharge. Among patients with early discharges, we performed descriptive statistics for PAT, defined as patient transfers with a discharge home within 24 hours without receiving any specialized tests, interventions, consultations, or diagnoses. RESULTS/ANTICIPATED RESULTS: Of the 2414 pediatric transfers, 31.2% were discharged home within 24 hours. Among transferred patients with early discharges, 348 patients (14.4% of total patient transfers) received no specialized tests, interventions, consultations, or diagnoses. Direct admissions were categorized as PAT 2.2-fold more frequently than transfers arriving to the emergency department. Among transferred direct admissions, PAT proportions in the neonatal intensive care unit (ICU), pediatric ICU, and non-ICU were 5.8%, 17.4%, and 27.3%, respectively. Respiratory infections, asthma, and fractures were the most common PAT diagnoses. DISCUSSION/SIGNIFICANCE OF IMPACT: Early discharges and PAT are relatively common among transferred pediatric patients. Further studies are needed to identify the etiologies and clinical impacts of PAT, with a focus on direct admissions given the high frequency of PAT among direct admissions to both the pediatric ICU and non-ICU.

OBJECTIVES/SPECIFIC AIMS: Opioid prescribing is common and increasing in certain areas of the country, with known risk of misuse and dependence. Our study examined the association of opioid prescription at discharge after hospitalization for acute coronary syndrome (ACS) or acute decompensated heart failure (ADHF) with emergency department (ED) care or all-cause readmission, intended healthcare utilization (follow-up with a physician within 30 d of discharge and cardiac rehab participation), and all-cause mortality. METHODS/STUDY POPULATION: The Vanderbilt Inpatient Cohort Study is a prospective cohort of hospitalized patients age >18 enrolled with either ACS or ADHF between 2011 and 2015 (index hospitalization). We excluded those who died during the index hospitalization, patients with hospitalization <24 hours, patients discharged to hospice care, and those who underwent coronary artery bypass surgery because of the high probability of receiving opioids. In addition, we limited the analyses to patients for whom we had complete covariate data. The primary predictor variable was an opioid prescription at the time of hospital discharge. We collected healthcare utilization behavior for 90 days after discharge, and mortality data until March 8, 2017. Time-to-event analysis using Cox proportional hazard models was performed for both unintended healthcare utilization behavior and mortality outcomes. Logistic regression was performed for intended healthcare utilization (adherence to follow-up appointments and cardiac rehabilitation).
All models were adjusted for demographic data, opioid use prior to the index hospitalization, severity of illness, and healthcare utilization prior to the index hospitalization. RESULTS/ANTICIPATED RESULTS: There were 501 patients discharged with an opioid prescription and 1994 with no opioid prescription at discharge. Among patients with opioids at discharge, 235 (47%) experienced unplanned healthcare events (71 ED visits and 164 readmissions), and among non-opioid patients, 775 (39%) experienced unplanned healthcare events (254 ED visits and 521 readmissions) (aHR: 1.06, 95% CI: 0.87, 1.28). Mortality in the opioid group was 131 versus 432 in the non-opioid group (aHR: 1.08, 95% CI 0.84, 1.39). Patients in the opioid-at-discharge group were less likely to attend follow-up visits or participate in cardiac rehab (OR: 0.69, 95% CI 0.52, 0.91, p = 0.009) compared with those not discharged on opioid medications. Sensitivity analysis of patients who were prescribed prehospital opioids (including prehospital opioids in the exposure group with postdischarge opioids) did not reveal a statistically significant increase in mortality (aHR: 1.09, 95% CI 0.91, 1.31) or unintended healthcare utilization (aHR: 1.12, 95% CI 0.89, 1.41) among opioid users. DISCUSSION/SIGNIFICANCE OF IMPACT: Morbidity and mortality related to opioid use are a public health concern. Our study demonstrates a statistically significant reduction in physician follow-up and participation in cardiac rehab among opioid users, both of which are known to decrease patient mortality. We did not find a statistically significant increase in unplanned healthcare utilization or mortality. Sensitivity analysis combining prehospital and posthospital opioid prescriptions did not reveal a statistically significant association between opioid use, hospital readmissions, or mortality. The hospital provides unique patient interactions where providers can make significant medical changes based on their patient's clinical status. Continuing to understand the association between opioid use, healthcare utilization, morbidity, and mortality in recently hospitalized cardiac patients will provide data to support reductions in total opioid dose to improve clinical outcomes.

2418 Post-traumatic stress symptoms in caregivers of pediatric hydrocephalus population

Kathrin Zimmerman, Alexandra Cutillo, Laura Dreer, Anastasia Arynchyna and Brandon G. Rocque, University of Alabama at Birmingham

OBJECTIVES/SPECIFIC AIMS: The goal of this study is to characterize traumatic events and post-traumatic stress symptom severity experienced by caregivers of children with hydrocephalus. Results will eventually be evaluated and compared with demographic and medical characteristics. This study is part of a larger research project that aims to (1) determine the prevalence of and risk factors for post-traumatic stress symptoms in pediatric hydrocephalus patients and their caregivers; and (2) develop a targeted intervention to mitigate its effects and pilot test the intervention. METHODS/STUDY POPULATION: Caregivers of children with hydrocephalus that had received surgical treatment (CSF shunt or ETV/CPC) were enrolled during routine follow-up visits in a pediatric neurosurgery clinic. Caregivers completed the PTSD Checklist for DSM-5 (PCL-5), a 20-item self-report measure that assesses the presence and severity of post-traumatic stress disorder (PTSD) symptoms.
RESULTS/ANTICIPATED RESULTS: Participant responses (n = 56) revealed that 57.14% of caregivers indicated that their most traumatic event was directly related to their child's medical condition. In total, 23.21% of caregivers did not specify their most traumatic event and 1.79% of caregivers indicated that they had never experienced a traumatic event. The median Total Symptom Severity Score was 11 (mean: 15.32 ± 14.92), and scores ranged from 0 to 67; 32.14% of caregivers scored 19 or greater, and 16.07% of caregivers scored 33 or greater, a value suggestive of a provisional diagnosis of PTSD. Severity scores by DSM-5 cluster were as follows: cluster B, intrusion symptoms (mean: 4.91 ± 4.77, median: 4, range: 0-20); cluster C, avoidance symptoms (mean: 1.27 ± 1.87, median: 0.5, range: 0-8); cluster D, negative alterations in cognition and mood (mean: 4.86 ± 6.07, median: 2, range: 0-22); and cluster E, alterations in arousal and reactivity (mean: 4.29 ± 4.07, median: 3, range: 0-17). DISCUSSION/SIGNIFICANCE OF IMPACT: Preliminary results from this study indicate that post-traumatic stress symptoms are prevalent among caregivers of children with hydrocephalus. These results suggest that psychosocial issues such as PTSS may be a significant problem in need of treatment that is not traditionally addressed as part of routine care for families of children with hydrocephalus. Characterizing post-traumatic stress symptoms in this population sets the foundation for the development of screening and treatment protocols for post-traumatic stress symptoms in caregivers of children with hydrocephalus. This study is the first step towards fundamentally improving routine clinical care and quality of life for patients with hydrocephalus and their caregivers by understanding and addressing the effects of traumatic stress.

2110 Prenatal near roadway air pollution exposure and early neurodevelopment in young Mexican-American children: Findings from the CHAMACOS prospective birth cohort study

William H. Coe (1), Jason Feinberg (2), Robert Grunier (3), Brenda Eskenazi (3) and Heather Volk (2); (1) Johns Hopkins University School of Medicine; (2) Johns Hopkins Bloomberg School of Public Health; (3) UC Berkeley School of Public Health

OBJECTIVES/SPECIFIC AIMS: Previous studies suggest that prenatal exposure to environmental pollutants can have an adverse effect on brain development. We examine the association between prenatal near roadway air pollution (NRAP) exposure and early neurodevelopment. METHODS/STUDY POPULATION: The Center for the Health Assessment of Mothers and Children of Salinas (CHAMACOS) Study is a prospective birth cohort that began in 1999 with 605 mother-child pairs of primarily Mexican-American descent. Maternal residence during pregnancy was geocoded using ArcGIS, and prenatal NRAP exposure was assigned using the CALINE4 line source dispersion model. We used composite Bayley Scale scores for cognitive and motor development, and created separate linear regression models at 6, 12, and 24 months of age.
RESULTS/ANTICIPATED RESULTS: After adjusting for relevant maternal and child characteristics, preliminary estimates suggest that prenatal NRAP exposure is associated with a nonsignificant increase in Bayley Scale scores at 6 and 24 months (cognitive: β = 0.13, p-value = 0.20 and motor: β = 0.08, p-value = 0.58 at 6 months; cognitive: β = 0.16, p-value = 0.42 and motor: β = 0.20, p-value = 0.25 at 24 months) and a nonsignificant decrease at 12 months (cognitive: β = −0.07, p-value = 0.64 and motor: β = −0.12, p-value = 0.56). DISCUSSION/SIGNIFICANCE OF IMPACT: Our preliminary findings do not suggest that prenatal NRAP exposure is associated with early cognitive development. Additional co-exposures known to affect neurodevelopment should be examined in this rural population.

Risk factors for prescription opioid misuse after traumatic injury in adolescents

Teresa M. Bell, Christopher A. Harle, Dennis P. Watson and Aaron E. Carroll

OBJECTIVES/SPECIFIC AIMS: The objective of this study is to determine predictors of and motives for sustained opioid use, prescription misuse, and nonmedical opioid use in the adolescent trauma population. METHODS/STUDY POPULATION: This is a prospective cohort study that will follow patients for 1 year and administer surveys to patients on prescription opioid usage; substance use; utilization of pain management and mental health services; mental and physical health conditions; and behavioral and social risk factors. Patient eligibility criteria include: (1) patient is 12-18 years of age; (2) admitted for trauma; (3) English-speaking; (4) resides within the Indianapolis, IN metropolitan area; and (5) consent can be obtained from a parent or guardian. Patients with severe brain injuries or other injuries that prevent survey participation will be excluded.
The patient sample will comprise 50 traumatically injured adolescents admitted for trauma who will be followed for 12 months after discharge. RESULTS/ANTICIPATED RESULTS: We expect that the results of this study will identify multiple risk factors for sustained opioid use that can be used to create targeted interventions to reduce opioid misuse in the adolescent trauma population. Clinical predictors such as opioid type, dosage, and duration that can be modified to reduce the risk of long-term opioid use will be identified. We expect to elucidate clinical, behavioral, and social risk factors that increase the likelihood that adolescents will misuse their medication and initiate nonmedical opioid use. DISCUSSION/SIGNIFICANCE OF IMPACT: Trauma is a surgical specialty that often has limited collaboration with behavioral health providers. Collaborative care models for trauma patients to adequately address the psychological impact of a traumatic injury have become more common in recent years. These models have primarily been concerned with the prevention of post-traumatic stress disorder. We would like to apply the findings of our research to better understand what motivates adolescents to misuse pain medications, as well as how clinical, individual, behavioral, and social factors affect medication usage. This may help identify patients at greater risk of developing a SUD by asking questions not commonly addressed in the hospital setting. For example, similar to how trauma centers have mandated that brief interventions on alcohol use be performed for center verification, screening patients on their social environment may identify patients at greater risk for SUD than assumed. The long-term goal would be to prevent opioid use disorders in injured adolescents by providing better post-acute care support, possibly by developing and implementing a collaborative care model that addresses opioid use. Additionally, we believe our findings could be applied in the acute care setting to help inform opioid prescribing and pain management methods in the acute phase of an injury. Genetic testing to determine which opioid to prescribe to pediatric surgical patients is starting to be done at some pediatric hospitals. Certain genes determine which specific opioid is most effective in controlling a patient's pain, and using the optimal opioid medication can also reduce the risk of overdose. Our findings may help refine prescribing patterns that could increase or decrease the likelihood of developing SUD in patients with certain genetic, clinical, behavioral, and social characteristics.

2492 Risk of readmission after discharge from skilled nursing facilities following heart failure hospitalization

Himali Weerahandi (1), Li Li (2), Jeph Herrin (2), Kumar Dharmarajan (2), Lucy Kim (3), Joseph Ross (4), Simon Jones (5) and Leora Horwitz (3); (1) NYU Lutheran, Brooklyn, NY, USA; (2) Yale School of Medicine, New Haven, CT, USA; (3) NYU School of Medicine, New York, NY, USA; (4) Yale University School of Medicine, New Haven, CT, USA; (5) School of Medicine, New York University, New York, NY, USA

OBJECTIVES/SPECIFIC AIMS: Determine the timing of risk of readmission within 30 days among patients first discharged to a skilled nursing facility (SNF) after heart failure hospitalization and subsequently discharged home. METHODS/STUDY POPULATION: This was a retrospective cohort study of patients with SNF stays of 30 days or less following discharge from a heart failure hospitalization. Patients were followed for 30 days following discharge from SNF.
We categorized patients based on SNF length of stay (LOS): 1-6 days, 7-13 days, and 14-30 days. We then fit a piecewise exponential Bayesian model with the outcome as time to readmission after discharge from SNF for each group. Our event of interest was unplanned readmission; death and planned readmissions were treated as competing risks. Our model examined two different time intervals following discharge from SNF: 0-3 days post SNF discharge and 4-30 days post SNF discharge. We report the hazard rate (credible interval) of readmission for each time interval. We examined all Medicare fee-for-service (FFS) patients 65 and older admitted from July 2012 to June 2015 with a principal discharge diagnosis of HF, based on methods adopted by the Centers for Medicare and Medicaid Services (CMS) for hospital quality measurement. RESULTS/ANTICIPATED RESULTS: Our study included 67,585 HF hospitalizations discharged to SNF and subsequently discharged home [median age, 84 years (IQR; 78-89); female, 61.0%]; 13,257 (19.2%) were discharged with home care and 54,328 (80.4%) without. Median length of SNF admission was 17 days (IQR; 11-22). In total, 16,333 (24.2%) SNF discharges to home were readmitted within 30 days of SNF discharge; median time to readmission was 9 days (IQR; 3-18). The hazard rate of readmission for each group was significantly increased on days 0-3 after discharge from SNF compared with days 4-30 after discharge from SNF. In addition, the hazard rate of readmission during the first 0-3 days after discharge from SNF decreased as the LOS in SNF increased. DISCUSSION/SIGNIFICANCE OF IMPACT: The hazard rate of readmission after SNF discharge following heart failure hospitalization is highest during the first 6 days home. Length of stay at SNF also has an effect on the risk of readmission immediately after discharge from SNF; patients with a longer length of stay in SNF were less likely to be readmitted in the first 3 days after discharge from SNF.

2240 Shared decision making in child health: A qualitative study of parents of children with medical complexity

Jody Lin, Catherine Clark, Bonnie Halpern-Felsher and Lee M. Sanders, Stanford University School of Medicine

OBJECTIVES/SPECIFIC AIMS: Children with medical complexity (CMC) comprise less than 5% of the pediatric population and over 40% of pediatric spending, yet receive poorer quality health care compared with other children. The American Academy of Pediatrics recently identified shared decision making (SDM) as a key quality indicator for CMC, but there is no consensus model for SDM in CMC. Objective: To create a model of SDM from the perspectives of parents of CMC. METHODS/STUDY POPULATION: Interviews with parents of CMC explored SDM preferences and experiences. Eligible parents were ≥18 years old, English-speaking or Spanish-speaking, with a CMC <12 years old. Interviews were recorded, transcribed, and analyzed by 3 independent coders for shared themes using grounded theory. RESULTS/ANTICIPATED RESULTS: Interviews were conducted with 31 parents [26 English speakers, median parent age 33 years (SD 11), median child age 3 years (SD 3.6)] in inpatient and outpatient settings. We identified specific, unique components of SDM that affect decision quality, the alignment of a decision with the parent's preferences and values. Themes included: concerns about uncertainty of the child's life trajectory, conflict during parent-provider communication, health system factors such as provider schedule, parent agency, and the influence of the source of information.
DISCUSSION/SIGNIFICANCE OF IMPACT: Our findings provide specific components of SDM unique to CMC that can inform future research and interventions to support SDM for parents and providers of CMC.
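As an illustration of the piecewise hazard comparison used in the SNF readmission study above, the following sketch (with simulated, made-up data, not the study's Medicare records) computes crude hazard rates, events per person-day at risk, in the 0-3 day and post-3 day windows after discharge.

```python
# Crude piecewise hazard rates on simulated time-to-readmission data.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
time_to_event = rng.exponential(scale=20.0, size=n)   # days to readmission
observed = time_to_event <= 30                         # censored at 30 days
time = np.minimum(time_to_event, 30.0)

def piecewise_hazard(time, observed, start, stop):
    """Events / person-days at risk within the window [start, stop)."""
    at_risk = time > start
    exposure = np.clip(time[at_risk], start, stop) - start
    events = observed[at_risk] & (time[at_risk] < stop)
    return events.sum() / exposure.sum()

h_early = piecewise_hazard(time, observed, 0.0, 3.0)
h_late = piecewise_hazard(time, observed, 3.0, 30.0)
print(f"hazard 0-3 d: {h_early:.4f}/day, 4-30 d: {h_late:.4f}/day")
```

With a constant true hazard (as simulated here), the two windows give similar estimates; an elevated early-window rate of the kind reported in the abstract would appear as h_early exceeding h_late.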
Smoothness of anisotropic wavelets, frames and subdivision schemes

This paper presents a detailed regularity analysis of anisotropic wavelet frames and subdivision. In the univariate setting, the smoothness of wavelet frames and subdivision is well understood by means of the matrix approach. In the multivariate setting, this approach has been extended only to the special case of isotropic refinement with the dilation matrix all of whose eigenvalues are equal in absolute value. The general anisotropic case has resisted a complete understanding: the matrix approach can determine whether a refinable function belongs to $C(\mathbb{R}^s)$ or $L_p(\mathbb{R}^s)$, $1 \le p<\infty$, but its H\"older regularity remained mysteriously unattainable. In this paper we show how to compute the H\"older regularity in $C(\mathbb{R}^s)$ or $L_p(\mathbb{R}^s)$, $1 \le p<\infty$. In the anisotropic case, our expression for the exact H\"older exponent of a refinable function reflects the impact of the variable moduli of the eigenvalues of the corresponding dilation matrix. In the isotropic case, our results reduce to the well-known facts from the literature. We provide an efficient algorithm for determining the finite set of the restricted transition matrices whose spectral properties characterize the H\"older exponent of the corresponding refinable function. We also analyze the higher regularity, the local regularity, the corresponding moduli of continuity, and the rate of convergence of the corresponding subdivision schemes. We illustrate our results with several examples.

Introduction

We study the multivariate refinement equation

ϕ(x) = Σ_{k∈Z^s} c_k ϕ(Mx − k), x ∈ R^s, (1)

with a compactly supported sequence of coefficients c_k ∈ R and with a general integer dilation matrix M ∈ Z^{s×s} all of whose eigenvalues are larger than one in absolute value. We do not make any assumptions on the stability of the integer shifts of ϕ. In this paper, we characterize continuous and L_p solutions of (1). Our main contribution is the exact expression for the Hölder exponent of ϕ in C(R^s) and in L_p(R^s), 1 ≤ p < ∞, see Theorems 1 and 7. In the anisotropic case, the Hölder exponent of ϕ reflects the influence of the invariant subspaces of M corresponding to its different-by-modulus eigenvalues. In the isotropic case, when all the eigenvalues of M are equal in absolute value, our results reduce to the well-known ones from the literature. We also estimate the modulus of continuity and analyze the Lipschitz, local, and higher regularity of continuous ϕ. In the univariate case, where M ≥ 2 is an integer, there are several efficient methods for determining the regularity of refinable functions. In [11,18,23] the authors compute precisely the Sobolev exponent of ϕ ∈ L_2(R). The so-called matrix approach yields the Hölder exponent of ϕ ∈ C(R) and, in addition, provides a detailed analysis of its moduli of continuity and of its local regularity [13,16,49,56]. An obstacle to the practical use of the matrix approach is the NP-hardness of the joint spectral radius computation. This problem, however, was successfully resolved for a large class of problems by recent results in [28,29,42], where the authors presented fast and efficient methods for the joint spectral radius computation. Indeed, the invariant polytope algorithm [28] estimates the joint spectral radius for the corresponding transition matrices of size up to 20 and, in most cases, even determines its precise value.
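Since the joint spectral radius is central to everything that follows, a crude numerical illustration may be useful. The sketch below is not the invariant polytope algorithm of [28]; it only computes the elementary brute-force bounds max ρ(Π)^{1/k} ≤ ρ(A) ≤ max ‖Π‖^{1/k} over all products Π of length k, for a hypothetical pair of matrices.

```python
# Brute-force lower/upper bounds on the joint spectral radius of a family.
import itertools
import numpy as np

def jsr_bounds(matrices, k):
    """Bounds from all products of length k: lower via spectral radii,
    upper via spectral norms (valid for any fixed k by submultiplicativity)."""
    upper, lower = 0.0, 0.0
    for combo in itertools.product(matrices, repeat=k):
        P = np.linalg.multi_dot(combo) if k > 1 else combo[0]
        upper = max(upper, np.linalg.norm(P, 2) ** (1.0 / k))
        lower = max(lower, max(abs(np.linalg.eigvals(P))) ** (1.0 / k))
    return lower, upper

A = [np.array([[0.5, 0.3], [0.0, 0.4]]),
     np.array([[0.4, 0.0], [0.2, 0.5]])]
for k in (1, 2, 4, 6):
    print(k, jsr_bounds(A, k))   # the two bounds tighten as k grows
```

The exponential cost in k is exactly the practical obstacle mentioned above, which the methods of [28,29,42] are designed to overcome.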
The generalization of the matrix approach to the multivariate case turned out to be a difficult task for general dilation matrices. The special case of isotropic dilation is currently fully understood, see [4,5,6,11,18,21,23,30,31,32,36,38,57]. Several partial results in the anisotropic case are also available: for characterizations of continuity and L_p, 1 ≤ p < ∞, regularity of ϕ see e.g. [3,33]; for estimates for the Hölder exponent of ϕ see e.g. [3,33]. The reason for the difficulty of the anisotropic case is natural and hardly avoidable. In the univariate case, say M = 2, the distance between two points x, y ∈ R can be expressed in terms of their binary expansions. The distance between the values ϕ(x) and ϕ(y) depends on the behavior of the products of certain square matrices derived from c_k, k ∈ Z. These two observations establish a correlation between |x − y| and |ϕ(x) − ϕ(y)|, which leads to the formula for the Hölder exponent of ϕ. This summarizes the essence of the matrix approach. In the multivariate case, one can similarly estimate the distance between ϕ(x) and ϕ(y) by suitable matrix products. The problem occurs at an unexpected point: the expression for the distance between x, y ∈ R^s. One can try to use the corresponding M-adic expansions with a certain set of digits from Z^s, but such expansions do not provide a clear estimate for the distance between x and y. Indeed, unless the matrix M is isotropic, multiplication by a high power of M can enlarge distances differently in different directions. Hence, the points M^ℓ x and M^ℓ y, ℓ ∈ N, whose M-adic expansions are essentially the same, may have different asymptotic behavior as ℓ → ∞. Remarkably simple examples show that a direct analogue of the isotropic formula for the Hölder exponent does not hold in the anisotropic case. Moreover, unless M is isotropic, this formula never holds for Lipschitz refinable functions, see section 3.1. Nevertheless, there are ways of treating the anisotropic case. In [12], the authors consider special anisotropic Sobolev spaces. We put emphasis on incorporating the spectral properties of the dilation matrix M into the expression for the Hölder exponent of ϕ. Furthermore, we get rid of the M-adic expansions and base our analysis on geometric properties of tilings generated by M. Our paper is organized as follows. In section 3, we characterize the continuity and determine the Hölder regularity of multivariate refinable functions, see Theorems 1 and 2. In subsection 3.2, we provide an algorithm for the construction of continuous solutions of (1). We consider several examples and list several important special cases of Theorem 1 in subsection 3.1. The crucial steps of the proofs and the actual proofs of Theorems 1 and 2 are given in subsections 3.3 and 3.5. We illustrate our results by numerical examples in subsection 3.6. In section 4, we show how to factorize smooth refinable functions and compute the Hölder exponents of their directional derivatives. Sections 5 and 6 deal with the moduli of continuity of continuous refinable functions and with determining their local regularity. In section 7, we analyze the existence of L_p-solutions of (1). We show that a direct analogue of the formula for the Hölder exponent (i.e. replacing the joint spectral radius by the p-radius) does not hold in L_p, 1 ≤ p < ∞. To characterize the L_p Hölder exponent of ϕ, we consider extended transition matrices, see subsections 7.3 and 7.4.
In section 8, we derive the expression for the convergence rate of subdivision. We show that, in the anisotropic case, the convergence rate of subdivision and the Hölder exponent of the corresponding refinable function ϕ cannot be related similarly to the isotropic case, even if ϕ is stable.

Background and notation

We use standard notation for the function spaces C, C^k, L_p, 1 ≤ p < ∞; the space of vector-valued functions f : X → R^n with components belonging to L_p is denoted by L_p(X, R^n). We simply write L_p(X) if the range space is fixed. The Schwartz space of smooth rapidly decreasing functions over R^s is denoted by S, and S′ is the space of tempered distributions (distributions over S or distributions of slower growth); by µ(X) we denote the Lebesgue measure of a set X ⊂ R^n; by |·| we denote either the modulus of a complex number or the cardinality of a finite set. The norm ‖·‖ in finite-dimensional spaces is always Euclidean, unless stated otherwise.

Spectral properties of the dilation matrix

We make the standard assumption that the integer dilation matrix M ∈ Z^{s×s} is expansive, i.e., all its eigenvalues are larger than 1 in absolute value. Hence, m = |det M| ≥ 2. Among the eigenvalues 1 < |λ_1| ≤ ⋯ ≤ |λ_s| of M, exactly n_i of them are equal in absolute value to r_i, i = 1, …, q(M) ≤ s. If M is isotropic, then q(M) = 1. For i = 1, …, q(M), let J_i ⊂ R^s be the root subspaces of M corresponding to the eigenvalues of modulus r_i. Thus, dim(J_i) = n_i and the operator M|_{J_i} has all its eigenvalues equal to r_i in absolute value. The space R^s is a direct sum of the subspaces J_1, …, J_{q(M)}. There exists an invertible transformation B : R^s → R^s such that M has the block diagonal structure B^{-1} M B = diag(M_1, …, M_{q(M)}), where each block M_i acts on the corresponding root subspace and has all its eigenvalues of modulus r_i.

Dilation matrix and tiles

The matrix M splits the integer lattice Z^s into m equivalence (quotient) classes defined by the relation x ∼ y ⇔ y − x ∈ MZ^s. Choosing one representative d_i ∈ Z^s from each equivalence class, we obtain a set of digits D(M) = {d_i : i = 0, …, m − 1}. We always assume that 0 ∈ D(M). The standard choice is to take D(M) = Z^s ∩ M[0, 1)^s. For every integer point d ∈ Z^s, we denote by M_d the affine operator M_d x = Mx − d, x ∈ R^s. We use the notation 0.d_1 d_2 d_3 … for the M-adic expansion Σ_{j≥1} M^{-j} d_j with digits d_j ∈ D(M), and denote by G the set of all such points. By [26,27], for every expansive integer matrix M and for an arbitrary set of digits D(M), the set G is a compact set with a nonempty interior and possesses, in particular, the following properties: c) the indicator function χ = χ_G(x) of G satisfies the refinement equation χ(x) = Σ_{d∈D(M)} χ(Mx − d), i.e., the integer shifts of χ cover R^s with µ(G) layers; e) µ(G) = 1 if and only if the function system {χ(· + k)}_{k∈Z^s} is orthonormal. If µ(G) = 1, then G is called a tile. The integer shifts of a tile define a tiling.

Definition 1 A tiling generated by an integer expansive matrix M and by a set of digits D(M) is a collection of sets G = {k + G}_{k∈Z^s} such that a) the union of the sets in G covers R^s and b) µ((ℓ + G) ∩ (k + G)) = 0 for all ℓ ≠ k.

Not every M possesses a digit set D(M) such that G is a tile. Those situations, however, are rare. For instance, a digit set generating a tile always exists in the cases s = 2, 3, and also for arbitrary s under the extra assumption |det M| > s, which is quite general for integer expansive matrices [40]. See [3,41] for more details. Thus, in this paper, we assume that G is a tile. We denote by G_k = {M^{-k}(j + G) : j ∈ Z^s}, k ∈ N, the corresponding tiling of R^s by contracted tiles.

Refinable functions and the transition operator

A compactly supported distribution ϕ ∈ S′(R^s) satisfying equation (1) is called a refinable function.
It is well known that a solution of (1) with ∫_{R^s} ϕ(x) dx ≠ 0 exists if and only if Σ_{k∈Z^s} c_k = m. We assume further that the coefficients of (1) satisfy the sum rules of order one:

Σ_{k∈Z^s} c_{Mk+d} = 1 for every d ∈ D(M). (4)

These conditions arise naturally in the context of subdivision and are necessary for the existence of stable refinable functions [4]. Consider the transition operator

T : f ↦ Σ_{k∈Z^s} c_k f(M · − k). (5)

For every compactly supported function f ∈ S′ such that ∫_{R^s} f(x) dx = 1, the sequence {T^j f}_{j∈N} converges to ϕ in the space S′ [3]. The space of distributions supported on the compact set K = {Σ_{j≥1} M^{-j} k_j : k_j ∈ supp(c)} is invariant under T. Hence, for f ∈ S′(K), we have T^j f ∈ S′(K) for all j ∈ N. Therefore, the limit ϕ ∈ S′(K). Thus, supp ϕ ⊂ K, see [3, Proposition 2.2].

Definition 2 A finite set Ω ⊂ Z^s is a minimal subset of Z^s with the property K ⊂ Ω + G. We denote N = |Ω|.

It is shown easily that M_d^{-1}(Ω + G) ⊂ Ω + G for every d ∈ D(M). The main idea of the matrix approach is to pass from a function f : R^s → R to the vector-valued function

v(x) = (f(x + k))_{k∈Ω}, x ∈ G. (7)

Then the transition operator (5) restricted to the space of functions supported on Ω + G becomes the self-similarity operator A:

[Av](x) = T_d v(Mx − d), x ∈ M^{-1}(G + d), d ∈ D(M), (8)

where T_d are the N × N transition matrices defined by

(T_d)_{α,β} = c_{Mα − β + d}, α, β ∈ Ω. (9)

The rows and columns of the matrices T_d are enumerated by elements from the set Ω. We denote T = {T_d : d ∈ D(M)}. The refinement equation becomes the self-similarity equation Av = v for the vector-valued function v = v_ϕ in (7).

Important subspaces of R^N

We consider the following affine subspace of the space R^N: V = {u ∈ R^N : Σ_{k∈Ω} u_k = 1}. It is well known that every compactly supported refinable function ϕ ∈ S′ such that ∫_{R^s} ϕ(x) dx = 1 possesses the partition of unity property:

Σ_{k∈Z^s} ϕ(x + k) = 1, x ∈ R^s, (11)

(see, e.g. [3,4]). Hence, if ϕ is continuous, then v(x) ∈ V for all x ∈ G. In particular, v(0) ∈ V. For a summable refinable function, v(x) ∈ V for almost all x ∈ G. We denote the linear part of the subspace V by W = {u ∈ R^N : Σ_{k∈Ω} u_k = 0}. Finally, every continuous refinable function defines the space of differences of the vector-valued function v:

U = span{v(y) − v(x) : x, y ∈ G}, n = dim U. (12)

Since v(x) ∈ V for all x ∈ G, we have U ⊂ W, and, therefore, n ≤ N − 1. The sum rules (4) imply that the column sums of each matrix T_d are equal to one. Therefore, T_d V ⊂ V and T_d W ⊂ W. Thus, V is a common affine invariant subspace of the family T and W is its common linear invariant subspace. In particular, the restrictions of the operators T_d, d ∈ D(M), to the subspace U are well defined. For a fixed basis of U, we denote by A = {A_d : d ∈ D(M)} the set of the associated n × n matrices. If the family T is irreducible on W, then A = T|_W. We also consider the following subspaces of the space U:

U_i = span{v(y) − v(x) : x, y ∈ G, y − x ∈ J_i}, i = 1, …, q(M). (14)

Note that the U_i are nonempty, due to the interior int(G) of G being nonempty. It is easily seen that the spaces U_1, …, U_{q(M)} span the whole space U, but their sum may not be direct. Indeed, the subspaces U_i, i = 1, …, q(M), may have nontrivial intersections. For example, they can all coincide with U. The following result shows that all U_i are invariant under A.

Lemma 1 Each subspace U_i, i = 1, …, q(M), is invariant under every A_d ∈ A.

Proof. Fix i and set L = U_i. For x, y ∈ G with y − x ∈ J_i, we have A_d(v(y) − v(x)) = v(M_d^{-1}y) − v(M_d^{-1}x), and M_d^{-1}y − M_d^{-1}x = M^{-1}(y − x) ∈ J_i, since J_i is invariant under M. Hence, A_d(v(y) − v(x)) ∈ L for each pair (x, y), and, therefore, A_d u ∈ L for all u ∈ L. ✷

2.5 Joint spectral radius

Definition 3 The joint spectral radius of a finite family A of linear operators A_d is defined by

ρ(A) = lim_{k→∞} max{ ‖A_{d_1} ⋯ A_{d_k}‖^{1/k} : d_1, …, d_k ∈ D(M) }.

This limit always exists and does not depend on the operator norm [58]. The joint spectral radius measures the simultaneous contractibility of the operators from A. Indeed, ρ(A) < 1 if and only if there exists a norm in R^n in which all A ∈ A are contractions. In general, ρ(A) ≥ max_{A∈A} ρ(A). We denote ρ_i = ρ(A|_{U_i}), i = 1, …, q(M).

Continuous solutions and Hölder regularity

In this section, in Theorem 1, we characterize the continuity of a solution ϕ of the refinement equation (1) in terms of the spectral properties of A and determine the exact Hölder exponent of ϕ. For the reader's convenience, we start by briefly listing the crucial results of this section. The proof of Proposition 1 is given in subsection 3.2.
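Before turning to those results, the matrix-approach ingredients can be made concrete numerically. The following sketch (our illustration, not part of the paper) assembles the transition matrices T_d from a toy mask via the rule (9) and verifies that the sum rules (4) force unit column sums. The mask, digit set, and Ω below are hypothetical choices for M = 2I in dimension two; this particular mask is that of the indicator of the unit square.

```python
# Assemble transition matrices (T_d)_{a,b} = c_{Ma - b + d} from a toy mask.
import numpy as np

M = np.array([[2, 0], [0, 2]])                              # isotropic toy dilation
c = {(0, 0): 1.0, (1, 0): 1.0, (0, 1): 1.0, (1, 1): 1.0}    # mask, sum = m = 4
digits = [(0, 0), (1, 0), (0, 1), (1, 1)]                   # one digit per coset
omega = [(0, 0), (1, 0), (0, 1), (1, 1)]                    # hypothetical Omega

def transition_matrix(d):
    T = np.zeros((len(omega), len(omega)))
    for i, a in enumerate(omega):
        for j, b in enumerate(omega):
            idx = tuple(M @ np.array(a) - np.array(b) + np.array(d))
            T[i, j] = c.get(idx, 0.0)   # coefficients outside supp(c) vanish
    return T

for d in digits:
    T = transition_matrix(d)
    print(d, "column sums:", T.sum(axis=0))  # sum rules (4): all equal to one
```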
Proposition 1 Let v_0 ∈ V be an eigenvector of T_0 associated to the eigenvalue 1. If ϕ ∈ C(R^s), then U is the smallest (by inclusion) common invariant subspace of the matrices T_d, d ∈ D(M), that contains the vectors T_d v_0 − v_0, d ∈ D(M).

Remark 1 Recall that 0 ∈ D(M), which justifies the notation T_0. The existence of the eigenvector v_0 ∈ V of T_0 associated to the eigenvalue 1 follows from the continuity of ϕ (which implies, by (11), that T_0 v(0) = v(0)) and from the fact that v(0) ∈ V. Proposition 1 yields an equivalent definition of U, which we use in the sequel.

Definition 4 Let v_0 ∈ V be an eigenvector of T_0 associated to the eigenvalue 1. The space U is the minimal common invariant subspace of the m matrices T_d, d ∈ D(M), that contains the vectors T_d v_0 − v_0, d ∈ D(M).

Note that, due to the sum rules (4), the column sums of each T_d are equal to one. Hence, each T_d has an eigenvalue one. Even if the eigenvalue 1 is not simple, Proposition 2 in subsection 3.2 guarantees that there exists at most one eigenvector v_0 ∈ V such that ρ(A) < 1 for U as in Definition 4. Thus, the subspace U is always well defined, unless the refinement equation does not possess a continuous solution. For the sake of simplicity, we make the following assumption.

Assumption 1 The matrix T_0 has a simple eigenvalue 1.

Recall that ρ_i = ρ(A|_{U_i}), where U_1, …, U_{q(M)} are the subspaces defined in (14). Now we are ready to formulate the main result of this section.

Theorem 1 A refinable function ϕ belongs to C(R^s) if and only if ρ(A) < 1. In this case,

α_ϕ = min_{i=1,…,q(M)} log_{1/r_i} ρ_i. (15)

The proof of (15) is based on Theorem 2. To state it, we define the Hölder exponent of ϕ along a linear subspace J ⊂ R^s by

α_ϕ(J) = sup{ α ≥ 0 : |ϕ(x + h) − ϕ(x)| ≤ C ‖h‖^α for all x ∈ R^s and all h ∈ J }.

Theorem 2 If ϕ ∈ C(R^s), then for each i = 1, …, q(M), we have α_ϕ(J_i) = log_{1/r_i} ρ_i.

Remark 3 The identity (15) emphasizes the influence of the spectral structure of the dilation matrix M on the regularity of the solution ϕ. Recall that, in the univariate case, the Hölder exponent is given by α_ϕ = log_{1/r} ρ(A), where M = r ≥ 2 is the corresponding dilation factor. In the multivariate case, the Hölder exponent is equal to the minimum of several such values taken over the different dilation coefficients r_i on the corresponding subspaces J_i of M. In special, favorable multivariate cases, the expression in (15) becomes α_ϕ = log_{1/ρ(M)} ρ(A) and thus resembles the univariate case. This happens, for instance, when the matrix M is isotropic, i.e. |λ_1| = ⋯ = |λ_s| = ρ(M), in particular, when M = rI, r ≥ 2. Another favorable situation is when the matrices in A do not possess any common invariant subspace. However, the need for the minimum in (15) is not exceptional. It is of crucial importance, e.g., for anisotropic refinable Lipschitz continuous functions ϕ, see Corollary 3 in subsection 3.1.

Special cases of Theorem 1 and examples

To compare the result of Theorem 1 with the known results from the wavelet and subdivision literature, we need to define the stability of ϕ.

Definition 5 A compactly supported function ϕ is called stable if its integer shifts {ϕ(· + k)}_{k∈Z^s} form a Riesz basis of their closed linear span.

The univariate case (s = 1). In this case, the dilation factor is M ≥ 2 and M = m = r. Theorem 1 becomes the well-known statement that α_ϕ = log_{1/r} ρ(A). If ϕ is stable, then we have ρ(A) = ρ(T|_U) = ρ(T|_W) even if U ≠ W (see [4]). The space U was completely characterized in [50], and it was shown that every refinement equation can be factorized to the case U = W. In the multivariate case, however, there is no factorization procedure, and some equations, even with stable solutions, cannot be reduced to the case U = W, see Example 1 below.

The case s ≥ 2 with isotropic dilation matrix. Since q(M) = 1, it follows that U_1 = U. Theorem 1 then implies the following well-known fact.

Corollary 1 Let M be isotropic. Then a refinable function ϕ belongs to C(R^s) if and only if ρ(A) < 1 and, in this case, α_ϕ = log_{1/ρ(M)} ρ(A).
The irreducible case with $s\ge 2$. The dilation matrix $M$ can be anisotropic, i.e. the number $q(M)$ of distinct moduli of eigenvalues of $M$ satisfies $q(M)>1$. We say that the set of matrices $\mathcal{A}=\mathcal{T}|_U$ is irreducible if they do not possess any common invariant subspace. Another corollary of Theorem 1 states that if the family $\mathcal{A}$ is irreducible, then $\alpha_\varphi=\log_{1/\rho(M)}\rho(\mathcal{A})$.

The irreducibility assumption fails, however, in many important cases. For instance, if $\varphi$ is a tensor product of two refinable functions, then $\mathcal{A}$ is always reducible. After Example 1 one may hope that the case of a reducible family $\mathcal{A}$ is exceptional, and the equality $\alpha_\varphi=\log_{1/\rho(M)}\rho(\mathcal{A})$ actually holds for most refinable functions. On the contrary, the result of Corollary 3 shows that the situation when the isotropic formula fails is rather generic.

Corollary 3 If the matrix $M$ is anisotropic and the refinable function $\varphi\ne 0$ is Lipschitz continuous, then $1=\alpha_\varphi>\log_{1/\rho(M)}\rho(\mathcal{A})$ and the family $\mathcal{A}$ is reducible.

The case of a dominant invariant subspace. In practice, this case is much more generic than the irreducible case. Take a basis of a dominant subspace $U'$ and complement it to a basis of $U$. Let $B$ be the $n\times n$ matrix containing these basis elements of $U$. Then every matrix $A_d\in\mathcal{A}$ in this basis has the block lower triangular form

$$B^{-1}A_d B \;=\; \begin{pmatrix} A_d' & 0 \\ * & A_d'' \end{pmatrix}, \qquad A_d' = A_d|_{U'}.$$

By Definition 6, $\rho(\mathcal{A}|_{U'})=\rho(\mathcal{A})$. Furthermore, since any common invariant subspace of $\mathcal{A}$ contains $U'$, it follows that the joint spectral radius of $\mathcal{A}$ restricted to any common invariant subspace is equal to $\rho(\mathcal{A})$. Therefore, we have proved the following result.

Construction of the space $U$ and of the continuous refinable function $\varphi$

The continuity of the refinable function $\varphi$ is characterized in terms of the joint spectral radius of the matrices $T_d$, $d\in D(M)$, restricted to the common invariant subspace $U$ in Definition 4. In this section, we answer two crucial questions: how to determine the space $U$ and how to construct the corresponding continuous refinable function $\varphi$. In many cases $U$ coincides with $W$. In the univariate case, the algorithm for determining the space $U$ was elaborated in [13]. In this section, we present its multivariate analogue and explain several significant unavoidable modifications.

Algorithm for construction of the space $U$

Algorithm 1: For a given set of transition matrices $T_d$, $d\in D(M)$:

1. Step: Find an eigenvector $v_0\in V$ of $T_0$ associated to the eigenvalue 1.

2. Step: Set $U^{(1)}=\operatorname{span}\{\,T_d v_0-v_0 \;:\; d\in D(M)\,\}$.

3. Step: Repeat $U^{(k+1)}=U^{(k)}+\operatorname{span}\{\,T_d u \;:\; u\in U^{(k)},\ d\in D(M)\,\}$ for $1\le k\le N-1$ until $U^{(k+1)}=U^{(k)}$; then set $U=U^{(k)}$.

Note that the choice of $1\le k\le N-1$ is imposed by the fact that $\dim U\le N-1$ and that, by construction, at least one extra element is added to $U^{(k+1)}$ before the algorithm terminates.

Remark 4 In practice, one would first determine a basis of $U^{(1)}$. Then, in 3. Step for $1\le k\le N-1$, this basis is successively extended by the vectors $T_d u^{(j)}$ as long as the extended set stays linearly independent. This extended set provides a basis $\{u^{(j)} : j=1,\dots,r_k\}$ for $U^{(k)}$. The algorithm terminates if, in the $k$th iteration, $T_d u\in U^{(k)}$ for every vector $u$ from this basis and every $d\in D(M)$. Then $U^{(k+1)}=U^{(k)}$ and we set $U=U^{(k)}$ (a Python sketch of this procedure is given after this subsection).

Note first that the existence of the eigenvector $v_0\in V$ of the matrix $T_0$ with the eigenvalue one (in 1. Step of the Algorithm) follows from continuity of the refinable function (see Remark 1). Hence, if 1. Step is impossible, i.e., the eigenvector $v_0$ does not exist, then the solution of the refinement equation (1) is not continuous.

Secondly, we show that the space $U$ in Algorithm 1 coincides with the space in (12), i.e., we prove Proposition 1 stated at the beginning of section 3. To do that we define the sets of M-adic points

$$Q_k \;=\; \Bigl\{\, 0.d_1\dots d_k = \sum_{j=1}^{k} M^{-j} d_j \;:\; d_j\in D(M) \,\Bigr\}, \qquad k\in\mathbb{N}. \qquad (19)$$

Every point from $Q_k$ belongs to the set $G$. Thus, the set $Q_k$ contains $m^k$ points. The sets in (19) are nested, i.e. $Q_k\subset Q_{k+1}$ for all $k\ge 1$. The nestedness follows since each point $0.d_1\dots d_k$ equals $0.d_1\dots d_k 0$ (recall that $0\in D(M)$). The set

$$Q \;=\; \bigcup_{k\in\mathbb{N}} Q_k \qquad (20)$$

is dense in $G$.
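The following Python sketch mirrors the basis-extension procedure of Algorithm 1 and Remark 4 as reconstructed above; the Gram-Schmidt bookkeeping and the numerical tolerance are implementation choices, not part of the paper.

```python
import numpy as np

def minimal_invariant_subspace(T_list, v0, tol=1e-10):
    """Algorithm 1 sketch: minimal common invariant subspace of the T_d
    containing the difference vectors T_d v0 - v0 (Definition 4).

    Returns an orthonormal basis (as columns) of U, built by repeatedly
    applying every T_d to the current basis and keeping new directions.
    """
    def extend(basis, w):
        # Gram-Schmidt step: keep the part of w orthogonal to span(basis).
        for b in basis:
            w = w - (b @ w) * b
        norm = np.linalg.norm(w)
        if norm > tol:
            basis.append(w / norm)
            return True
        return False

    basis = []
    for T in T_list:
        extend(basis, T @ v0 - v0)
    grew = True
    while grew:                       # at most N - 1 rounds are needed
        grew = False
        for T in T_list:
            for b in list(basis):
                grew |= extend(basis, T @ b)
    return np.column_stack(basis) if basis else np.zeros((len(v0), 0))
```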
Proof of Proposition 1. Let $U$ be as in Definition 4. It suffices to show that $v(y)-v(x)\in U$ for all $x,y\in Q$; by the density of $Q$ in $G$ and the continuity of $v$, the claim then extends to all of $G$. We argue by induction. Assume the claim is true for some $k\in\mathbb{N}$. Take arbitrary $x,y\in Q_{k+1}$. For the point $x$ there exist $d\in D(M)$ and $a\in Q_k$ such that $x=M^{-1}(a+d)$; by (11), $v(x)=T_d v(a)$, and hence $v(x)-v_0=T_d\bigl(v(a)-v_0\bigr)+\bigl(T_d v_0-v_0\bigr)\in U$. Similarly we take the corresponding point $b\in Q_k$ for the point $y$ and prove that $v(y)-v_0\in U$; therefore $v(y)-v(x)\in U$. This completes the proof. ✷

The proof of Proposition 1 also implies that the spaces $U^{(k)}$ defined in the algorithm above are of the form

$$U^{(k)} \;=\; \operatorname{span}\{\, v(y)-v(x) \;:\; x,y\in Q_k \,\}.$$

Algorithm for construction of a continuous $\varphi$. Due to the fact that the set $Q$ in (20) is dense in $G$, a slight modification of Algorithm 1 yields a method for the step-by-step construction of the vector-valued function $v=v_\varphi$ defined on $G$ or, equivalently, of the function $\varphi$.

Algorithm 2:

k. Step: Define $v(0.d_1\dots d_k)=T_{d_1}\cdots T_{d_k} v_0$ for all $d_1,\dots,d_k\in D(M)$.

The piecewise-constant extension $v_k$ of these values is an approximation of $v$, and the difference $\|v-v_k\|_\infty$ can be efficiently estimated by the joint spectral radius of the family $\mathcal{A}$. This yields a linear rate of convergence of Algorithm 2. See section 8 for more details.

The last result of this section, Proposition 2, ensures that $U$ is well defined even if the eigenvalue 1 of the matrix $T_0$ is not simple.

Proposition 2 For an arbitrary refinement equation, the matrix $T_0$ has at most one, up to normalization, eigenvector $v_0\in V$ associated with the eigenvalue 1 such that, for $U$ in Definition 4, we have $\rho(\mathcal{T}|_U)<1$. If such $v_0\in V$ exists, then $\varphi$ is continuous and $v_0=v_\varphi(0)$.

Proof. If such an eigenvector $v_0$ exists, then by Theorem 1 the refinable function is continuous and, by Proposition 1, $U=\operatorname{span}\{v(y)-v(x) : x,y\in G\}$. By Algorithm 2, there exists a refinable function $\varphi$ such that $v_0=v_\varphi(0)$. If there is another eigenvector $\tilde v_0\in V$ with this property, then, by Algorithm 2, it generates another refinable function $\tilde\varphi$ for which $\tilde v_0=v_{\tilde\varphi}(0)$. By the uniqueness of the solution of the refinement equation, these two solutions may only differ by a constant factor; hence, the vectors $v_0$ and $\tilde v_0$ are collinear. ✷

Road map of our main results

We would like to emphasize that, to tackle the anisotropic case, we use geometric properties of tilings rather than the M-adic expansions of points in $\mathbb{R}^s$ (the latter being a successful strategy in the isotropic case). Our key contribution is Theorem 2, which finally reveals the delicate dependency of the Hölder exponent of a refinable function on its Hölder exponents along the subspaces $J_i$, $i=1,\dots,q(M)$. Due to the importance of Theorem 2, we would like to give here a preview of its proof.

Step 1. Extend the vector-valued function $v$ in (7), defined on the tile $G$, to a function $\tilde v$ on the whole $\mathbb{R}^s$, see (22). Lemma 4 yields, for $x,y\in G-j$ with some $j\in\mathbb{Z}^s$, an estimate of $\|\tilde v(y)-\tilde v(x)\|$ through the corresponding difference of values of $v$. The extension of $v$ is motivated by the fact that parts of the line segment $[x,y]$ can lie outside of $G$, due to its possible fractal structure.

Step 2. Lemma 3 shows that, for the tiling $\mathcal{G}$ of $\mathbb{R}^s$, the total number of the subsets of the tiling intersected by a line segment is proportional to the length of that segment.

Step 3. Due to Step 2, Lemma 2 and Proposition 3 imply that, for $k\in\mathbb{N}$, any line segment $[x,y]$ in $\mathbb{R}^s$ with $y-x\in J_i$ consists of several line segments such that 1) the endpoints of each of those line segments belong to one subset of the tiling $\mathcal{G}_k$; 2) the total number of those line segments is bounded by $C\,(r_i+\varepsilon)^k\,\|y-x\|$.

Step 4. The difference between the values of the function $v$ at the endpoints of each of those subsegments of $[x,y]$ is bounded from above by $C_1(\rho_i+\varepsilon)^k$ for some $\varepsilon>0$. Hence, by Step 1, the same is true for $\tilde v$. Therefore, by the triangle inequality, $\|\tilde v(y)-\tilde v(x)\|\le C\,\|y-x\|^{\alpha(\varepsilon)}$, where $\alpha(\varepsilon)$ approaches $\log_{1/r_i}\rho_i$ as $\varepsilon$ goes to 0.
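The evaluation step of Algorithm 2 is a one-liner in matrix form: $v(0.d_1\dots d_k)=T_{d_1}\cdots T_{d_k}v_0$. A minimal Python sketch (the data layout is an assumption):

```python
import numpy as np

def v_at_madic(T, digits, v0):
    """Algorithm 2 sketch: the value of the vector function v at the M-adic
    point 0.d1 d2 ... dk of the tile, via v(0.d1...dk) = T_{d1}...T_{dk} v0.

    T      -- dict mapping each digit d in D(M) to its transition matrix T_d,
    digits -- sequence (d1, ..., dk) of digits,
    v0     -- eigenvector of T_0 in V with eigenvalue 1, i.e. the value v(0).
    """
    v = np.asarray(v0, float)
    for d in reversed(digits):   # apply T_{dk} first, T_{d1} last
        v = T[d] @ v
    return v
```

Evaluating this on all digit strings of length $k$ produces the approximation $v_k$, whose error decays like $(\rho(\mathcal{A})+\varepsilon)^k$, as stated above.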
Auxiliary results for Theorems 1 and 2

The proofs of our main results, Theorems 1 and 2, are based on an important observation formulated in Proposition 3. We also make use of the following basic properties of the joint spectral radius and two auxiliary lemmas.

Theorem A1 [58]. For a family of operators $\mathcal{A}$ acting in $\mathbb{R}^n$ and for any $\varepsilon>0$, there exists a norm $\|\cdot\|_\varepsilon$ in $\mathbb{R}^n$ such that $\|A\|_\varepsilon<\rho(\mathcal{A})+\varepsilon$ for all $A\in\mathcal{A}$.

Theorem A2 [1]. For a family of operators $\mathcal{A}$ acting in $\mathbb{R}^n$ there exist $u\in\mathbb{R}^n$ and a constant $C(u)>0$ such that

$$\max_{d_1,\dots,d_k} \bigl\| A_{d_1}\cdots A_{d_k} u \bigr\| \;\ge\; C(u)\,\rho(\mathcal{A})^{k}, \qquad k\in\mathbb{N}.$$

Lemma 2 Assume that the segment $[0,1]$ is covered with $\ell$ distinct closed sets. Then there exist $\ell+1$ points $0=a_0\le\dots\le a_\ell=1$ such that for each $i=0,\dots,\ell-1$ the points $a_i,a_{i+1}$ belong to one of these sets.

Proof. Let the first set contain the point $a_0=0$. Choose $a_1$ to be the maximal (in the natural ordering of the real line) point of the first set. If $a_1\ne 1$, then $a_1$ must belong to another set of the covering. Choose $a_2$ to be the maximal point of this set. Repeat until $a_{\ell_0}=1$ for some $\ell_0\in\mathbb{N}$. We have $\ell_0\le\ell$, since the sets are distinct. If $\ell_0<\ell$, we extend the sequence $a_0\le\dots\le a_{\ell_0}$ by the points $a_{\ell_0+1}=\dots=a_\ell=1$. ✷

Next we show that a segment of a given length intersects only finitely many sets of the tiling $\mathcal{G}$.

Proof. It suffices to prove that the number of sets $G+k$, $k\in\mathbb{Z}^s$, intersected by a segment of length one is bounded above by some constant $C$. This implies that the number of sets $G+k$ intersected by any segment $[x,y]$ of length $\|y-x\|>1$ is bounded by $C\lceil\|y-x\|\rceil$, and the claim follows. Thus, let a segment $[x,y]$ be of length one. Since $G$ is compact, a set $G+k$ can intersect $[x,y]$ only if $k$ lies in the bounded set $[x,y]-G$, and the number of lattice points in a bounded set of diameter $\operatorname{diam}(G)+1$ is bounded by a constant depending only on $G$. ✷

In Lemma 4 and in Proposition 3, we compare the properties of $v$ and $\tilde v$.

Proof. Let $j\in\mathbb{Z}^s$. By (7) and due to the compact support of $\varphi$, each component of $\tilde v(y)-\tilde v(x)$, for $x,y\in G-j$, is a component of $v(y+j)-v(x+j)$; hence, in the Euclidean norm, $\|\tilde v(y)-\tilde v(x)\|\le C\,\|v(y+j)-v(x+j)\|$. ✷

For Proposition 3, applying Lemma 2 to the segment $[x,y]$ covered by the sets of the tiling $\mathcal{G}_k=M^{-k}\mathcal{G}$, we obtain points $x=a_0\le\dots\le a_\ell=y$ (convex combinations of $x$ and $y$ with coefficients whose sum is equal to one) such that each pair of successive points $a_i,a_{i+1}$ belongs to only one set $G_{d^{(i)}}$ of the tiling. First we give an estimate for $\ell$. Since $\ell$ elements of the tiling $\mathcal{G}_k=M^{-k}\mathcal{G}$ cover a segment of length $\|y-x\|$, the same number of elements of the tiling $\mathcal{G}$ cover a segment of length $\|M^k(y-x)\|$. Therefore, by Lemma 2 and the preceding lemma, $\ell\le C\lceil\|M^k(y-x)\|\rceil$. The endpoints $a_i,a_{i+1}$, mapped by $M^k$ and shifted by suitable integers, belong to $G$. Thus, by (11), the differences $v(a_{i+1})-v(a_i)$ are expressed through products of $k$ transition matrices. It follows that $\|v(a_{i+1})-v(a_i)\|\le C_1(\rho_i+\varepsilon)^k$.

Proofs of Theorems 1 and 2

In this subsection we prove Theorems 1 and 2. We start with Theorem 2, as its proof is a crucial part of the proof of Theorem 1. Note that for both Theorems 1 and 2 the assumption that $\varphi\in C(\mathbb{R}^s)$ implies, e.g. by [3], that $\rho(\mathcal{A})<1$. We will not reprove this result here.

Proof of Theorem 2. Let $\varepsilon\in(0,1-\rho_i)$ and $i\in\{1,\dots,q(M)\}$. We first show that $\alpha_{\varphi,J_i}\ge\log_{1/r_i}\rho_i$. For arbitrary points $x,y\in G$ such that $y-x\in J_i$ and $\|y-x\|<1$, define $k$ to be the smallest integer such that $\|M^k(y-x)\|\ge 1$, where the constant $C>0$ below depends only on $M$. By (22), Theorem A1 and by Proposition 3, for these $x,y$ and $k$, there exist a constant $C_1>0$ depending on $G$ and an integer $\ell\le C\,\|M^k(y-x)\|$ such that $\|v(y)-v(x)\|\le C_3\,\ell\,(\rho_i+\varepsilon)^k$ (note that $y-x\in J_i$ implies, by Proposition 3, that the intermediate points $x_j,y_j$ in (24) can be chosen with $y_j-x_j\in J_i$), with the constant $C_3$ independent of $k$. By the choice of $k$, we have $\|M^{k-1}(y-x)\|<1$ and, hence, combining the above estimate with (25) (i.e. $k\ge -\frac{\log\|y-x\|}{\log(r_i+\varepsilon)}+C_4$), we get, due to (26), $\|v(y)-v(x)\|\le C\,\|y-x\|^{\alpha(\varepsilon)}$ with $\alpha(\varepsilon)=\log_{1/(r_i+\varepsilon)}(\rho_i+\varepsilon)$ and with some constant $C$ depending on $\varepsilon$. Letting $\varepsilon\to 0$, we obtain the claim.

Next we establish the reverse inequality $\alpha_{\varphi,J_i}\le\log_{1/r_i}\rho_i$. Let $\varepsilon\in(0,r_i)$ and $d_1,\dots,d_k\in D(M)$, $k\in\mathbb{N}$.
By Theorem A2, there exist $u\in U_i$ and a constant $C(u)>0$ such that

$$\max_{d_1,\dots,d_k} \bigl\| A_{d_1}\cdots A_{d_k} u \bigr\| \;\ge\; C(u)\,\rho_i^{\,k}, \qquad k\in\mathbb{N}.$$

Moreover, the vector $u$ possesses a representation $u=\sum_{j=1}^{n_i}\gamma_j\bigl(v(y_j)-v(x_j)\bigr)$ with $x_j,y_j\in G$, $y_j-x_j\in J_i$, $\gamma_j\in\mathbb{R}$. Consequently, at least one of the $n_i$ numbers $\|A_{d_1}\cdots A_{d_k}\bigl(v(y_j)-v(x_j)\bigr)\|$, $j=1,\dots,n_i$, is larger than or equal to $C(u)\bigl(\sum_j|\gamma_j|\bigr)^{-1}\rho_i^{\,k}$. Combining this estimate with (27), we find a segment $[x_j,y_j]$ on which the variation of the function $v$ is at least a constant times the length of that segment to the power $\log_{1/(r_i-\varepsilon)}\rho_i$, where $\alpha(\varepsilon)=\log_{1/(r_i-\varepsilon)}\rho_i$ and the constant $C>0$ does not depend on $k$. Therefore, $\alpha_{\varphi,J_i}\le\log_{1/(r_i-\varepsilon)}\rho_i$. Since $\varepsilon$ is arbitrary, the claim follows. ✷

Proof of Theorem 1. We only show that the condition $\rho(\mathcal{A})<1$ is sufficient for continuity of $\varphi$. Let $\varepsilon\in(0,1-\rho(\mathcal{A}))$. Recall that $Q$ in (20) is dense in $G$. To determine the values of the vector-valued function $v=v_\varphi$ in (7) on $Q$, the set of rational M-adic points from $G$, use the algorithm from subsection 3.2. We first show that the vector-valued function $v$ is uniformly bounded on $Q$. Let $Q_k$ be as in (19). Then, for every $x=0.d_1\dots d_k\in Q_k$, the telescoping identity $v(x)-v_0=\sum_{j=1}^{k}T_{d_1}\cdots T_{d_{j-1}}\bigl(T_{d_j}v_0-v_0\bigr)$ together with Theorem A1 bounds $\|v(x)-v_0\|$ by a convergent geometric series with ratio $\rho(\mathcal{A})+\varepsilon<1$; hence $v$ is bounded on $Q$.

The values of $v$ on $Q$ define the function $\varphi$ on $\tilde Q$, where $\tilde Q=\bigcup_{k\in\mathbb{Z}^s}(k+Q)$ is the set of all rational M-adic points of $\mathbb{R}^s$. The so constructed $\varphi:\tilde Q\to\mathbb{R}$ is supported on $K\cap\tilde Q$. Using $\varphi$, define the extension $\tilde v:\tilde Q\to\mathbb{R}^N$ of $v$ in (7). We show next that $\tilde v$ is uniformly continuous on $\tilde Q$, which implies that its extension to $\mathbb{R}^s$ is continuous. Take arbitrary points $x,y\in\tilde Q$. By Proposition 3 and the same argument as in the first part of the proof of Theorem 2, we obtain

$$\|\tilde v(y)-\tilde v(x)\| \;\le\; C\,\|v\|_{C(Q)}\,\bigl(\rho(\mathcal{A})+\varepsilon\bigr)^{k},$$

where $k$ is the smallest number such that $\|M^k(y-x)\|\ge 1$. Note that the value $\|v\|_{C(Q)}$ is finite because $v$ is bounded on $Q$. Since $\|M^k(y-x)\|\ge 1$ forces $k$ to go to $\infty$ as $\|y-x\|$ goes to zero, $\tilde v$ is uniformly continuous on $\tilde Q$, which completes the proof of continuity. Thus, if $\rho(\mathcal{A})<1$, then $\varphi\in C(\mathbb{R}^s)$.

By Theorem 2, the Hölder exponent of $\varphi$ on shifts along the subspace $J_i$ is equal to $\alpha_i=\log_{1/r_i}\rho_i$. We pass to a basis in the space $\mathbb{R}^s$ in which all the subspaces $J_i$ are orthogonal to each other. Using the natural expansion $h=h_1+\dots+h_{q(M)}$, $h_i\in J_i$, we obtain for arbitrary $\varepsilon>0$ that $|\varphi(x+h)-\varphi(x)|\le\sum_i\bigl|\varphi(x+h_1+\dots+h_i)-\varphi(x+h_1+\dots+h_{i-1})\bigr|\le C\,\|h\|^{\alpha-\varepsilon}$ with $\alpha=\min_i\alpha_i$, which proves (15). ✷

Examples

We consider the dilation matrix

$$M \;=\; \begin{pmatrix} 2 & 1 \\ 1 & -1 \end{pmatrix}$$

and a refinement equation with five nonzero coefficients. The dilation matrix has the eigenvalues $\lambda_1=\frac{1-\sqrt{13}}{2}$ and $\lambda_2=\frac{1+\sqrt{13}}{2}$, so that $m=|\det M|=3$ and $D(M)=\{(0,0),(1,0),(2,0)\}$. The corresponding transition matrices $T_{(0,0)}$, $T_{(1,0)}$ and $T_{(2,0)}$ are of size $13\times 13$. The matrix $T_{(0,0)}$ has one eigenvalue 1 with the corresponding eigenvector $v_0$. Using the algorithm for the construction of $U$ from subsection 3.2, we obtain the space $U$ with $\dim(U)=12$. Due to $\dim(U)=12$, we have $W=U$ and, therefore, $\mathcal{A}=\mathcal{T}|_W$. We computed the joint spectral radius of the set $\mathcal{A}=\{A_0,A_1,A_2\}$ using the invariant polytope algorithm from [28] and obtained that the joint spectral radius is attained at the finite product $(A_0A_1)^2A_0^2A_2$ of length 7, i.e.

$$\rho(\mathcal{A}) \;=\; \Bigl[\rho\bigl((A_0A_1)^2A_0^2A_2\bigr)\Bigr]^{1/7}.$$

The algorithm constructs an invariant polytope of the operators $A_0,A_1,A_2$ in $\mathbb{R}^{12}$. That polytope has 434 vertices. Since $M$ has two eigenvalues of different moduli, $q(M)=2$, and there exist two corresponding non-zero subspaces $U_1,U_2\subset U$. On the other hand, we verified that the matrix family $\mathcal{A}$ is irreducible in this case. Hence, the only non-zero common invariant subspace of the matrices in $\mathcal{A}$ is $U$ itself. Thus, $U_1=U_2=U=W$. Therefore, $\rho_1=\rho_2=\rho(\mathcal{A})$ and Theorem 1 implies that $\alpha_\varphi=\log_{1/\rho(M)}\rho(\mathcal{A})$ with $\rho(M)=\frac{1+\sqrt{13}}{2}$.
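The spectral data of the example's dilation matrix are easy to verify numerically; the following snippet (illustrative only) confirms the eigenvalues $(1\pm\sqrt{13})/2$, the number of digits $m=3$, and $q(M)=2$:

```python
import numpy as np

# The anisotropic dilation matrix from the example above.
M = np.array([[2.0, 1.0],
              [1.0, -1.0]])

eigvals = np.linalg.eigvals(M)          # (1 +/- sqrt(13))/2
m = round(abs(np.linalg.det(M)))        # |det M| = 3 digits in D(M)
moduli = sorted(set(np.round(np.abs(eigvals), 12)))
print(eigvals, m, len(moduli))          # two distinct moduli -> q(M) = 2
```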
Higher order regularity

It is well known that in the univariate case, if the solution $\varphi$ of the refinement equation belongs to $C^1(\mathbb{R})$, then $\varphi$ is a convolution of a piecewise-constant function and of a continuous solution of a refinement equation of a smaller order [16,50,61]. This observation resolves the question of the differentiability of refinable functions and classifies all smooth refinable functions. In particular, every $C^\ell$-refinable function is a convolution of a refinable spline of order $\ell-1$ and of a continuous refinable function. This recursive decomposition technique, however, cannot be extended to the multivariate case, see e.g. [12,35,43]. In this section, we show that the derivatives of the multivariate refinable function $\varphi\in S'(\mathbb{R}^s)$ satisfy a system of nonhomogeneous refinement equations. The differentiability of $\varphi\in C(\mathbb{R}^s)$ is then equivalent to continuity of the solutions of all these equations, see Theorem 3. The main idea is that the directional derivatives of $\varphi$ along the eigenvectors of the dilation matrix $M$ satisfy certain refinement equations, and the directional derivatives along the generalized eigenvectors (of the Jordan basis) of $M$ satisfy nonhomogeneous refinement equations, see Proposition 4.

Definition 7 A multivariate nonhomogeneous refinement equation is a functional equation of the form

$$\psi \;=\; T\psi + g,$$

where $T$ is the transition operator in (5) and $g$ is a given compactly supported function or distribution. For more details on nonhomogeneous refinement equations see e.g. [19,37,60].

Let $\lambda$ be an eigenvalue of the dilation matrix $M$ and let $E_\lambda\subset E$ denote the part of the Jordan basis corresponding to this Jordan block. In the following we study the properties of the directional derivatives of the refinable function $\varphi\in S'(\mathbb{R}^s)$, which belong to the following subspaces of $S'(\mathbb{R}^s)$.

Definition 8 For a vector $a\in\mathbb{R}^s$, we denote by $S'_a(\mathbb{R}^s)$ the space of compactly supported distributions whose mean along every straight line $\{at+b : t\in\mathbb{R}\}$, $b\in\mathbb{R}^s$, parallel to $a$, is equal to zero.

By $\nabla\varphi=\bigl(\frac{\partial\varphi}{\partial x_1},\dots,\frac{\partial\varphi}{\partial x_s}\bigr)$ we denote the total derivative (gradient) of $\varphi$ and by $\frac{\partial\varphi}{\partial a}=\langle a,\nabla\varphi\rangle$ its directional derivative along a nonzero vector $a\in\mathbb{R}^s$. Due to the compact support of $\varphi\in S'(\mathbb{R}^s)$, its directional derivative $\frac{\partial\varphi}{\partial a}$ belongs to $S'_a(\mathbb{R}^s)$.

The next result shows that a directional derivative of a refinable function $\varphi$ along an eigenvector of the dilation matrix $M$ is also a refinable function and satisfies the refinement equation (28). A directional derivative of $\varphi$ along a generalized eigenvector of $M$ satisfies the nonhomogeneous refinement equation (29). Analogous considerations about $T\varphi-\varphi$ along $e_i\in E_\lambda$ imply the claim. ✷

Remark 5 If, for the eigenvalue $\lambda$ of the dilation matrix $M$, the set $E_\lambda$ does not contain any generalized eigenvectors, then the system (28)-(29) reduces to the homogeneous refinement equations (28).

The main result of this section, Theorem 3, states that $\varphi\in C^1(\mathbb{R}^s)$ if and only if the (nonhomogeneous) refinement equations in Proposition 4 corresponding to the Jordan basis $E\subset\mathbb{R}^s$ of the dilation matrix $M$ have continuous solutions $\varphi_i\in S'_{e_i}(\mathbb{R}^s)$, $e_i\in E$. The directional derivatives $\varphi_i$, $i=1,\dots,s$, determine the total derivative $\nabla\varphi$ of $\varphi$. Moreover, $\varphi_i$, $i=1,\dots,s$, can be constructed and their Hölder exponents can be computed as described in section 3 (see Remark 6). Thus, the higher regularity of any refinable function $\varphi$ can be analyzed by this recursive reduction to a set of continuous refinable functions.

Theorem 3 Let $\varphi\in S'(\mathbb{R}^s)$. There exist continuous solutions $\varphi_i\in S'_{e_i}(\mathbb{R}^s)$, $e_i\in E_\lambda$, of (28)-(29) for each eigenvalue $\lambda$ of the dilation matrix $M$ if and only if $\varphi\in C^1(\mathbb{R}^s)$ satisfies $\varphi=T\varphi$ and $\frac{\partial\varphi}{\partial e_i}=\varphi_i$, $e_i\in E$.

Corollary 5 Suppose that $E$ does not contain any generalized eigenvectors, i.e., the matrix $M$ has a basis of eigenvectors. If the solutions $\varphi_i\in S'_{e_i}(\mathbb{R}^s)$ of (28), $i=1,\dots,s$, are continuous, then the solution of $\varphi=T\varphi$ belongs to $C^1(\mathbb{R}^s)$.
Moreover, $\frac{\partial\varphi}{\partial e_i}=\varphi_i$, $i=1,\dots,s$.

Remark 6 The system of refinement equations (28)-(29) is solved and analysed in the same way as described in subsection 3.2 for the usual refinement equation (1). First we solve the equation $\varphi_1=\lambda T\varphi_1$. We find $v_{\varphi_1}(0)$ as an eigenvector of the matrix $T_0$ with the eigenvalue $1/\lambda$. If $T_0$ does not have this eigenvalue, then the equation $\varphi_1=\lambda T\varphi_1$ does not have a solution, and hence $\varphi\notin C^1(\mathbb{R}^s)$. Then we compute $\varphi_1(x)$ at M-adic points by the formula $v_{\varphi_1}(0.d_1\dots d_k)=\lambda^k\,T_{d_1}\cdots T_{d_k}\,v_{\varphi_1}(0)$, and extend it by continuity to the whole set $K$ (as in Algorithm 2, subsection 3.2). Then we define the space $U_{\lambda,1}$ as the minimal common invariant subspace of the transition matrices containing the vectors $\lambda T_d\,v_{\varphi_1}(0)-v_{\varphi_1}(0)$, $d\in D(M)$. This can be done by Algorithm 1 from subsection 3.2. Then $\varphi_1\in C(\mathbb{R}^s)$ if and only if the joint spectral radius of the matrices $\lambda T_d$, $d\in D(M)$, restricted to the subspace $U_{\lambda,1}$ is smaller than one. The Hölder regularity of $\varphi_1$ is computed by formula (15) for the matrices $A_d=\lambda T_d|_{U_{\lambda,1}}$. Similarly, we solve the other equations of the system (29) successively for $i=2,\dots,\ell$.
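The first step of Remark 6, checking whether $T_0$ has the eigenvalue $1/\lambda$ and extracting the seed vector $v_{\varphi_1}(0)$, can be sketched in a few lines of Python (function name and tolerance are assumptions):

```python
import numpy as np

def derivative_seed(T0, lam, tol=1e-9):
    """Remark 6 sketch: the equation phi_1 = lam * T * phi_1 has a candidate
    solution only if T0 has the eigenvalue 1/lam; the associated eigenvector
    is v_{phi_1}(0). Returns that eigenvector, or None (then phi is not C^1).
    """
    w, vecs = np.linalg.eig(T0)
    hits = np.where(np.abs(w - 1.0 / lam) < tol)[0]
    if hits.size == 0:
        return None
    return np.real_if_close(vecs[:, hits[0]])
```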
Modulus of continuity and Lipschitz continuity

Apart from computing the exact Hölder exponent of a refinable function $\varphi\in C(\mathbb{R}^s)$, the matrix approach allows for a refined analysis of its modulus of continuity, also in the case of a general dilation matrix $M$. In Theorem 4, we show how the asymptotic behavior of $\omega(\varphi,t)$ as $t\to 0$ depends on the spectral properties of $M$. Corollary 8 states under which conditions on $M$ the Hölder exponent $\alpha_\varphi=1$ of $\varphi$ guarantees its Lipschitz continuity. Indeed, the condition $\alpha_\varphi=1$ on the Hölder exponent is not sufficient to guarantee the Lipschitz continuity of $\varphi$. The Lipschitz continuity takes place if and only if the exponent $\alpha_\varphi=1$ is sharp.

Remark 7 Even in the univariate case the Hölder exponent of a refinable function may not be sharp. For example, the derivative of the refinable function generated by the four-point interpolatory subdivision scheme with the parameter $w=\frac{1}{16}$ is "almost Lipschitz" with factor 1, i.e., $\omega(\varphi',t)\asymp t\,|\log t|$ as $t\to 0$, see [20]. It has been shown recently [29] that in the bivariate case with the dilation matrix $M=2I$, the derivatives of the refinable function generated by the butterfly subdivision scheme with the parameter $w=\frac{1}{16}$ are "almost Lipschitz" with factor 2, i.e., $\omega(\varphi',t)\asymp t\,|\log t|^2$ as $t\to 0$.

To formulate the main result of this section we need to introduce some further notation.

Definition 10 The resonance degree $\nu(\mathcal{A})$ of a compact set $\mathcal{A}$ of $n\times n$ matrices is defined as the minimal integer $\nu\ge 0$ such that

$$\max_{d_1,\dots,d_k} \bigl\| A_{d_1}\cdots A_{d_k} \bigr\| \;\le\; C\,k^{\nu}\,\rho(\mathcal{A})^{k}, \qquad k\in\mathbb{N}.$$

Remark 8 (i) Note that the resonance degree $\nu(A)$ of one square matrix $A$ is less by one than the size of the largest Jordan block of $A$ corresponding to one of the largest eigenvalues in absolute value. Thus, the resonance degree of one matrix can be computed efficiently. (ii) In general, $\nu(\mathcal{A})\le n-1$. Indeed, by [49], the resonance degree does not exceed the valency of $\mathcal{A}$ minus one, determined from the lower triangular Frobenius factorization of the family $\mathcal{A}$, i.e. there exists an invertible $B\in\mathbb{C}^{n\times n}$ bringing all matrices of $\mathcal{A}$ simultaneously to a block lower triangular form with diagonal blocks $A^{(j)}$, where each family $\mathcal{A}^{(j)}=\{A^{(j)} : A\in\mathcal{A}\}$, $j=1,\dots,r\le n$, is irreducible. The valency of $\mathcal{A}$ is defined as the number of the families $\mathcal{A}^{(j)}$ such that $\rho(\mathcal{A}^{(j)})=\rho(\mathcal{A})$. In particular, if the family $\mathcal{A}$ is irreducible, then the Frobenius factorization is trivial with $r=1$. In this case, the valency of $\mathcal{A}$ is equal to one, which is stated in Theorem A2. Thus, in this case, $\nu(\mathcal{A})=0$. (iii) For a sharper estimate on the valency of $\mathcal{A}$ see [7]. (iv) The resonance degree is an integer, by definition. By [54], there exist finite matrix families (even pairs of matrices) for which the actual growth of the norms of products is of non-integer polynomial order.

Now we are ready to formulate the main result of this section.

Theorem 4 Let $\varphi\in C(\mathbb{R}^s)$ with the Hölder exponent $\alpha=\alpha_\varphi$. Then the modulus of continuity of $\varphi$ satisfies

$$\omega(\varphi,t) \;\le\; C\,t^{\alpha}\,|\log t|^{\gamma}, \qquad \gamma \;=\; \max_{j:\ \log_{1/r_j}\rho_j=\alpha}\bigl( \alpha\,\nu(M|_{J_j}) + \nu(\mathcal{A}|_{U_j}) \bigr). \qquad (32)$$

Proof. Let $j\in\{1,\dots,q(M)\}$ and $y-x\in J_j$, $\|y-x\|<1$. The proof is similar to the first part of the proof of Theorem 2, where, using Definition 10, we replace the estimate (25) by its analogue (33), which takes into account the polynomial factor $k^{\nu(M|_{J_j})}$ in the norms of the powers of $M|_{J_j}$. Then, in the estimate (26), by Definition 10, we gain the additional factor $k^{\nu(\mathcal{A}|_{U_j})}$, which gives (34). Let $v=v_\varphi$ be defined in (7). Combining (33) and (34), we obtain the bound (32) along $J_j$. Since this estimate holds for each $j\in\{1,\dots,q(M)\}$, the claim follows. ✷

The first corollary of Theorem 4 lists the conditions sufficient for the sharpness of the Hölder exponent of $\varphi$.

Corollary 6 Let $\varphi\in C(\mathbb{R}^s)$ with the Hölder exponent $\alpha=\alpha_\varphi$. If for each $i\in\{1,\dots,q(M)\}$ such that $\log_{1/r_i}\rho_i=\alpha$, the matrix $M|_{J_i}$ has only trivial Jordan blocks and $\nu(\mathcal{A}|_{U_i})=0$, then the Hölder exponent $\alpha$ is sharp.

The proof of Corollary 6 follows by Remark 8 (i), which implies that $\nu(M|_{J_i})=0$ in Theorem 4. An important particular case of Corollary 6 is stated in the following result.

Corollary 7 Let $\varphi\in C(\mathbb{R}^s)$. If the dilation matrix $M$ has a complete system of eigenvectors and the family $\mathcal{A}$ is irreducible, then the Hölder exponent $\alpha_\varphi$ of $\varphi$ is sharp.

Remark 10 A careful analysis of the proof of Theorem 4 makes it possible to construct examples for which the upper bound (32) is attained. Thus, inequality (32) cannot be improved in these terms. In particular, it is shown easily that if $M$ has a largest Jordan block of a given size $\nu+1\ge 2$ corresponding to the largest (in modulus) eigenvalue and the family $\mathcal{A}$ is irreducible, then $\omega(\varphi,t)\ge C\,t^{\alpha}|\log t|^{\alpha\nu}$ for a sequence of numbers $t$ that tends to zero.

Remark 11 It is known that in the univariate case, the Hölder exponent of a refinable function is always sharp whenever it is not an integer [50]. Remark 10 shows that in the multivariate case this does not hold. If $M$ has a largest Jordan block of size $\nu+1\ge 2$ corresponding to the largest (in modulus) eigenvalue, and $\mathcal{A}$ is irreducible, then $\omega(\varphi,t)\ge C\,t^{\alpha}|\log t|^{\alpha\nu}$ for $t$ tending to zero.

Local regularity of continuous solutions

The matrix approach (for the matrices $A_d$, $d\in D(M)$, in (13)) makes it possible to compute the local regularity of continuous refinable functions at concrete points and to study sets of points with a given local regularity. In the univariate case, the analysis of the local regularity of the refinable function generating the Daubechies wavelet D2 was done in [16]. For the general theory of local regularity of univariate refinable functions see [49]. In particular, in [49], the authors show that all univariate refinable functions, except for refinable splines, have a varying local regularity, which explains their fractal properties. The matrix approach, see Theorem 5, extends the above mentioned local regularity analysis to the case of multivariate refinable functions with an arbitrary dilation matrix $M$.

Definition 11 The local Hölder exponent of $f\in C(\mathbb{R}^s)$ at a point $x\in\mathbb{R}^s$ along a subspace $J\subset\mathbb{R}^s$ is defined by

$$\alpha_{f,J}(x) \;=\; \liminf_{h\to 0,\ h\in J}\ \frac{\log|f(x+h)-f(x)|}{\log\|h\|}.$$

If $J=\mathbb{R}^s$, then we omit the index $J$ and denote the local Hölder exponent by $\alpha_f(x)$.
Remark 12 Similarly to Definition 11, we define the local Hölder exponent of the vector-valued function $v$ in (7) for $x\in G$. Note that in Theorem 5 we determine the local Hölder exponent of $v_\varphi$, which satisfies $\alpha_{v_\varphi,J}(x)=\min\{\alpha_{\varphi,J}(x+k) : k\in\Omega\}$, $x\in G$.

For the sake of simplicity, we formulate Theorem 5 for rational M-adic points $x$. The following definition allows us to avoid the possible non-uniqueness of such representations.

Definition 12 Let $\ell,L\in\mathbb{N}$. A point $x\in\mathbb{R}^s$ is called rational M-adic with period $(\ell,L)$ if its M-adic expansion $x=0.d_1d_2\dots$ is eventually periodic with $d_{j+L}=d_j$ for all $j\ge\ell$. For purely periodic $x\in\mathbb{R}^s$ we write $x=x_{(\ell,L)}$.

Remark 13 The point $x_{(\ell,L)}$ is the unique fixed point of the contraction $M_{d_\ell}^{-1}\cdots M_{d_{\ell+L-1}}^{-1}$. Thus, $x_{(\ell,L)}\in G^{(\ell,L)}_j$ for all $j\ge 0$. If $x_{(\ell,L)}\in\operatorname{int}(G)$, then $x_{(\ell,L)}\in\operatorname{int}(G^{(\ell,L)}_j)$ for all $j\ge 0$, and this point has a unique M-adic expansion; since $M$ is nonsingular, these contractions are well defined.

We are now ready to formulate the main result of this section. Recall that the subspaces $J_i$, $i=1,\dots,q(M)$, of $\mathbb{R}^s$ determined by the dilation matrix $M$ are defined in subsection 2.1.

Theorem 5 Let $\varphi\in C(\mathbb{R}^s)$ be refinable with the dilation matrix $M$ and $x_{(\ell,L)}\in\operatorname{int}(G)$, $\ell,L\in\mathbb{N}$. Then for every rational M-adic point $x=x_{d_1,\dots,d_{\ell-1},(\ell,L)}$ with the period $(\ell,L)$, we have, with $\rho_{\ell,L}=\rho\bigl(A_{d_\ell}\cdots A_{d_{\ell+L-1}}\big|_{U_i}\bigr)$,

$$\alpha_{v,J_i}(x) \;\ge\; \frac{1}{L}\,\log_{1/r_i}\rho_{\ell,L}, \qquad i=1,\dots,q(M), \qquad (35)$$

and

$$\alpha_{v}(x) \;\ge\; \min_{i=1,\dots,q(M)}\ \frac{1}{L}\,\log_{1/r_i}\rho_{\ell,L}. \qquad (36)$$

If the operators $A_d|_{U_i}$ are nonsingular (in particular, if all $A_d$, $d\in D(M)$, are nonsingular), then the inequalities in (35) and in (36) become equalities.

Thus, to compute the local regularity at a given M-rational point $x$, one does not need the joint spectral radius. The local regularity is determined by the usual spectral radius of the matrix product corresponding to the period of the M-adic expansion of $x$.

Remark 14 The assumption that the operators $A_d|_{U_i}$, $d\in D(M)$, are nonsingular cannot be avoided even in the univariate case, see [49, Example 7].

The proof of the reverse inequality in (35) under the assumption that the operators $A_d|_{U_i}$, $d\in D(M)$, are nonsingular is done similarly to the second part of the proof of Theorem 2. Let $\varepsilon\in(0,r_i)$. By Theorem A2, there exist $u\in U_i$ and $C(u)>0$ such that $u$ possesses a representation through the differences $v(y_k)-v(x)$, with coefficients $\gamma_k\in\mathbb{R}$. Therefore, for the points $y^{(j)}_k$ obtained by following the period $j$ times, there exists $k\in\{1,\dots,n_i\}$ such that $\|v(y^{(j)}_k)-v(x)\|\ge C_3\,\rho_{\ell,L}^{\,j}$. Consequently, $\alpha_{v,J_i}(x)\le\alpha(\varepsilon)$ with $\alpha(\varepsilon)=\frac{1}{L}\log_{1/(r_i-\varepsilon)}\rho_{\ell,L}$ and with some constant $C_3>0$ depending on $\varepsilon$. Note that $\|y^{(j)}_k-x\|$ goes to zero as $j$ goes to infinity, due to $x$ and $y^{(j)}_k$ lying in the same set of the corresponding tiling.

Existence and smoothness in $L_p(\mathbb{R}^s)$, $1\le p<\infty$

In this section we prove Theorem 6, which characterizes the existence of refinable functions $\varphi\in L_p(\mathbb{R}^s)$, $1\le p<\infty$, and Theorem 7, which provides a formula for the Hölder exponent of such $\varphi$ in terms of the p-radius (p-norm joint spectral radius [34,47]) of a set of transition matrices.

Definition 13 For $1\le p<\infty$, the p-radius (p-norm joint spectral radius) of a finite family of linear operators $\mathcal{A}=\{A_0,\dots,A_{m-1}\}$ is defined by

$$\rho_p(\mathcal{A}) \;=\; \lim_{k\to\infty}\ \Bigl(\, m^{-k}\sum_{d_1,\dots,d_k} \bigl\| A_{d_1}\cdots A_{d_k} \bigr\|^{p} \,\Bigr)^{1/(pk)}.$$

Note that, for $\varphi\in L_p(\mathbb{R}^s)$, the difference space $U$ is defined similarly to (12), with the values $v(x)$ understood in the almost-everywhere sense.
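Definition 13 also admits a direct, if exponentially expensive, numerical approximation: truncate the limit at a fixed product length $k$. A minimal Python sketch (illustrative; the averaging over all $m^k$ products follows the definition):

```python
import itertools
import numpy as np

def p_radius_estimate(matrices, p=2, k=6):
    """Crude estimate of the p-radius (Definition 13) of a finite family:
    rho_p ~ ( m**(-k) * sum over all length-k products of ||P||**p )**(1/(p*k)).
    """
    m = len(matrices)
    total = 0.0
    for combo in itertools.product(matrices, repeat=k):
        P = combo[0] if k == 1 else np.linalg.multi_dot(list(combo))
        total += np.linalg.norm(P, 2) ** p
    return (total / m**k) ** (1.0 / (p * k))
```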
In this section, we also determine the exact Hölder regularity of refinable functions in the spaces $L_p(\mathbb{R}^s)$, $1\le p<\infty$, see Theorem 7. Although these estimates look familiar from section 3, the corresponding proofs require totally different techniques. The Hölder exponent of a function $\varphi\in L_p(\mathbb{R}^s)$ is defined by

$$\alpha_{\varphi,p} \;=\; \sup\bigl\{\, \alpha\ge 0 \;:\; \|\varphi(\cdot+h)-\varphi\|_p \le C\,\|h\|^{\alpha},\ h\in\mathbb{R}^s \,\bigr\}.$$

Here and below we use the short notation $\|\cdot\|_{L_p(\mathbb{R}^s)}=\|\cdot\|_p$. To determine the influence of the dilation matrix $M$ on the Hölder exponent of $\varphi\in L_p(\mathbb{R}^s)$, in Theorem 7 we consider the Hölder exponents of $\varphi$ along the subspaces determined by the Jordan basis of $M$. The Hölder exponent of $\varphi$ along a subspace $J\subset\mathbb{R}^s$ is defined by

$$\alpha_{\varphi,J,p} \;=\; \sup\bigl\{\, \alpha\ge 0 \;:\; \|\varphi(\cdot+h)-\varphi\|_p \le C\,\|h\|^{\alpha},\ h\in J \,\bigr\}.$$

In the proofs of Theorems 6 and 7 we use the following auxiliary results.

Auxiliary results

The following analogues of Theorems A1 and A2 from section 3 were proved in [51].

Theorem A3. Let $1\le p<\infty$. For a finite family $\mathcal{A}$ of $m$ operators acting in $\mathbb{R}^n$ and for any $\varepsilon>0$, there exists a norm $\|\cdot\|_\varepsilon$ in $\mathbb{R}^n$ such that

$$\Bigl(\, m^{-1}\sum_{d\in D(M)} \|A_d u\|_\varepsilon^{p} \,\Bigr)^{1/p} \;\le\; \bigl(\rho_p(\mathcal{A})+\varepsilon\bigr)\,\|u\|_\varepsilon, \qquad u\in\mathbb{R}^n.$$

For $1\le p<\infty$ and for a finite family $\mathcal{A}$ of $m$ operators acting in $\mathbb{R}^n$, we denote

$$F_p(k,u) \;=\; \Bigl(\, m^{-k}\sum_{d_1,\dots,d_k} \bigl\| A_{d_1}\cdots A_{d_k} u \bigr\|^{p} \,\Bigr)^{1/p}, \qquad k\in\mathbb{N},\ u\in\mathbb{R}^n.$$

Since each norm $\|\cdot\|$ in $\mathbb{R}^n$ is equivalent to the norm $\|\cdot\|_\varepsilon$, Theorem A3 yields the following result.

Theorem A4. Let $1\le p<\infty$ and let $\mathcal{A}$ be a finite family of $m$ linear operators in $\mathbb{R}^n$. Then for every $u\in\mathbb{R}^n$ that does not belong to any common invariant linear subspace of $\mathcal{A}$, there exists a constant $C(u)>0$ such that

$$F_p(k,u) \;\ge\; C(u)\,\rho_p(\mathcal{A})^{k}, \qquad k\in\mathbb{N}. \qquad (40)$$

In the next result we relax the assumptions of Theorem A4. We show that (40) holds for all points $u$ apart from the ones in a proper linear subspace of $\mathbb{R}^n$.

Proof. Without loss of generality, after a suitable normalization, it can be assumed that $\rho_p=1$. Let $L$ be the biggest by inclusion common invariant subspace of $\mathcal{A}$ such that $\rho_p(\mathcal{A}|_L)<1$. Note that $L$ is a proper subspace of $\mathbb{R}^n$, otherwise we get a contradiction to $\rho_p=1$. Hence, $\dim L\le n-1$. Take an arbitrary $u\notin L$ and denote by $L_u$ the minimal common invariant subspace of $\mathcal{A}$ that contains $u$. If $\rho_p(\mathcal{A}|_{L_u})<1$, then the p-joint spectral radius of $\mathcal{A}$ on the linear span of $L$ and $L_u$ is equal to $\max\{\rho_p(\mathcal{A}|_L),\rho_p(\mathcal{A}|_{L_u})\}<1$, which contradicts the maximality of $L$. Hence, $\rho_p(\mathcal{A}|_{L_u})=1$. Since $u$ does not belong to any common invariant subspace of the finite family $\mathcal{A}|_{L_u}$, by Theorem A4, there exists a constant $C(u)>0$ such that $F_p(k,u)\ge C(u)\,\rho(\mathcal{A}|_{L_u})^{k}$, $k\in\mathbb{N}$. On the other hand, $\rho_p=\rho(\mathcal{A}|_{L_u})=1$, and, hence, the claim follows. ✷

For $1\le p<\infty$, in the rest of this section, we denote by $C(u)>0$ the largest possible constant in inequality (40), i.e.,

$$C(u) \;=\; \inf_{k\in\mathbb{N}}\ \frac{F_p(k,u)}{\rho_p^{\,k}}. \qquad (41)$$

This function is upper semi-continuous and, therefore, is measurable.

$L_p$-solutions of refinement equations

We are now ready to formulate the first result of this section.

Theorem 6 The refinement equation (1) possesses a solution $\varphi\in L_p(\mathbb{R}^s)$, $1\le p<\infty$, if and only if $\rho_p=\rho_p(\mathcal{A})<1$.

Proof. Assume first that $\rho_p=\rho_p(\mathcal{A})<1$. Choose $\varepsilon\in(0,1-\rho_p)$ and consider the norm $\|\cdot\|_\varepsilon$ in $U$ as in Theorem A3. Define the function space $V_{U,p}$ of functions $f\in L_p(\mathbb{R}^s)$ whose vector functions $v_f$ take values, almost everywhere on $G$, in the affine subspace $z+U$, where $z\in V$ is an eigenvector of the matrix $\frac{1}{m}\sum_{d\in D(M)}T_d$ associated with the eigenvalue one. Note that, for $f_1,f_2\in V_{U,p}$, we have

$$\|Tf_1-Tf_2\|_p \;\le\; (\rho_p+\varepsilon)\,\|f_1-f_2\|_p.$$

Therefore, due to $\rho_p+\varepsilon<1$, $T$ is a contraction on $V_{U,p}$ and, hence, it has a unique fixed point $\varphi$, which is the solution of the refinement equation $\varphi=T\varphi$.

Assume next that $\varphi\in L_p(\mathbb{R}^s)$. By Lemma 5, there exists a proper subspace $L\subset U$ (due to $\dim U=n$, we associate $\mathbb{R}^n$ with $U$) invariant under $\mathcal{A}$ such that $F_p(k,u)\ge C(u)\,\rho_p^{\,k}$, $k\in\mathbb{N}$, whenever $u\notin L$. Since $U$ is the smallest by inclusion subspace of $\mathbb{R}^N$ invariant under $\mathcal{A}$ and containing the differences $v(y)-v(x)$, the set $\{(x,y)\in G\times G : v(y)-v(x)\notin L\}$ has a positive Lebesgue measure $\mu$ in $\mathbb{R}^s\times\mathbb{R}^s$. Hence, the set in (42) has a positive Lebesgue measure. By the Fubini theorem, the set in (42) has sections of positive Lebesgue measure. Thus, there exists $h\in\mathbb{R}^s$ such that $v(x+h)-v(x)\notin L$ for all $x$ from a set of positive measure. Therefore, there exist $\delta>0$ and a set $H\subset G$ of positive Lebesgue measure such that the function in (41) satisfies $C\bigl(v(\cdot+h)-v(\cdot)\bigr)\ge\delta$ on $H$. Denote $h_k=M^{-k}h$, $k\in\mathbb{N}$; then, by (11), the differences $v(\cdot+h_k)-v(\cdot)$ are expressed through products of $k$ transition matrices applied to $v(\cdot+h)-v(\cdot)$. Since $y\in H$, by (43), we obtain $\|v(\cdot+h_k)-v(\cdot)\|_{L_p}\ge C\,\delta\,\rho_p^{\,k}$. Since $v\in L_p(\mathbb{R}^s,\mathbb{R}^N)$ and $h_k$ goes to 0 as $k\to\infty$, the left-hand side tends to zero, and we get $\rho_p<1$. ✷

Remark 15 The proof of Theorem 6 is much simpler than that of Theorem 1 in $C(\mathbb{R}^s)$.
Indeed, an elegant argument with a contraction operator $T$ on the affine subspace $V_{U,p}$ cannot be directly extended to prove the continuity of $\varphi$ for the following reason: the piecewise constant function $f$ for which $v_f\equiv z$ is not continuous. Thus, it is not clear how to show that the corresponding affine space of continuous functions is nonempty. We are not aware of any simple proof of this fact in the multivariate case. We, however, apply this argument once again in estimating the rate of convergence of the subdivision schemes in section 8.

Hölder regularity in $L_p(\mathbb{R}^s)$

To be able to determine the exact Hölder regularity of a refinable function $\varphi\in L_p(\mathbb{R}^s)$, $1\le p<\infty$, we need to adjust the definitions of the transition matrices $T_d$, $d\in D(M)$. To do so we replace the set $\Omega$ in Definition 2 by the set $\tilde\Omega$ in (46); the latter contains $\Omega$ and is determined by a certain admissible absorbing set $\Delta$.

Remark 16 An arbitrary set that contains some neighborhood of the origin is absorbing. It is also easy to show that any convex body (a convex set with a nonempty interior) that contains the origin is absorbing. For the sake of simplicity, we choose $\Delta$ to be an arbitrary simplex with one of its vertices at the origin and such that its interior intersects all the spaces $J_i$, $i=1,\dots,q(M)$. In this case $\Delta$ is absorbing, and the sets $\Delta\cap J_i$, $i=1,\dots,q(M)$, are absorbing in the corresponding subspaces $J_i$. We call such a simplex $\Delta$ admissible. Note that for each $t>0$, the set $t\Delta$ is also an admissible simplex.

Define $\tilde\Omega\subset\mathbb{Z}^s$ to be the minimal set such that

$$K+\Delta \;\subset\; \tilde\Omega+G. \qquad (46)$$

Such a set $\tilde\Omega$ always exists, due to $\bigcup_{k\in\mathbb{Z}^s}(k+G)=\mathbb{R}^s$. Note that $\Omega\subset\tilde\Omega$. In many cases $\tilde\Omega=\Omega$, but not always, see Examples 3 and 4. Thus, $\operatorname{supp}\varphi+\Delta\subset\tilde\Omega+G$. Using $\tilde\Omega$ we redefine the vector function $\tilde v$ and the transition matrices $\tilde T_d$, which are now of size $\tilde N=|\tilde\Omega|$. This leads to the appropriate modification $\tilde{\mathcal{A}}$ of the finite set $\mathcal{A}$ in (13). The modified subspaces $\tilde V$, $\tilde U$, $\tilde U_i$, $i=1,\dots,q(M)$, and $\tilde W$ differ from the subspaces $V$, $U$, $U_i$ and $W$, respectively, only by the lengths of their corresponding elements.

We are now ready to formulate the main result of this subsection.

Theorem 7 Let $\varphi\in L_p(\mathbb{R}^s)$, $1\le p<\infty$. Then, with $\rho_{i,p}=\rho_p(\tilde{\mathcal{A}}|_{\tilde U_i})$,

$$\alpha_{\varphi,J_i,p} \;=\; \log_{1/r_i}\rho_{i,p}, \quad i=1,\dots,q(M), \qquad \text{and} \qquad \alpha_{\varphi,p} \;=\; \min_{i=1,\dots,q(M)}\log_{1/r_i}\rho_{i,p}.$$

The lower estimate $\alpha_{\varphi,J_i,p}\ge\log_{1/r_i}\rho_{i,p}$ is obtained as in the first part of the proof of Theorem 2, with Theorem A3 in place of Theorem A1. Taking the limit for $\varepsilon\to 0$, we obtain the claim. To establish the reverse inequality $\alpha_{\varphi,J_i,p}\le\log_{1/r_i}\rho_{i,p}$, we argue as in the second part of the proof of Theorem 6. We show the existence of a vector $h\in J_i\cap\Delta$ and of a subset $H\subset J_i$ of positive Lebesgue measure (in the space $J_i$) for which inequality (45) holds (with $\rho_p$ replaced by $\rho_{i,p}$). Taking a limit in that inequality as $k\to\infty$ and using the fact that $k\le C_2+\log_{r_i}\bigl(\|h\|/\|h_k\|\bigr)$, where $C_2>0$ is independent of $k$, we complete the proof. ✷

Examples

The following examples illustrate the need for the modifications of the set $\Omega$ in subsection 7.3. The next example shows that in some cases $\tilde\Omega\ne\Omega$.

Example 4 The solution of the univariate refinement equation considered here satisfies $\varphi\equiv 1$ on its support. Its transition matrices have the common invariant subspace $U=W=\{u\in\mathbb{R}^3 : u_1+u_2+u_3=0\}$, which is two-dimensional. In the basis $e_1=(1,-1,0)^T$, $e_2=(0,1,-1)^T$ of $U$, the matrices $A_d=T_d|_U$, $d\in D(M)$, are nonnegative. Thus, to apply Theorem 7, we need to compute $\rho_p$ for a family of nonnegative matrices. For such families, $\rho_1$ is equal to the Perron eigenvalue of the arithmetic mean of the matrices [52]. Since $\varphi\equiv 1$ on its support, the modified family $\tilde{\mathcal{A}}$ from subsection 7.3 is required, and we have $\alpha_{\varphi,1}=-\log_2\rho_1(\tilde{\mathcal{A}})$.
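The fact from [52] used in Example 4, that for a family of nonnegative matrices $\rho_1$ equals the Perron eigenvalue of the arithmetic mean of the family, is one line of Python (a sketch assuming the family is given as NumPy arrays):

```python
import numpy as np

def rho_1_nonnegative(matrices):
    """For a family of nonnegative matrices, rho_1 equals the Perron
    eigenvalue (spectral radius) of the arithmetic mean of the family [52].
    """
    mean = sum(matrices) / len(matrices)
    return max(abs(np.linalg.eigvals(mean)))
```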
Remark 17 It is well known that, if $p$ is an even integer, then $\rho_p(\mathcal{A})$ can be efficiently computed as an eigenvalue of a certain matrix derived from the matrices in $\mathcal{A}$ [38,47]. Hence, Theorem 7 allows us to find the Hölder $L_p$-regularity at least for even integers $p$, in particular, for $p=2$, see Example 5.

The construction of a continuous refinable function described in section 3 is realized pointwise and, hence, is not applicable in $L_p(\mathbb{R}^s)$. Moreover, the pointwise values $v(x)$ are not well defined if $v\in L_p(\mathbb{R}^s,\mathbb{R}^N)$; thus, the constructions of $U$ and of the function $\varphi$ are modified in the following way, using Proposition 5. First, we find the eigenvector $z\in V$ of the operator $\bar T=\frac{1}{m}\sum_{d\in D(M)}T_d$ associated with the eigenvalue 1. If such a vector does not exist, then $\varphi\notin L_p(\mathbb{R}^s)$. If such $z\in V$ exists, then the subspace $U$ is the minimal common invariant subspace of the matrices $T_d$, $d\in D(M)$, that contains the $m$ vectors $T_dz-z$, $d\in D(M)$. If $\rho_p=\rho_p(\mathcal{A})<1$, then the solution $\varphi\in L_p(\mathbb{R}^s)$. Numerically, $\varphi$ can be computed as follows. For every $d_1,\dots,d_k\in D(M)$, $k\in\mathbb{N}$, the value

$$\frac{1}{\operatorname{Vol}(G_{d_1\dots d_k})}\int_{G_{d_1\dots d_k}} v(x)\,dx,$$

which is simply the mean of the function $v=v_\varphi$ on the set $G_{d_1\dots d_k}$ of the tiling (let us recall that $\operatorname{Vol}(G_{d_1\dots d_k})=m^{-k}$), is equal to $A_{d_1}\cdots A_{d_k}z$. So, we can compute the mean values of the solution $\varphi$ on all sets of the tiling $\mathcal{G}_k$. For instance, if $\chi=\chi_G$ is the characteristic function of the tile $G$, then the function $\varphi_k=T^k\chi$ is a piecewise-constant approximation of the solution $\varphi$. On each set $G_{d_1\dots d_k}$, $\varphi_k(x)$ is determined by $A_{d_1}\cdots A_{d_k}z$. The function $\varphi_k$ converges to $\varphi$ at the linear rate $\|\varphi_k-\varphi\|_{L_p(\mathbb{R}^s)}\le C(\rho_p+\varepsilon)^k$ as $k$ goes to infinity. In rare cases, when the eigenvalue 1 of $\bar T$ is not simple, by Proposition 6, there exists at most one vector $z\in V$ for which the corresponding subspace $U$ yields $\rho_p=\rho_p(\mathcal{T}|_U)<1$.

The rate of convergence of subdivision schemes

In this section, we use the matrix approach to compute the rate of convergence in $C(\mathbb{R}^s)$ of subdivision schemes with anisotropic dilations, see Theorem 8. Example 6 illustrates one more difference between the isotropic and anisotropic cases. A similar analysis can be done in the case of $L_p$-convergence using the results of section 7, see Remark 19.

Subdivision schemes are recursive algorithms for the linear approximation of functions by their values on a mesh in $\mathbb{R}^s$ [4,21]. Refinable functions appear naturally in the context of subdivision and of the corresponding wavelet frames. Let $\ell(\mathbb{Z}^s)$ be the space of all sequences over $\mathbb{Z}^s$ and $\ell_\infty(\mathbb{Z}^s)$ the space of all bounded sequences. The subdivision operator $S$ on $\ell(\mathbb{Z}^s)$ is defined by

$$(Sa)_\alpha \;=\; \sum_{\beta\in\mathbb{Z}^s} c_{\alpha-M\beta}\,a_\beta, \qquad \alpha\in\mathbb{Z}^s.$$

The subdivision scheme (repeated application of $S$) converges in $C(\mathbb{R}^s)$ if for every $a\in\ell_\infty(\mathbb{Z}^s)$ there exists a function $f_a\in C(\mathbb{R}^s)$ such that

$$\lim_{k\to\infty}\ \bigl\| f_a(M^{-k}\cdot)-S^k a \bigr\|_{\ell_\infty} \;=\; 0.$$

The map $a\mapsto f_a$ is a linear shift-invariant operator from $\ell_\infty(\mathbb{Z}^s)$ to $C(\mathbb{R}^s)$. The limit function $f_\delta$ for $a=\delta$ (with $\delta_0=1$ and zero otherwise) is the solution $\varphi$ of the refinement equation (1) normalized by $\int_{\mathbb{R}^s}\varphi(x)\,dx=1$. By e.g. [3,4], the subdivision scheme converges if and only if its rate of convergence $\tau$ satisfies $\tau<1$. Thus, the convergence is always linear, whenever it takes place. Necessary (but not sufficient) conditions for the convergence are that the refinable function is continuous and the sum rules (4) are satisfied.

Remark 18 It is well known that the rate of convergence of a subdivision scheme is equal to that of the cascade algorithm (repeated application of the transition operator $T$ to some initial function $f_0$).
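For concreteness, one subdivision step $(Sa)_\alpha=\sum_\beta c_{\alpha-M\beta}a_\beta$ in the simplest univariate setting $s=1$, $M=2$ can be sketched as follows. The dictionary data layout is an assumption, and the mask in the demo is Chaikin's corner-cutting mask, not one of the paper's examples:

```python
def subdivision_step(a, c):
    """One univariate subdivision step (Sa)_alpha = sum_beta c[alpha - 2*beta] * a[beta]
    for dilation M = 2; a (data) and c (mask) are dicts {integer index: value}.
    """
    out = {}
    for alpha in range(2 * min(a) + min(c), 2 * max(a) + max(c) + 1):
        val = sum(c.get(alpha - 2 * beta, 0.0) * a[beta] for beta in a)
        if val != 0.0:
            out[alpha] = val
    return out

# Chaikin's corner-cutting mask c = (1/4, 3/4, 3/4, 1/4); note the sum rules:
# the even and the odd coefficients each sum to one.
mask = {0: 0.25, 1: 0.75, 2: 0.75, 3: 0.25}
data = {0: 0.0, 1: 1.0, 2: 0.0}
print(subdivision_step(data, mask))
```

Repeated application of `subdivision_step` realizes $S^k a$, whose rate of convergence $\tau$ is the subject of Theorem 8 below.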
We denote by $\mathcal{V}$ the affine space of continuous compactly supported functions $f$ on $\mathbb{R}^s$ such that $\sum_{k\in\mathbb{Z}^s}f(\cdot+k)\equiv 1$; the cascade iterations started at $f_0\in\mathcal{V}$ have the same rate of convergence as that of the corresponding subdivision scheme [3,4]. In other words, $\tau$ is equal to the spectral radius of the operator $T|_W$. Moreover, one can restrict $W$ to functions supported on the set $K$ defined in (6); the rate of convergence stays the same [3,4]. In the isotropic case, it is known that $\tau=\rho(T|_W)$ with $T$ in (10). We derive an analogous result in the anisotropic case.

Theorem 8 The rate of convergence $\tau$ of a subdivision scheme in $C(\mathbb{R}^s)$ is equal to $\rho=\rho(\mathcal{T}|_W)$.

Proof. The proof of the reverse inequality $\tau\ge\rho$ is similar to the proof of Theorem 2. By Theorem A2, there exists $u\in W$ such that $\max\|T_{d_1}\cdots T_{d_k}u\|\ge C(u)\,\rho^k$, $k\in\mathbb{N}$. If we find $N-1$ functions $g_i\in W$, $\operatorname{supp}g_i\subset K$, $i=1,\dots,N-1$, such that at some point $z\in G$ the vectors $\{v_{g_i}(z)\}_{i=1}^{N-1}$ constitute a basis of the space $W$, then the claim follows. Indeed, the vector $u$ possesses a representation $u=\sum_{i=1}^{N-1}\gamma_i\,v_{g_i}(z)$, and hence $\max\|T_{d_1}\cdots T_{d_k}u\|$ does not exceed $\sum_{i=1}^{N-1}|\gamma_i|\,\|T^kg_i\|_{C(\mathbb{R}^s)}$. On the other hand, $\|T^kg_i\|_{C(\mathbb{R}^s)}\le\tau^k\,\|g_i\|_{C(\mathbb{R}^s)}$. Therefore, we obtain

$$C(u)\,\rho^k \;\le\; \Bigl(\,\sum_{i=1}^{N-1}|\gamma_i|\,\|g_i\|_{C(\mathbb{R}^s)}\Bigr)\,\tau^k.$$

The expression inside the brackets is independent of $k$; hence, taking $k\to\infty$, we obtain $\rho\le\tau$. To construct the functions $g_i$, we take an arbitrary $g\in C(\mathbb{R}^s)$ such that $\operatorname{supp}g\subset K\cap G$ and $g(z)=1$ at some point $z$. ✷

Moreover, in the isotropic case, if the refinable function $\varphi$ is stable, then the rate of convergence is related to the Hölder exponent of $\varphi$ by $\log_{1/\rho(M)}\tau=\alpha_\varphi$ [4]. For unstable refinable functions this may not be true; however, in the univariate case the convergence analysis can still be done as shown in [50]. The following example shows that, in the anisotropic case, the equality $\log_{1/\rho(M)}\tau=\alpha_\varphi$ may fail even if the refinable function is stable.

Remark 19 A result similar to Theorem 8 holds for subdivision schemes in $L_p(\mathbb{R}^s)$. Their rate of convergence is also equal to $\rho_p(\mathcal{T}|_W)$. The proof is essentially the same as that of Theorem 8, with the joint spectral radius replaced by the p-radius.
DEVELOPMENT OF A COMPUTER NETWORK OF THE REGIONAL OFFICE OF WATER RESOURCES IN POLTAVA REGION WITH AN INTELLIGENT DATABASE MANAGEMENT SYSTEM

The subject matter of the article is the process of selecting optimal solutions for improving a local computer network. The purpose is to update the existing local computer network of the Regional Office of Water Resources in Poltava region. The task is to justify the choice among various types of network applications and components. Applying knowledge of the characteristic properties of each component, the obtained results are used to combine them into a single network. Results. All relevant types of network applications and components were identified, and the most suitable process for updating and improving the computer network at the Regional Office of Water Resources in Poltava region was chosen. With the introduction of a new local computer network, the most widespread local network designs were compared, leading to the conclusion that a star topology is the optimal option for this enterprise. The article shows that the implementation of this computer network also increases network security and significantly increases the speed of fixing problems at any workstation without affecting overall network performance. To summarize the conclusions: in order to maximize network productivity, a local computer network with a dedicated server, i.e. a star topology, was selected; this increased network security and the speed of fixing problems at any workstation without affecting the overall health of the network.

Introduction

The modern Internet is perhaps the largest engineering system ever created by humankind. It contains millions of connected computers, communication lines, and switches, with billions of users connected through various data communication devices. Given that the Internet is so large and has so many different components, is it possible to understand how it works? Are there leading principles and structures that can serve as the basis for understanding such an incredibly large and complex system? And if so, is it worthwhile to study computer networks? To all these questions we confidently answer YES! In fact, our purpose in this article is to offer a modern introduction to the fast-growing field of computer networks, highlighting the information system and the principles needed to understand both today's and tomorrow's technologies.

Studying the general principles of computer networking will help you deal quickly with any specific network technology in the future. However, the well-known saying that "knowledge of a few principles frees one from memorizing many facts" should not be taken literally: a good specialist must, of course, know many details and facts. Knowledge of principles allows one to systematize these particular facts, link them into a coherent system, and thereby use them more consciously and efficiently. Of course, studying principles before studying specific technologies is not an easy task; therefore, in this article we consider general aspects of computer networks.

We organize our overview of computer networks as follows. After introducing basic terminology and some general aspects, we first consider the basic hardware components that make up a network.
We begin with the network periphery and overview the end systems and network applications that run on the network. Then we explore the network core by analyzing the communication links and switches that transmit data. We also examine access networks and the physical media that connect end systems to the network core. That is, we investigate different approaches to building computer networks and the problems they raise.

Analysis of common network building options. Historically, the main purpose of integrating computers into a network was resource sharing: users of computers connected to a network, or applications running on those computers, gain automatic access to the various resources of other computers on the network. For computers to be connected, they must be provided with external interfaces.

An interface, in a broad sense, is a formally defined logical and/or physical boundary between interacting independent objects. The interface specifies the parameters, procedures, and characteristics of the objects' interaction. Interfaces are divided into physical and logical ones.

A physical interface is determined by a set of electrical connections and signal characteristics. Usually it is a connector with a set of contacts, each of which has a specific purpose; for example, there can be a group of contacts for data transmission and a contact for data synchronization. A pair of connectors is linked by a cable consisting of a set of wires, each of which connects the corresponding contacts. In such cases, one speaks of creating a line or channel, a connection between two devices [1-4].

A logical interface is a set of information messages of a certain format exchanged between two devices or two programs, together with a set of rules defining the logic of exchanging these messages. The most commonly used interfaces are computer-to-computer and computer-to-peripheral [5-7].

A computer-to-computer interface allows two computers to share information. It is implemented by a pair of components on each side [8]:
• a hardware unit, called a network adapter or network interface card (NIC);
• a network interface card driver, a special program that controls the operation of the network interface card.

A computer-to-peripheral interface allows a computer to control the operation of a peripheral device. This interface is implemented:
• on the computer side, by an interface card and a peripheral device driver, similar to the network interface card and its driver;
• on the peripheral device side, by the device controller, usually a hardware unit that receives data from the computer, for instance, the bytes of information that need to be printed on paper [1].

The need to access a remote device may arise in the most varied applications: a text editor, a graphics editor, a database management system. Obviously, duplicating in each of these applications the functions common to all of them for organizing shared task execution would be excessive. A more efficient approach is one in which these functions are excluded from the applications and implemented as a pair of specialized software modules: the client and the server.
Summarizing this approach in relation to other types of shared resources, we give the following definitions.

A client is a module designed to form request messages and transmit them to the resources of a remote computer on behalf of various applications, and then to receive the results from the network and pass them to the relevant applications.

A server is a module that constantly waits for client requests to arrive from the network and, upon accepting a request, tries to serve it, as a rule with the participation of the local operating system; one server can serve the requests of several clients (one at a time or simultaneously).

Each service is associated with a specific type of network resource. The client and server modules that implement remote access to a given resource type form a network service [1].

A computer's operating system is often described as an interconnected set of system programs that provides efficient control of the computer's resources (memory, processor, external devices, files, etc.) and also provides the user with a convenient interface for working with the computer hardware and for application development. Speaking of a network operating system, we obviously need to extend the boundaries of the managed resources beyond a single computer.

A network operating system is a computer operating system that, in addition to managing local resources, provides users and applications with efficient and convenient access to the information and hardware resources of the other computers on the network.

Remote access to network resources is provided by:
• network services;
• means of transporting messages over the network (in the simplest case, network interface cards and their drivers).

Therefore, these modules must be added to an operating system for it to be called a network operating system. How rich a set of network services an operating system offers to end users, applications, and network administrators determines its position in the overall range of network operating systems.

In addition to network services, a network operating system must include software communication (transport) tools that, together with the hardware communication tools, support the transmission of the messages exchanged between the client and server parts of network services. The tasks of communication between computers on the network are carried out by drivers and protocol modules. They perform functions such as message generation, splitting messages into parts (packets, frames), converting computer names into numeric addresses, retransmitting messages in case of loss, and determining the route in a complex network.

Both network services and transport facilities can be integral (embedded) components of the operating system or exist as separate software products. A typical network operating system includes a large set of drivers and protocol modules, but the user generally has the ability to supplement this standard set with the programs he needs. The decision on how to implement the clients and servers of a network service, as well as the drivers and protocol modules, is made by developers taking into account a variety of considerations: technical, commercial, and even legal. A network service can be represented in the operating system by both (client and server) parts or by only one of them [1].
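The client/server pattern described above can be illustrated with a minimal Python sketch using the standard socket module; the port number, the request text, and the echo-style service are illustrative assumptions, not details of the enterprise network:

```python
import socket

def run_server(host="127.0.0.1", port=5000):
    """Server module: waits for a client request and serves it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((host, port))
        srv.listen(1)
        conn, addr = srv.accept()
        with conn:
            request = conn.recv(1024)          # receive the request message
            conn.sendall(b"OK: " + request)    # serve it and send the result

def run_client(host="127.0.0.1", port=5000):
    """Client module: forms a request, sends it, and returns the result."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((host, port))
        cli.sendall(b"list shared resources")
        return cli.recv(1024)
```

In practice the server and client run on different machines (or at least in different processes), with the server started first; the client's request then travels through exactly the kind of transport facilities, adapters, drivers, and protocol modules described above.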
An integral part of the enterprise's computing environment is the information system: a set of databases together with the hardware and software for storing, modifying, and retrieving information and for interacting with the user.

Information systems are divided into personal, group, and corporate ones. Personal information systems are intended for use by individual end users. Group systems are for collective use by members of a working group solving interrelated tasks against a common database. Corporate information systems operate at the scale of the enterprise and can support the coordinated work of its geographically distributed units. Group and corporate information systems involve connecting workstations (personal computers, terminals, etc.) to a computer network; this possibility is of great practical importance.

Information systems evolve over their long life cycle: the functions of an information system become more complex and new functions emerge. All this inevitably leads to changes in the structure of the database: new files and new fields in old files appear, and the types of some fields may change. Note that a program working with a data file directly (for example, one written in pure Python without a DBMS) is affected even when the changes occur only in fields it does not use directly: at the very least, the file description must be changed and the program updated accordingly.

The opportunity to ensure data independence arises because:
• the DBMS has information about the data structure (from the description in the DDL);
• DML statements, the language in which the applications' database operations are written, are executed by the DBMS against the database.

Data independence technology is based on the concept of a three-level database representation (ANSI/SPARC, 1975):
• the logical (middle) level: the conceptual representation of the database (the conceptual schema), a description of the structure of the subject-area database as a whole, without details of the physical storage structure;
• the physical (lower) level: the internal representation of the database (the internal schema), a description of the database storage structure, including access methods;
• the user-facing (upper) level: the external representations of the database (external schemas, subschemas), descriptions of the structure of database fragments local to the different subsystems of the information system and to the application processes of those subsystems; information system applications use only the corresponding external representations of the database.

The enterprise's network equipment also plays an important role, and this information must be taken into account when creating a computer network in an enterprise. A LAN switch is a device designed to connect multiple computer network hosts within a single segment. The chosen switch is a Zyxel GS1900-24E; data transfer rate: 1000 Mbps; connectors: 24 x RJ-45.

Twisted pair is a type of network cable with one or more pairs of insulated conductors twisted together (with a small number of turns per unit length) to reduce mutual interference when transmitting a signal, covered with a plastic sheath. The cable is connected to network devices using the RJ-45 connector and supports data transmission over a distance of about 100 meters.

RJ-45 is a physical interface commonly used to connect computer networks over twisted pair through a network switch, or when building a network of two computers connected directly through their network cards.

In accordance with the customer's requirements, a market survey of the necessary hardware and software for the local computer network was carried out (Tables 1, 2, 3), together with an estimate of the total project cost (Table 4).

There are many characteristics associated with the transfer of traffic through physical channels.
• The offered load is the data stream coming from the user to the network input.
The offered load can be characterized by the speed of the data stream entering the network, in bits per second (or kilobits, megabits, etc.).
• The data flow rate (throughput) is the actual speed of the data stream that has passed through the network. This speed may be lower than the speed of the offered load, since data in the network may be corrupted or lost.
• The communication channel capacity, also called bandwidth, is the maximum possible information flow rate over the channel. The specificity of this characteristic is that it reflects not only the parameters of the physical transmission medium but also the features of the chosen method of transmitting discrete information in this medium. For example, the capacity of a classic Ethernet channel over optical fiber is 10 Mbps. This is the maximum possible speed for the combination of Ethernet technology and optical fiber: Fast Ethernet technology provides data transfer over the same optical fiber with a maximum speed of 100 Mbps, and Gigabit Ethernet with 1000 Mbps. The transmitter of a communication device must operate at a rate equal to the channel capacity; this rate is sometimes called the bit rate of the transmitter. A numerical illustration of these capacity figures is given in the sketch below.

The term "bandwidth" can be misleading because it is used in two different senses. First, it can characterize the transmission medium itself; second, it is used synonymously with the term "communication link capacity" [2].

Even when information appears to flow from one computer to another in a single direction, it must in fact be transmitted in both directions: besides the main data stream that is of interest to the recipient, a reverse flow of acknowledgements is formed. Physical communication links are divided into several types depending on whether they can transmit information in both directions or not.
• A duplex channel provides simultaneous information flow in both directions. A duplex channel can consist of two physical media, each of which is used to transmit information in only one direction. It is also possible for one medium to serve for the simultaneous transmission of counter flows; in this case, additional methods are used to separate each stream from the total signal.
• A simplex channel allows information to be transmitted in one direction only [2].
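A small illustration of the capacity figures quoted above: the time needed to move a 700 MB file at the nominal rates of Ethernet, Fast Ethernet, and Gigabit Ethernet, assuming (unrealistically) that the link runs at full capacity with no protocol overhead:

```python
# 700 MB expressed in bits (1 MB taken as 10**6 bytes for simplicity).
FILE_BITS = 700 * 8 * 10**6

for name, mbps in [("Ethernet", 10), ("Fast Ethernet", 100), ("Gigabit Ethernet", 1000)]:
    seconds = FILE_BITS / (mbps * 10**6)
    print(f"{name:18s} {seconds:8.1f} s")
```

Real throughput is always lower than the nominal capacity, as the distinction drawn above between offered load, throughput, and capacity makes clear.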
The "ring" topology usually has high resistance to overload, provides reliable work with large information flows transmitted over the network, there are usually no conflicts in it, nor is it a mandatory central subscriber that can be overwhelmed by large information flows. [3] The "star" topology is a topology with clearly defined host, to which all other subscribers connect. The entire information exchange goes exclusively through a host computer, which thus has more load, so it cannot deal with anything other than the network. It is clear that the host subscriber network equipment should be more complex than the peripheral subscribers' equipment. In this case, it is not necessary to speak about the subscribers' equality. As a rule, the host computer is the most powerful, and it is entrusted with all the functions of managing the exchange. Conflicts in the "star" network topology essentially are impossible, because management is fully centralized, there is nothing to confront. If we talk about the stability of the star concerning to the computers failures, the peripheral computer failure does not affect the functioning of the remaining network part, but any host computer failure makes the network completely unusable. Therefore, special measures should be taken to increase the reliability of the host computer and its network equipment. Any cable breakdown or short circuit in the "star" topology violates the exchange with the only one computer, and all other computers can normally continue to work [3]. The central topology element is a passive wire, to which several computers are connected according to the scheme. The main advantages of this topology are its simplicity, low cost budget and easy nodes connection to the network. The disadvantages are low reliability (any defect in the wire immediately brings the entire network out of order). At the same time, small networks, as a rule, have a typical topology -a star, a ring, or a common bus. The largest networks are characterized by polyface networks between computers. In such networks, it is possible to identify individual arbitrarily connected fragments that have a typical topology, so they are a mixed network topology [2]. After analyzing all aspects and parameters of each of the network elements, we came to a common conclusion on optimal equipment. All possible ways of creating networks and their characteristics (including the best and worst sides) have been examined and provided. So, this article is, an innovation for a person who just begins getting acquainted with the networks (Fig. 1). Also, all products characteristics that are required to create a local network will be recommended hereinafter. Including software cost evaluation and hardware. Conclusions In this paper, all possible types of network applications and components were identified and the most optimal process for updating and improving the computer network at the Regional Office of Water Resources in Poltava region was chosen. Also, with the introduction of a new local computer network, the possibility of choosing from the most widespread local networks was analyzed and having come to a conclusion that the local computer network topology of the star is the most optimal option for this enterprise. The article analyzes and points out that the implementation of this computer network also increases the network security, and significantly increases the speed of fixing the emerging problems in any workstation without affecting the overall network performance.
Extended T2 tests for longitudinal family data in whole genome sequencing studies

Family data and rare variants are two key features of whole genome sequencing analysis for hunting the missing heritability of common human diseases. Recently, Zhu and Xiong proposed generalized T2 tests that combine rare variant analysis and family data analysis. In a similar fashion, we developed extended T2 tests for longitudinal whole genome sequencing data in family-based association studies. The new methods simultaneously incorporate three correlation sources: linkage disequilibrium, pedigree structure, and the repeated measures of covariates. We assess and compare these methods using the simulated data from Genetic Analysis Workshop 18. We show that, in general, the extended T2 tests incorporating longitudinal repeated measures have higher power than the single-time-point T2 tests in detecting hypertension-associated genome segments.

Background

Compared with traditional genome-wide association studies (GWAS), whole genome sequencing (WGS) is a more efficient way of finding the missing heritability of diseases [1]. While GWAS are mostly based on microarray genotyping, which can discover only common genetic variants, WGS is able to reveal rare and structural variants, which are crucial factors behind disease phenotypes [2]. As the cost of sequencing decreases significantly, we expect that WGS will become increasingly predominant in the hunt for novel disease genes.

Most of the recent discoveries from sequencing studies were based on the Mendelian trait model [3]. Genetic association studies based on the complex trait model are challenging because of limited sample sizes as well as the new properties of sequencing data. WGS data are distinct from GWAS data in two major aspects. First, WGS provides a huge number of rare variants. Because of their very small minor allele frequencies (MAFs), association tests between single rare variants and the trait are underpowered and unreliable even when the allelic effects are large [4]. Second, family designs play a critical role in WGS. Because of its relatively high cost, WGS tends to exploit families of patients, so that rare causal variants are likely enriched through cotransmission with the disease [5]. Furthermore, the pedigree structure allows statistical imputation of genotypes at no experimental cost, which potentially increases the statistical power [6,7].

In a recent celebrated work, Zhu and Xiong proposed a set of generalized T2 tests for family-based WGS data [8]. These methods simultaneously address the correlations among genetic variants (i.e., linkage disequilibrium [LD]) and the correlations among family members (i.e., kinship). Rare-variant collapsing procedures [9,10] are also integrated into the tests. However, these methods cannot incorporate covariates and do not address the correlation structure of longitudinal repeated measures. In this study, we further extended the methodology of the T2 tests to address these limits. By applying these methods to an analysis of the Genetic Analysis Workshop 18 (GAW18) simulation data, we showed that the asymptotic null distributions of Zhu and Xiong [8] are problematic in controlling the type I error rate, and that our extended methods are likely more powerful for longitudinal data.
Generalized T2 tests for family data

Zhu and Xiong [8] showed that the covariance due to both LD and kinship can be expressed explicitly as a Kronecker product of the corresponding covariance matrices. Following the idea of Hotelling's T2 test [11,12], the authors proposed a generalized T2 test that incorporates these covariance matrices, which are estimated separately from the same data. Depending on the strategy for collapsing rare variants, we consider three generalized T2 tests of Zhu and Xiong:
- T2: the genotypes of rare variants between adjacent common variants are summed, and one covariance matrix is estimated for both common and collapsed rare variants.
- CMC.ZXpaper (CMC test): the rare variants are collapsed in the same way as above, but the covariance matrices are estimated separately for common and rare variants (assuming they are uncorrelated).
- CMC.ZXcode: rare variants are collapsed by the maximum of their genotypes, and one covariance matrix is estimated for both common and collapsed rare variants. This strategy follows the R function pedCMC of Zhu and Xiong (https://sph.uth.edu/hgc/faculty/xiong/software-D.html).

Extended T2 test for family data with longitudinal repeated measures

Building on the idea of Zhu and Xiong, we further extend the generalized T2 tests to account for longitudinally repeated covariates. Figure 1 shows the data structure and the idea of the extension. Specifically, the extended T2 tests compare the blocks of common variants, rare variants, and covariates with repeated measures in cases and in controls, while simultaneously accounting for the correlations among genetic factors, among pedigree individuals, and among longitudinal repeated measures. The response is the occurrence of the event at any of the measurement points.

[Figure 1. Data structure for composing the extended T2 tests. Data contain 3 blocks: common variants, rare variants, and longitudinal covariate measures. The statistics integrate the correlations among both rows and columns, and test whether there exists a significant difference between the row vector mean of the cases and that of the controls.]

Following the notation in Figure 1, let $n_c$ be the number of cases, $n_d$ the number of controls, and $n = n_c + n_d$. The genotype column vector of the $t$th common variant is $Z_t = (Z_{t1}, \ldots, Z_{tn})'$, and the aligned column vector of all $T$ common variants is $Z = (Z_1', \ldots, Z_T')'$. Similarly, for the collapsed genotypes of rare variants, the genotype column vector of the $s$th rare variant is $V_s = (V_{s1}, \ldots, V_{sn})'$, and $V = (V_1', \ldots, V_S')'$ for a total of $S$ rare variants. For the covariates with longitudinal repeated measures, the column vector of the $c$th covariate at the $j$th measurement point is $A_{cj} = (A_{cj,1}, \ldots, A_{cj,n})'$, and the aligned column vector is $A = (A_{11}', \ldots, A_{CJ}')'$ for a total of $C$ covariates, each measured $J$ times. The row vectors are denoted analogously: for $i = 1, \ldots, n$, the vectors $Z_i$, $V_i$, and $A_i$ collect the $i$th individual's values for the common variants, rare variants, and longitudinal covariates, respectively. The row average in cases is $\bar{Z}_c = \sum_{i=1}^{n_c} Z_i / n_c$, and that in controls is $\bar{Z}_d = \sum_{i=n_c+1}^{n} Z_i / n_d$; the row averages for rare variants and covariates are obtained analogously.

The idea of the extended T2 test is simply to compare the difference between the row averages of the case blocks and of the control blocks. Let $\eta = (Z', V', A')'$. The difference between row averages can be written in terms of $\eta$: define $D_r = (u_1, \ldots, u_n)'$, with $u_i = 1$ for cases $i = 1, \ldots, n_c$ and $u_i = 0$ for controls $i = n_c+1, \ldots, n$, denote by $\mathbf{1}$ a vector of ones of length $n$ and by $I^{(k)}$ an identity matrix of dimension $k$, and set $w = D_r/n_c - (\mathbf{1} - D_r)/n_d$. Then the vector of row-average differences is
$$d = \left(I^{(T+S+CJ)} \otimes w'\right) \eta .$$
Following the idea of the generalized T2 test, the extended T2 statistic is
$$T^2 = \frac{n_c n_d}{n}\, d'\, \hat{\Sigma}^{-1} d ,$$
where $\hat{\Sigma}$ is an estimate of the covariance matrix of the block differences.
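For readers who want to see the pieces fit together numerically, the following NumPy sketch assembles a toy version of the statistic. It is a simplified illustration under the stated Kronecker assumption, not the authors' implementation; the dimensions, the identity kinship matrix, and the random data are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
n_c, n_d = 30, 30
n, p = n_c + n_d, 5                    # p plays the role of T + S + C*J
X = rng.normal(size=(n, p))            # rows: individuals; columns: stacked variables

Lam = np.cov(X, rowvar=False)          # p x p covariance among variables (e.g., LD)
Phi = np.eye(n)                        # n x n kinship matrix (identity: unrelateds)

# Var(eta), with eta stacked variable-by-variable, is the Kronecker product:
Sigma_eta = np.kron(Lam, Phi)          # (p*n) x (p*n)

# Contrast vector w and the row-average difference d = (I kron w') eta:
w = np.concatenate([np.full(n_c, 1 / n_c), np.full(n_d, -1 / n_d)])
d = X.T @ w                            # case means minus control means, per variable

# Kronecker algebra collapses Var(d) to Lam * (w' Phi w); verify numerically:
M = np.kron(np.eye(p), w.reshape(-1, 1))          # (p*n) x p
assert np.allclose(M.T @ Sigma_eta @ M, Lam * (w @ Phi @ w))

# With Phi = I, w' Phi w = 1/n_c + 1/n_d, recovering the classical Hotelling scaling.
T2 = d @ np.linalg.solve(Lam * (w @ Phi @ w), d)
print(round(float(T2), 3))
```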
The key problem is to estimate $\Sigma$. Following the assumption of Zhu and Xiong [8] that $Z$ and $V$ are independent, we consider two cases. In the first case, assume $A$ is also independent of $Z$ and $V$. Then $\Sigma_Z = \mathrm{Var}(Z) = \Lambda_Z \otimes \Phi$ and $\Sigma_V = \mathrm{Var}(V) = \Lambda_V \otimes \Phi$, where $\Lambda_Z$ and $\Lambda_V$ are the covariance matrices among the elements of $Z_i$ and $V_i$, respectively (e.g., the LD among the genetic variants), $\Phi$ is the kinship matrix, and $\otimes$ denotes the Kronecker product. For the covariate block, we consider $\Sigma_A = \mathrm{Var}(A) = \Lambda_A \otimes \Phi^*$, where $\Lambda_A$ is the covariance matrix among the elements of $A_i$, and $\Phi^*$ is a matrix that captures the correlations among individuals in terms of environmental covariates. To better account for the heterogeneity of the data in cases and in controls, we applied the method in Hotelling's T2 test for estimating the covariance matrix (which differs from equation (6) in Ref. [8]). With these independence assumptions, $\mathrm{Var}(\eta)$ — and hence the covariance of the row-average differences — takes a block-diagonal form over the three blocks. We consider two simplifying assumptions: (a) $\Phi^* = I$, indicating that covariate values are independent across individuals, the individual dependence having been captured by the genetics; and (b) $\Phi^* = \Phi$, indicating that covariate values among individuals follow a dependence pattern similar to the genetic one (e.g., children may be more likely to smoke if their parents do, or the ages of children are correlated with the ages of parents). According to the rare-variant collapsing strategies of the generalized T2 tests above, the corresponding extended T2 tests are denoted T2.longi, CMC.ZXpaper.longi, and CMC.ZXcode.longi, respectively.

Asymptotic and permutation tests

Zhu and Xiong derived an asymptotic chi-square distribution for the null. In their paper [8], the degrees of freedom (DF) equal the number of variants; in their R code, the DF equal the rank of the data matrix. The latter is better but still gives inflated p values, as shown below. We therefore applied a permutation test so that the type I error rate is well controlled. Specifically, let $T^2_g$ and $T^2_{gl}$, $l = 1, \ldots, L$, denote the test statistics of the $g$th genome window from the original data and from the $l$th permutation, respectively. The empirical p value for the $g$th window is
$$p_g = \#\left\{T^2_{gl} \geq T^2_g,\ l = 1, \ldots, L\right\}/L ,$$
where $L = 1000$. Because the target is to find associations with the genetic variants, not with the covariates, the permutations are applied only to the genotype data, retaining the relationship between the response and the covariates.
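A sketch of this permutation scheme, on toy data, might look as follows; the function names are ours and the statistic is a simplified Hotelling-type version of the tests above (only the genotype rows are shuffled, so the response-covariate relationship is preserved):

```python
import numpy as np

rng = np.random.default_rng(1)

def t2_stat(G, y):
    """Hotelling-type statistic comparing genotype means in cases vs controls."""
    d = G[y == 1].mean(axis=0) - G[y == 0].mean(axis=0)
    S = np.cov(G, rowvar=False) + 1e-8 * np.eye(G.shape[1])  # small ridge for stability
    n1, n0 = (y == 1).sum(), (y == 0).sum()
    return (n1 * n0 / len(y)) * d @ np.linalg.solve(S, d)

def permutation_pvalue(G, y, L=1000):
    t_obs = t2_stat(G, y)
    count = 0
    for _ in range(L):
        G_perm = G[rng.permutation(len(y))]   # shuffle genotype rows only
        count += t2_stat(G_perm, y) >= t_obs
    return count / L

G = rng.integers(0, 3, size=(60, 8)).astype(float)  # 60 individuals, 8 variants
y = np.repeat([1, 0], 30)                           # 30 cases, 30 controls
print(permutation_pvalue(G, y, L=200))
```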
Results

To evaluate the above methods, we used the "dose" genotype data of 1,215,399 single-nucleotide variants (SNVs) on chromosome 3 and the 200 simulation replicates of hypertension outcomes and covariates (age, hypertension medication use, smoking status). As an arbitrary yet simple way to group variants, we split chr3 into 19,080 windows, each 10 kilobase pairs (kbp) long. In each window, rare variants (MAF < 0.05) between adjacent common variants were collapsed, leaving 654,415 genetic factors (common or rare variants, or collapsed rare-variant groups) to be analyzed. The average number of genetic factors per window is 34.3, the median 32, the minimum 1, and the maximum 330. For the simulated phenotypes, the number of individuals is 849 in 20 families, with family sizes from 21 to 74 (mean 42.45, median 36.5). There are 188 simulated true SNVs contained in 129 true windows on chr3 (1, 3, 7, 32, and 86 windows contain 5, 4, 3, 2, and 1 true SNVs, respectively). The knowledge of these true SNVs was used only for evaluating the power of the association tests, not for designing the data analysis strategy.

To assess the asymptotic null distributions of the tests provided in Zhu and Xiong [8], we obtained the asymptotic p values of these tests for all false windows on chr3. The Q-Q plot in Figure 2 shows that all three methods have inflated p values, with large genomic inflation factors λ [13]. For example, at a p value cutoff of 0.05, the actual (empirical) error rate is approximately 0.1. At the same time, the results below show that the permutation test controls the type I error rate well. Thus, the inflated type I error rate is likely caused by inappropriate asymptotic null distributions rather than by possible stratification.

We studied the power of these tests in detecting true windows over chr3. Based on the phenotype data in simulation replicate 1, the right panel of Figure 3 shows the receiver operating characteristic (ROC) curve for power (estimated by the true positive rate) over a range of p value cutoffs. In general, the power is low at small or moderate p values, indicating that the sample size is still relatively too small for detecting the many weak genetic effects simulated in the data. At the same time, it is clear that the three extended T2 tests incorporating longitudinal information are significantly better than the generalized T2 tests, which consider only the measures at the first time point. Because the two setups $\Phi^* = I$ and $\Phi^* = \Phi$ led to similar results, we report only $\Phi^* = I$ for simplicity. The left panel of Figure 3 shows that the permutation test controls the type I error rates well for all methods. To compare the overall capabilities of these tests, we also studied their power (i.e., true positive rates) in detecting each window over all 200 simulation replicates; the type I error rate is the corresponding percentage for windows in which the genotypes were permuted to destroy the genetic associations.

Discussion

In the GAW18 simulation data, true SNVs are always located in genes. Using genes as windows to group SNVs may concentrate the true SNVs and has the potential to improve detection power. However, the premise of WGS, as opposed to exome sequencing, is that disease-related genetic factors may lie outside genes. We therefore did not use the knowledge that true SNVs are in genes; instead, we evaluated the methods with fixed genome-segment windows.

There are several limitations of, and future research topics arising from, the current work. First, matrix estimation is a key issue in this methodology. Good estimation of the covariance matrices and their inverses can better incorporate the correlation structures and potentially improve performance. Here we simply adopted the variance matrix estimate of Hotelling's T2 test, which is a maximum likelihood estimate if observations are independent; unfortunately, independence does not hold for family data in the first place. Beyond the correlation issue, for a high-dimensional matrix with a potentially sparse structure there are better estimates of the covariance matrix and its inverse [14]. Second, the permutation test is relatively slow, especially for handling the large amounts of data in WGS.
It would be desirable to derive more accurate asymptotic distributions for fast and precise p value calculation. Third, modifications of these tests are needed to handle missing data and unequal numbers of repeated measures, which are common problems.

Conclusions

We have extended Zhu and Xiong's [8] generalized T2 tests to incorporate covariates with longitudinal repeated measures. These methods account for three sources of correlation structure: among genetic variants, among family members, and among time-series observations. Compared with the T2 test methods for snapshot observations, the new methods have higher power to detect hypertension-related genome segments in the GAW18 simulation data.
POSSIBILITIES OF METALS EXTRACTION FROM SPENT METALLIC AUTOMOTIVE CATALYTIC CONVERTERS BY USING BIOMETALLURGICAL METHOD

MOŻLIWOŚCI WYKORZYSTANIA METOD BIOMETALURGICZNYCH DO EKSTRAKCJI METALI ZE ZUŻYTYCH METALOWYCH KATALIZATORÓW SAMOCHODOWYCH

The main task of automotive catalytic converters is to reduce the amount of harmful components in exhaust gases. Metallic catalytic converters are an alternative to standard ceramic catalytic converters. Metallic carriers are usually made from FeCrAl steel, which is covered by a layer of Platinum Group Metals (PGMs) acting as the catalyst. Many methods are used worldwide for the recovery of platinum from ceramic carriers, but the recovery of platinum and other metals from metallic carriers is poorly described. This article presents the results of preliminary experiments on the biooxidation of metals (Fe, Cr and Al) from spent catalytic converters with metallic carriers, using bacteria of the Acidithiobacillus genus.

Introduction

An automotive catalytic converter is built mainly of a ceramic (Al2O3) or metallic FeCrAl steel carrier with a porous surface onto which elements of the Platinum Group Metals (PGMs) are deposited. These elements — platinum, palladium and rhodium — play the catalytic role [1,2]. Currently, the composition of the catalytic carrier varies significantly depending on the engine displacement and the type of fuel used. The amount of Pt, Pd and Rh in each carrier can range from 1-5 grams for a small car to 12-15 grams for a truck.

About 4 percent of automotive catalytic converters on the market contain a metallic carrier, which is made of corrugated FeCrAl foils rolled into a spiraling cylinder shape. Such a construction increases the contact surface between the exhaust gases and the precious metals, which catalyze the reactions and significantly speed up the oxidation of carbon oxides and hydrocarbons together with the simultaneous reduction of nitrogen oxides [3,4]. As a result, the substances appearing at the outlet of the converter — carbon dioxide, water and nitrogen — are neutral to the environment [4-6]. The typical alloy used for catalytic converter applications is a ferritic stainless steel with 17-22% chromium and 4-6% aluminium. Rare Earth Elements (REE) are also added to the FeCrAl alloy, in an amount of about 0.01%; commonly used REE are La, Ce, Pr, Nd, Sm, Gd, Dy, Y, and Er. A metallic carrier is used rather than a ceramic one in certain applications to reduce light-off time, back-pressure, etc. [7].

Catalytic converters with metallic carriers, called Metal Substrate Converters (MSC), were originally designed for sports and racing cars, where low back pressure and reliability under continuous high load are required. Because metal heats up much faster than ceramics, metallic catalysts rapidly reach working temperature and quickly begin purifying the exhaust gases. At the same time, they are more resistant to damage caused by thermal shock and high temperature. A further advantage of metal-supported catalysts is their better thermal conductivity, obtained due to the high content of platinum [8]. The differences between the two types of carrier are shown in Table 1.

Although there are many pyro- and hydrometallurgical methods for treating used catalytic converters with ceramic carriers, there is little information in the literature on metal recovery from spent automotive catalysts with metallic carriers. The metallic carrier is ground by passing it through one or more types of shredders or hammer mills. This step is preceded by separation of the various fractions of steel scrap (such as the foil fraction) from the fraction containing precious metals. Umicore Precious Metals Refining (Maxton, USA) — one of the world's largest precious metals recycling facilities — has a dedicated, fully automated shredding line for metallic carriers from spent automotive catalysts. This technology achieves a high recovery yield for the washcoat [9,10]. However, the crushing process can cause a problem: the fine dust obtained in this process is rich in precious metals [11]. In addition to the classical pyro- and hydrometallurgical methods for processing spent catalysts, attempts at metal extraction involving microorganisms are also being undertaken [12,13].
The most widely discussed works concern the extraction of metals such as Mo, Co, Al, Ni, V and W from spent petroleum/refinery catalysts [14]; a review of bioleaching results for these metals is presented extensively in [14]. There are few works on the possibility of using bacteria to leach metals from automotive catalytic converters. The one available refers to bioleaching of platinum from ceramic catalysts [15]; there are no data on the possibility of extracting metals from metallic converters. We therefore undertook attempts to extract metals such as Fe, Cr and Al from a spent metallic catalyst by bioleaching with a mixed culture of Acidithiobacillus bacteria. The aim of the research was to transfer iron, chromium and aluminium from the solid phase into solution in the presence of sulfuric acid and Fe³⁺, a product of bacterial metabolism.

Experimental method

Waste material. The research was conducted on waste material in the form of spent Metal Substrate Converters (MSC). The material was cut and subjected to bioleaching experiments in particle size fractions of 2 mm. The metal content of the sample was determined by atomic absorption spectrometry (AAS); Table 1 shows the composition.

Analysis. The changes in Fe, Cr and Al concentrations in solution, the oxidation-reduction potential (ORP) and the pH were analysed during the experiments. To quantify the amounts of Fe, Cr and Al dissolved in the leaching media, a 5 cm³ sample was filtered and the filtrate analysed with an atomic absorption spectrophotometer (SpectrAA 20 PLUS, Varian). A 1 cm³ sample of solution was also taken to determine the ferric ion concentration by spectrophotometer (Unicam UV/Vis Spectrometer UV 4).

Results and discussion

The bioleaching process was conducted in distilled water acidified by the addition of 10 M sulfuric acid, with a mixed culture of A. ferrooxidans and A. thiooxidans bacteria either adapted or unadapted to high metal concentrations. Chemical leaching of iron, chromium and aluminium from the waste was carried out under identical conditions for comparison. The results are shown in Fig. 1 as the dynamic change of the Fe, Cr and Al concentrations during leaching, at an initial pH of 1.5.

The iron concentrations in the biological samples with adapted and unadapted bacteria remained at comparable levels (Fig. 1). The maximum concentration of iron leached in this way was found on the 14th day and was about 235 mg/dm³ with adapted and 211 mg/dm³ with unadapted bacteria. The results of the control (chemical) leaching experiments were significantly lower (0.63-1.2 mg/dm³). After the maximum total iron (Fe_total) concentration was reached in the biological tests (day 14), it declined over the following days. This relationship is reflected in the course of the pH and Fe³⁺ concentration changes. The initial increase in pH (Fig. 2) can be attributed to the oxidation of Fe²⁺ to Fe³⁺ by bacterial oxidation (Eq. 1), which is accompanied by the consumption of acid and by the dissolution of the waste [16-20]:

4Fe²⁺ + O₂ + 4H⁺ → 4Fe³⁺ + 2H₂O    (1)

The initial Fe³⁺ concentration was maintained at about 91.5 mg/dm³, which contributed to the leaching of total iron. Together with the observed decrease in Fe³⁺ concentration (on the 20th day of bioleaching) and a parallel decline in pH, the dissolution slowed and the Fe_total concentration in solution showed a downward trend. Fe³⁺ was not detected in the chemical leaching tests (control experiments).
This indicates that the oxidation of Fe from the waste is driven mainly by the biological process. An increasing trend in aluminium concentration was observed during leaching, with comparable degrees of Al transfer to solution in the biological and control samples: 10.28 mg/dm³ and 8.8 mg/dm³, respectively, by day 96. The effect of bacterial oxidation on aluminium extraction was thus minimal; the main roles in the leaching process are played by the pH and the chemical mechanism of oxidation. Increasing pH (Fig. 2) favoured the extraction of aluminium in both the chemical and the biological tests. A low dissolution of Al (32%) was likewise observed by Ilyas et al. [21] during column leaching of metals with acidophilic heterotrophs; those results indicated that the dissolution of aluminium was contributed to by both acid leaching and bioleaching.

For chromium, the lowest metal concentrations in solution were obtained in both the biological and the chemical leaching. The chromium concentration in tests with unadapted bacteria remained at the level of 0.17 mg/dm³. An increase in chromium transfer to solution was observed on day 96 for the sample with adapted bacteria (7.66 mg/dm³). This behaviour indicates the activation of a further adaptation process by the bacteria, dependent not only on the metal concentration but also on the cumulative contact time.

Conclusion

On the basis of Fe bioleaching from the spent metallic catalytic converter, it was found that iron extraction with adapted and unadapted bacteria proceeds similarly, and that the bacteria play the crucial role in this process. By contrast, for aluminium only slightly more metal (by about 20%) was obtained through bacterial activity than in the control test, where only chemical leaching took place. Clearly better results were achieved for chromium, where the process efficiency with adapted bacteria grew with contact time. Overall, it can be concluded that the oxidation of these metals, which governs the degree of their transfer into solution, involves both biological oxidation (especially for Fe and Cr) and chemical oxidation (in the case of Al).
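When working with such AAS readings, it is often convenient to convert a leachate concentration into a percent extraction. The helper below is a generic sketch with illustrative numbers; the solution volume, sample mass, and Fe mass fraction are assumptions, not values taken from these experiments:

```python
# Convert an AAS concentration reading into percent of a metal extracted.
def percent_extracted(conc_mg_per_dm3: float,
                      solution_volume_dm3: float,
                      sample_mass_g: float,
                      metal_fraction: float) -> float:
    """Percent of the metal leached into solution.

    conc_mg_per_dm3 - measured concentration in the leachate (mg/dm^3)
    metal_fraction  - mass fraction of the metal in the solid sample (0-1)
    """
    leached_mg = conc_mg_per_dm3 * solution_volume_dm3
    total_mg = sample_mass_g * metal_fraction * 1000.0
    return 100.0 * leached_mg / total_mg

# Example: 235 mg/dm^3 Fe on day 14 in an assumed 0.25 dm^3 of leachate, for an
# assumed 1 g sample containing ~70% Fe (typical of FeCrAl steels).
print(f"{percent_extracted(235.0, 0.25, 1.0, 0.70):.1f} % Fe extracted")  # ~8.4 %
```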
Electrographic Features of Spontaneous Recurrent Seizures in a Mouse Model of Extended Hippocampal Kindling

Abstract

Epilepsy is a chronic neurological disorder characterized by spontaneous recurrent seizures (SRS) and comorbidities. Kindling through repetitive brief stimulation of a limbic structure is a commonly used model of temporal lobe epilepsy. In particular, extended kindling over a period of up to a few months can induce SRS, which may simulate the slowly evolving epileptogenesis of temporal lobe epilepsy. Currently, the electroencephalographic (EEG) features of SRS in rodent models of extended kindling remain to be detailed. We explored this using a mouse model of extended hippocampal kindling. Intracranial EEG recordings were made from the kindled hippocampus and an unstimulated hippocampal, neocortical, piriform, entorhinal, or thalamic area in individual mice. Spontaneous EEG discharges with concurrent low-voltage fast onsets were observed from the two corresponding areas in nearly all SRS detected, irrespective of associated motor seizures. Examined in brain slices, epileptiform discharges were induced by alkaline artificial cerebrospinal fluid in the hippocampal CA3, piriform, and entorhinal cortical areas of extended kindled mice but not control mice. Together, these in vivo and in vitro observations suggest that epileptic activity involving a macroscopic network may generate concurrent discharges in forebrain areas and initiate SRS in hippocampally kindled mice.

Introduction

Epilepsy is a chronic neurological disorder characterized by unprovoked or spontaneous recurrent seizures (SRS) and comorbidities. Temporal lobe epilepsy is the most common type of epilepsy seen in adult and aging populations and is highly diverse in its etiologies and electro-clinical manifestations (Engel Jr 1996; Ferlazzo et al. 2016). Currently, about one third of epilepsy patients are resistant to treatment with common antiepileptic drugs, and only a small portion of these patients are candidates for surgery or alternative treatments such as vagal nerve or deep brain stimulation. It is therefore imperative to understand the pathogenesis of epilepsy to yield more efficacious antiepileptic treatments.

Kindling through repetitive brief electrical stimulation of a limbic structure has long been used as a model of temporal lobe epilepsy (see reviews by Gorter et al. 2016; Löscher 2016; Sutula and Kotloski 2017). While kindling via the classic or standard protocol lasting a few weeks does not generally induce SRS, overkindling, or extended kindling, which applies ≥100 stimulations over a period of up to a few months, can induce SRS. Specifically, extended kindling has induced SRS in monkeys (Wada et al. 1975; Wada and Osawa 1976), dogs (Wauquier et al. 1979), cats (Wada et al. 1974; Gotman 1984; Shouse et al. 1990, 1992; Hiyoshi et al. 1993), rats (Pinel and Rovner 1978a, 1978b; Milgram et al. 1995; Michael et al. 1998; Sayin et al. 2003; Brandt et al. 2004), and mice (Song et al. 2018b). Unlike commonly used models that induce acute status epilepticus by an application of kainate/pilocarpine or intensive brain electrical stimulation, with SRS appearing subsequently (Mazarati et al. 1998; Dudek and Staley 2017; Gorter and van Vliet 2017; Henshall 2017; Kelly and Coulter 2017), SRS development following extended kindling is a slowly evolving process without initial status epilepticus (Aronica et al. 2017; Sutula and Kotloski 2017).
As such, the extended kindling model may complement the status epilepticus and other SRS models (Navarro Mora et al. 2009; Walker et al. 2017) in studying the diverse epileptogenic processes relevant to human temporal lobe epilepsy (Engel Jr 1996; Ferlazzo et al. 2016).

The electroencephalographic (EEG) features of SRS have been characterized in kindled monkeys and cats via simultaneous intracranial recordings from multiple brain structures (Wada et al. 1975; Wada and Osawa 1976; Hiyoshi et al. 1993). In prefrontal- or amygdala-kindled monkeys, focal motor seizures at stage 2 of the Racine scale (Racine 1972) featured unilateral discharges in hippocampal, prefrontal, and piriform cortical areas, whereas generalized seizures at Racine stage 5 manifested concurrent bilateral discharges in multiple cortical and subcortical structures (Wada et al. 1975; Wada and Osawa 1976). In amygdala-kindled cats, focal motor seizures at Racine stage 1-2 featured cortical discharges unilateral to the kindled amygdala with minimal propagation to the subcortical recording sites, while generalized stage 5 motor seizures resulted from discharges that initially arose from the kindled or unstimulated amygdala and the ipsilateral dorsomedial thalamus and globus pallidus and then spread, with evident time delays, to other bilateral structures (Hiyoshi et al. 1993).

EEG features of SRS have also been documented in rodent models of extended kindling. In amygdala-kindled rats, focal stage 0-2 motor seizures were associated with discharges in the kindled amygdala (Brandt et al. 2004) or the ipsilateral dentate gyrus (DG) (Michael et al. 1998), and generalized stage 6-7 motor seizures were associated with discharges in the bilateral amygdala (Pinel and Rovner 1978a), the kindled amygdala (Michael et al. 1998; Brandt et al. 2004), or the ipsilateral DG (Michael et al. 1998). Generalized stage 3-5 motor seizures in association with kindled hippocampal discharges have been observed in hippocampally kindled mice (Song et al. 2018a). Overall, however, EEG discharges from the kindled and unstimulated sites in individual rats or mice remain to be detailed. It is unknown whether generalized motor seizures are initiated by concurrent bilateral discharges or by discharges that originate from the kindled site and then spread to other brain regions, as demonstrated in kindled monkeys (Wada et al. 1975; Wada and Osawa 1976) and cats (Hiyoshi et al. 1993).

Mouse models have been increasingly employed in epilepsy research, largely owing to advances in genetic/molecular manipulation that make it possible to investigate the role of targeted molecular signaling in epileptogenesis. We attempt to establish a mouse model for future examinations of kindling-induced SRS in genetically/molecularly manipulated mice. Previous work from our laboratory has described protocols for extended hippocampal kindling and SRS monitoring in naïve C57 black mice (Bin et al. 2017), tested the effects of some clinically used antiepileptic drugs on SRS (Song et al. 2018b), and examined the performance of extended kindled mice in open field and water maze tasks (Liu et al. 2019). The present study continues the characterization of this mouse model, with a particular focus on the electrographic features of SRS. Specifically, we recorded intracranial EEG from the kindled hippocampus and an unstimulated forebrain structure in individual mice to examine the temporal relation of the corresponding regional discharges.
The unstimulated structure was alternated in 5 groups of mice and targeted the hippocampus, parietal cortex, piriform cortex, entorhinal cortex, or dorsomedial thalamus. We also prepared brain slices from kindled and control mice to examine local circuitry excitability and the susceptibility to induction of epileptiform activity. Data from these in vivo and in vitro experiments suggest that epileptic activity involving a macroscopic network may generate concurrent discharges in forebrain areas and initiate SRS in hippocampally kindled mice.

Animals

Male C57 black mice (C57BL/6N) were obtained from Charles River Laboratory (Saint-Constant, Quebec, Canada). Mice were housed in a local vivarium maintained at a temperature of 22-23 °C with a 12-h light on/off cycle (light-on starting at 6:00 AM). Mice were caged in groups (up to 4 mice per cage) with food and water ad libitum. We chose to kindle mice aged 11-13 months to model the new-onset temporal lobe epilepsy seen in adult and aging populations (Ferlazzo et al. 2016), while minimizing the health-related complications that are common in older mice (Flurkey et al. 2007). All experimentation conducted in this study was reviewed and approved by the Animal Care Committee of the University Health Network in accordance with the Guidelines of the Canadian Council on Animal Care.

Electrode Implantation

Electrode construction and implantation were done as in previous studies from our laboratory (Wu et al. 2008; Jeffrey et al. 2014; Bin et al. 2017; Stover et al. 2017; Song et al. 2018b). All electrodes were made of polyamide-insulated stainless steel wires (110 μm outer diameter; Plastics One). Surgeries were performed under isoflurane anesthesia. A stereotaxic frame with 2 micromanipulators was used for electrode placement, and implanted electrodes were secured onto the skull using a glue-based method (Wu et al. 2008). Each mouse was implanted with 2 pairs of twisted-wire bipolar electrodes. One pair of electrodes was positioned in the hippocampal CA3 region for kindling stimulation and local recordings (bregma −2.5 mm, lateral 3.0 mm, depth 3.0 mm; Franklin and Paxinos 1997), and the other pair was positioned at an ipsilateral or contralateral site. The latter was alternated in 5 groups of mice and targeted the contralateral hippocampal CA3, the contralateral or ipsilateral parietal cortex (bregma −0.5 mm, lateral 2.0 mm, depth 0.5 mm), the ipsilateral piriform cortex (bregma 0.5 mm, lateral 3.0 mm, depth 5.0 mm), the contralateral dorsomedial thalamus (bregma −1.5 mm, lateral 0.5 mm, depth 3.5 mm), or the ipsilateral entorhinal cortex (bregma −3.5 mm, lateral 4.0 mm, depth 5.0 mm). A reference electrode was positioned in a frontal area (bregma +1.5 mm, lateral 1.0 mm, depth 0.5 mm). The putative tip locations of implanted electrodes were determined later in brain histological experiments where feasible (Supplementary Fig. 1).

We used this implantation approach to assess regional EEG discharges while minimizing the complications of multielectrode implantation in the small mouse brain. Specifically, the rodent hippocampus has strong bilateral connections (Amaral and Witter 1998), which may allow faster spread of discharge signals from the kindled hippocampus to the contralateral hippocampus relative to other brain structures, particularly the parietal cortex. The entorhinal and piriform circuitries are known to be susceptible to seizure activity and epileptogenesis (Vismer et al. 2015) and may therefore be prone to express spontaneous EEG discharges.
Neurons of the dorsomedial thalamus have widespread connections with cortical and subcortical structures and have been shown to modulate seizure activity in other models (Bertram et al. 2001). Spontaneous EEG discharges have been observed from the dorsomedial thalamus of extended kindled cats (Hiyoshi et al. 1993). In the following text, the 5 different implantations are abbreviated hippo-hippo, hippo-cortex, hippo-piriform, hippo-thalamus, and hippo-entorhinal, respectively.

Hippocampal Kindling

A train of stimuli at 60 Hz for 2 s was used for hippocampal kindling (Reddy and Rogawski 2010; Jeffrey et al. 2014; Bin et al. 2017; Stover et al. 2017; Song et al. 2018b). Constant-current pulses with monophasic square waveforms, a pulse duration of 0.5 ms, and current intensities of 10-150 μA were generated by a Grass stimulator and delivered through a photoelectric isolation unit (model S88, Grass Medical Instruments). An ascending series was used to determine the threshold of evoked after-discharges (ADs) in individual mice, and kindling stimulation was conducted at 25% above the threshold value. We attempted to keep the stimulation intensity constant throughout the extended kindling period; however, the initial stimulation intensity often became inconsistent in evoking ADs, likely due in large part to contamination or fouling of the implanted electrodes. Because of this complication, stronger stimuli at 60-110% above the initial threshold value were used when needed. Kindling stimuli were applied twice daily, ≥5 h apart (Pinel and Rovner 1978a, 1978b; Milgram et al. 1995; Mazarati et al. 1998; Sayin et al. 2003; Brandt et al. 2004; Bin et al. 2017). Each stimulation episode lasted a few minutes while the mouse was placed in a glass container for EEG-video monitoring (Stover et al. 2017; Song et al. 2018b). Control mice were handled twice daily, ≥5 h apart, for 60 days.

EEG Recordings

Local differential recordings through the twisted-wire bipolar electrodes were used to monitor evoked responses and spontaneous EEG activities (Jeffrey et al. 2014; Bin et al. 2017; Stover et al. 2017; Song et al. 2018b). Monopolar EEG recordings were used only if the local differential recordings were unsuccessful; discharges captured by monopolar recordings were not used for the analysis of regional discharges, to avoid remote signal influences. Evoked and spontaneous EEG signals were collected using 2-channel or 1-channel microelectrode AC amplifiers with extended head-stages (model 1800 or 3000, AM Systems). Evoked ADs of the stimulated hippocampus were captured using the model 3000 amplifier via TTL-gated switches between recording and stimulating modes. These amplifiers were set with an input frequency band of 0.1-1000 Hz and an amplification gain of 1000; a built-in notch filter at 60 ± 3 Hz was used in some experiments. Amplifier output signals were digitized at 5000 Hz (Digidata 1440A or 1550, Molecular Devices). Data acquisition, storage, and analyses were done using pClamp software (Version 10; Molecular Devices).

Continuous EEG-Video Monitoring

Continuous 24-h EEG-video monitoring was done as previously described (Bin et al. 2017). Each mouse was placed in a modified cage with food and water ad libitum. A slip-ring commutator was mounted atop the cage and connected to the implanted electrodes of the mouse via flexible cables. A webcam (model C615, Logitech) was placed near the cage to capture animal motor behaviors.
Video data were acquired at 20-25 frames per second. Dim lighting was used for webcam monitoring during the light-off period. A cursor auto-click program (Mini Mouse Macro; http://www.turnssoft.com/mini-mouse-macro.html) was used to operate the EEG and video recordings and save data every 2 h. EEG and video data were collected for roughly 24 h daily, for up to 6 consecutive days per session. Supplementary Table 1 lists the cumulative time of 24-h EEG-video monitoring and the SRS detected for individual mice. Individual mice underwent EEG-video monitoring for 24 h at 1-2 weeks post electrode implantation and then after about 80, 100, 120, and 140 kindling stimulations. No further kindling stimulation was applied if ≥2 SRS events were observed during the 24-h monitoring. Continuous 24-h EEG-video monitoring was performed intermittently for up to 3 months after termination of kindling stimulation (2-6 consecutive days per session; 2-3 sessions, 10-20 days apart) (Fig. 1A). Control mice experienced twice-daily handling manipulations and underwent EEG-video monitoring for 24 h after 120 manipulations.

Seizure Analysis

Electrographic Discharge Events

Evoked ADs and spontaneous discharges were recognized by repetitive spikes with simple and/or complex waveforms, amplitudes approximately 2 times the background signals, and durations of ≥10 s. Discharge events were inspected independently by 3 researchers (H.L., H.S., and L.Z.). Uncertain discharges (≤2% of SRS events in individual mice) were not included in the data presented below.

Hypersynchronous (HYP) and low-voltage fast (LVF) signals are 2 major onset patterns of EEG discharges recognized in patients with temporal lobe epilepsy and in relevant animal models (Bragin et al. 2005; Velasco et al. 2007; Lévesque et al. 2012; Avoli et al. 2016). The HYP onset was considered a cluster of large spikes arising abruptly from the baseline. Spontaneous discharges that began with abrupt attenuation of the background activity were considered to have an LVF onset, although LVF-associated rhythmic activity was not always evident on visual inspection. Specifically, the LVF-onset components were recognized by small amplitudes (≤65% of preceding signals in most cases) and variable lengths (0.5-5 s) prior to the appearance of repetitive incremental spikes. The magnitudes of LVF signals were measured from corresponding regional discharges without evident artifacts at onset. Original signals were treated with a band-pass Bessel filter to flatten DC-like changes associated with the LVF and to reduce high-frequency noise. Standard deviation (SD) values of the LVF-onset and preceding signals were obtained via the signal analysis functions of pClamp. The SDs of LVF signals were normalized as percentages of the preceding signals, and normalized measures from ≥20 discharge events were obtained from individual mice in the hippo-hippo, hippo-cortex, hippo-piriform, and hippo-thalamus groups (Supplementary Fig. 2).

[Figure 1 caption fragment: Cumulative stages of evoked motor seizures to SRS. These measures did not differ significantly among the hippo-hippo, hippo-cortex, hippo-piriform, and hippo-thalamus groups (P > 0.05, 1-way ANOVA followed by a Bonferroni post hoc test). Measures from the hippo-entorhinal group were not included in the group comparison due to the low sample size.]
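The normalized-SD measure described above can be sketched in a few lines; the code below is our illustration on synthetic data, with window lengths chosen as assumptions rather than taken from the recordings:

```python
# Sketch of the LVF-onset measure: SD of the onset segment expressed as a
# percentage of the SD of the immediately preceding baseline segment.
import numpy as np

def lvf_onset_percent(signal: np.ndarray, fs: float,
                      onset_s: float, lvf_len_s: float = 2.0,
                      baseline_len_s: float = 5.0) -> float:
    i0 = int(onset_s * fs)
    lvf = signal[i0:i0 + int(lvf_len_s * fs)]
    base = signal[i0 - int(baseline_len_s * fs):i0]
    return 100.0 * lvf.std() / base.std()

fs = 5000.0                         # sampling rate used in the recordings (Hz)
t = np.arange(0, 20, 1 / fs)
eeg = np.random.default_rng(2).normal(0, 40, t.size)   # synthetic "background"
eeg[int(10 * fs):int(12 * fs)] *= 0.4                  # attenuated LVF-like onset

pct = lvf_onset_percent(eeg, fs, onset_s=10.0)
print(f"LVF amplitude = {pct:.0f}% of preceding signal (LVF onset if <= 65%)")
```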
Phase-Amplitude Coupling (PAC) and Wavelet Phase Coherence (WPC)

PAC strength was assessed using the algorithm of Tort et al. (2010), in which the phase and amplitude of different frequency bands were obtained via the complex Morlet wavelet transform, using a mother wavelet with a central frequency of 0.8125 Hz and a bandwidth of 5 Hz, as previously applied to human EEG data (Grigorovsky et al. 2020). The phase and amplitude were computed from the real and imaginary wavelet coefficients as
$$\varphi(s,\tau) = \tan^{-1}\!\left(\frac{\Im\, W(s,\tau)}{\Re\, W(s,\tau)}\right), \qquad A(s,\tau) = \sqrt{\Re^2 W(s,\tau) + \Im^2 W(s,\tau)} .$$
The low-frequency range used for the phase information was 1-30 Hz, and the high-frequency range used for the amplitude information was 32-512 Hz, with increments on a logarithmic scale. The coupling strength was then computed as the normalized Kullback-Leibler distance of the amplitude distribution over the binned phases (20° bins) from a uniform distribution. The PAC strength was computed using a sliding window of 4 s, to allow a minimum of 4 cycles per frequency band. PAC seizure dynamics were computed as the mean PAC strength using both spectral and temporal averaging. Peri-discharge segments were chosen to be at least 8 s long, to include at least 2 PAC windows, and were defined as follows: preictal segments, the 8 s prior to and up to the electrographic onset; discharge (ictal) onset segments, the first 8 s of the electrographic seizure; discharge (ictal) segments, the entire ictal trace excluding the onset and offset segments; discharge (ictal) offset segments, the last 8 s of the electrographic seizure; and postdischarge (postictal) segments, the 8 s following the electrographic offset. PAC windows containing large movement artifacts were excluded from the analysis.

WPC was computed by extracting the phases of different frequency bands through the complex wavelet transform of the EEG from each site (Cotic et al. 2015). The relative phase difference was obtained as
$$\Delta\varphi(s,\tau) = \tan^{-1}\!\left(\frac{\Im\,[W_1(s,\tau)\, W_2^{*}(s,\tau)]}{\Re\,[W_1(s,\tau)\, W_2^{*}(s,\tau)]}\right),$$
where $W^{*}$ is the complex conjugate, $s$ is the scaling coefficient, and $\tau$ is the shift in time. Phase coherence is then defined as
$$\rho(s,\tau) = \left|\left\langle e^{\,j\,\Delta\varphi(s,\tau)} \right\rangle\right| ,$$
which ranges from zero to one, where one indicates a phase lock between the frequency bands of the 2 signals. WPC was applied to each wavelet central frequency from 0.25 to 512 Hz, with increments on a logarithmic scale and a window size proportional to 8 cycles of each frequency.
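For orientation, a simplified version of the Tort-style modulation index can be written as follows. This sketch substitutes band-pass filtering plus the Hilbert transform for the Morlet wavelet decomposition used in the study, and its frequency bands and bin count are illustrative:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, fs, lo, hi, order=3):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def modulation_index(x, fs, phase_band=(4, 8), amp_band=(80, 150), n_bins=18):
    phase = np.angle(hilbert(bandpass(x, fs, *phase_band)))
    amp = np.abs(hilbert(bandpass(x, fs, *amp_band)))
    bins = np.linspace(-np.pi, np.pi, n_bins + 1)
    mean_amp = np.array([amp[(phase >= lo) & (phase < hi)].mean()
                         for lo, hi in zip(bins[:-1], bins[1:])])
    p = mean_amp / mean_amp.sum()           # amplitude distribution over phase bins
    kl = np.sum(p * np.log(p * n_bins))     # KL distance from the uniform distribution
    return kl / np.log(n_bins)              # normalized to [0, 1]

fs = 1000.0
t = np.arange(0, 10, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)                     # 6 Hz "phase" rhythm
coupled = (1 + theta) * np.sin(2 * np.pi * 100 * t)   # 100 Hz amplitude riding on theta
print(f"MI = {modulation_index(theta + 0.5 * coupled, fs):.3f}")
```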
Motor Seizure Events

The Racine scale modified for mice (Racine 1972; Reddy and Rogawski 2010) was used to score evoked and spontaneous motor seizure activities. Briefly: stage 0 — no response or behavioral arrest; stage 1 — chewing or facial movement; stage 2 — chewing, head nodding, and/or unilateral forelimb clonus; stage 3 — bilateral forelimb clonus; stage 4 — rearing; stage 5 — falling or a loss of the righting reflex. In addition, both the evoked and the spontaneous stage 3-5 motor seizures were often preceded by backward body movement and were associated with tail erection, excessive salivation, and/or eye closure. Circular movements (≥3 turns) toward the kindled or contralateral side (Pinel and Rovner 1978a) or brief fast runs were also noticeable before the spontaneous stage 3-5 motor seizures. The evoked and spontaneous motor seizures were assessed independently by several researchers (J.C., N.S., C.C., P.C., S.L., and Y.L.) through video reading (Jeffrey et al. 2014; Bin et al. 2017; Stover et al. 2017; Song et al. 2018b). The concordance rates for recognizing stage 3-5 seizures were ≥90% among these researchers.

SRS Detection

Combined EEG and video analyses were employed to detect SRS. EEG signals were first screened to detect spontaneous discharges. Detected discharge events were time-stamped, and the corresponding video data were reviewed to score motor seizures. Discharge and motor seizure analyses were done separately, as mentioned above. We used this approach for convenience in SRS detection, as spontaneous discharges were clearly distinguishable from background signals in our EEG recordings, whereas scoring motor seizures through video reading was laborious and often complicated by the mouse's position in the cage and/or by surrounding bedding materials. Because of these complications in video reading, we did not analyze motor seizures by video reading alone. To be more stringent in SRS detection, the SRS presented below had decipherable discharges in the 2 corresponding regional recordings and identifiable motor behaviors on video reading; potential SRS events with decipherable EEG discharges from only one recording site or without analyzable motor behaviors were not included in the present data, except where specified. In our EEG-video monitoring, EEG signals were recorded continuously, whereas video was captured at 20-25 frames per second and was not synchronized with the EEG recordings. Because of these limitations, we did not assess the temporal relation of EEG discharges and the associated convulsive behaviors.

In Vitro Recordings

All recordings were done in a submerged chamber at a perfusate temperature of 35-36 °C. Each slice was perfused with ACSF at a high rate (~15 mL/min), with both the top and bottom surfaces of the slice exposed to the perfused ACSF. Humidified 95% O2-5% CO2 gas was passed over the perfusate to increase local oxygen tension. Previous work from our laboratory and others has shown that fast, both-side perfusion of rodent brain slices is important for maintaining spontaneous population activities under submerged recording conditions (Wu et al. 2005; Hájos and Mody 2009). A twisted-wire bipolar electrode made of polyimide-insulated stainless steel wires (outer diameter 110 μm) was used for local afferent stimulation. Constant-current pulses (0.1 ms duration, at a near-maximal intensity of 150 μA) were generated by a Grass stimulator (S88) and delivered through a photoelectric isolation unit. The stimulating electrode was placed near the cell body layer of the CA3 and DG area to elicit local responses. Recording electrodes were made from thin-wall glass tubes (World Precision Instruments). Extracellular electrodes were filled with a solution containing 150 mM NaCl and 2 mM HEPES (pH 7.4; resistance 1-2 MΩ). Patch electrodes were filled with an "intracellular" solution containing 140 mM potassium gluconate, 10 mM KCl, 2 mM HEPES, and 0.1 mM EGTA (pH 7.25; resistance 4-5 MΩ). A dual-channel amplifier (Multiclamp 700A or 700B, Molecular Devices) was used to record extracellular field potentials and intracellular signals. Data acquisition, storage, and analyses were done using a digitizer (Digidata 1400) and pClamp 10 software (Molecular Devices). Slices that displayed stably evoked field potentials (≥0.2 or 0.5 mV for cortical or hippocampal responses) during baseline monitoring were included in the data analysis. Synaptic field potentials were elicited every 20 s, and their peak amplitudes were measured from averages of 4-5 consecutive responses. Spontaneous CA3 sharp waves (SPWs) were recognized as rhythmic events with amplitudes approximately 2 times that of background signals, base durations of 20-200 ms, and incidences of 0.3-4 events/s. SPWs were visually inspected and measured from 1-min data segments in individual slices.
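Threshold-plus-duration event detection of the kind performed here with pClamp's threshold search can be sketched as follows; the amplitude criterion, duration window, and synthetic trace are illustrative assumptions:

```python
import numpy as np

def detect_events(x, fs, k_sd=2.0, min_dur_s=0.02, max_dur_s=0.2):
    """Return (start, end) sample indices of events whose amplitude exceeds
    k_sd times the SD of the trace and whose duration falls in the given range."""
    thr = k_sd * x.std()
    above = np.abs(x) > thr
    edges = np.flatnonzero(np.diff(above.astype(int)))
    if above[0]:
        edges = np.r_[0, edges]
    if above[-1]:
        edges = np.r_[edges, x.size - 1]
    events = []
    for s, e in zip(edges[::2], edges[1::2]):   # pair rising/falling crossings
        if min_dur_s <= (e - s) / fs <= max_dur_s:
            events.append((int(s), int(e)))
    return events

fs = 5000.0
rng = np.random.default_rng(3)
trace = rng.normal(0, 0.05, int(60 * fs))   # 1 min of background noise (mV)
trace[10000:10400] += 0.5                    # one 80-ms SPW-like event
print(len(detect_events(trace, fs)), "event(s) detected")   # 1
```

The minimum-duration criterion discards the single-sample noise crossings of the amplitude threshold, which is the same role the base-duration windows play in the SPW and interictal-spike criteria used here.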
Interictal spike events were recognized by amplitudes of ≥0.5 mV, base durations of 200-600 ms, and incidences of 2-5 events/10 s. An event detection function (threshold search method) of pClamp was used to detect the interictal events automatically. Where needed, original data were treated with a band-pass filter (0.2-500 Hz, Bessel 8-pole) before event detection. Detected events were visually inspected, and artifacts were excluded. SPW-related synaptic currents in CA3 pyramidal neurons were analyzed as previously described (Wu et al. 2005, 2006). About 15-20 events were collected at −60 or −40 mV for each neuron.

Brain Histological Assessments

Brain histological sections were prepared using a protocol modified from previous studies of our laboratory (Jeffrey et al. 2014; Stover et al. 2017). Each mouse was anesthetized with sodium pentobarbital as described above and infused transcardially with standard ACSF and then with 10% neutral buffered formalin solution (Sigma-Aldrich). Removed brains were further fixed in a hypertonic formalin solution (with 20% sucrose) for ≥24 h. Brain coronal sections of 50 μm thickness were obtained using a Leica CM3050 research cryostat. Sections were mounted onto glass slides (Superfrost Plus microscope slides, Fisher Scientific), dried at room temperature for ≥1 week, and then stained with cresyl violet (Sigma-Aldrich). Images were obtained using a slide scanner (Aperio digital pathology slide scanner AT2, Leica; ×20 magnification) and analyzed using ImageScope (Leica) or ImageJ software (National Institutes of Health, USA).

Statistical Analysis

Statistical tests were conducted using SigmaPlot (Systat Software Inc.) or Origin (OriginLab, Northampton, MA, USA) software. Data are presented as means and standard error of the mean (SEM) throughout the text and figures. Statistical significance was set at P < 0.05. For normally distributed data, group differences were assessed using a Student's t-test or 1-way analysis of variance (ANOVA) followed by a Bonferroni post hoc test. When data were not distributed normally, a Mann-Whitney U test or a nonparametric ANOVA on ranks (Kruskal-Wallis) followed by a post hoc test was used for group comparison. A Chi-square or Fisher exact test was used for comparing proportions.

SRS Progression

We implanted 2 pairs of bipolar electrodes in each mouse. One pair of electrodes was positioned in the hippocampal CA3 for kindling stimulation and local recording, and the other pair was positioned in an unstimulated structure. The latter was alternated in 5 groups of mice and targeted the contralateral hippocampal CA3, the contralateral or ipsilateral parietal cortex, the ipsilateral piriform cortex, the contralateral dorsomedial thalamus, or the ipsilateral entorhinal cortex. These different implantations are abbreviated hippo-hippo, hippo-cortex, hippo-piriform, hippo-thalamus, and hippo-entorhinal, respectively. Kindling seizure progression was assessed by the number of stimuli needed to reach a kindled state (3 consecutively evoked stage 5 motor seizures) or to induce SRS, and by cumulative evoked ADs and motor seizures. For mice in the hippo-hippo, hippo-cortex, hippo-piriform, and hippo-thalamus groups (12, 13, 11, and 14 mice per group), the numbers of kindling stimulations required to reach the kindled state were in the range of 15.5 ± 1.9 to 18.6 ± 2.0 stimuli (mean ± SEM hereinafter; Fig. 1B); cumulative durations of evoked ADs to the kindled state ranged from 272.1 ± 27.8 to 325.3 ± 38.2 s (Fig. 1D).
SRS were observed in the 4 groups of mice following 100.4 ± 4.0 to 114.8 ± 4.0 stimuli (Fig. 1C). Cumulative durations of evoked ADs to SRS were 2770.0 ± 110.3 to 3068.2 ± 145.7 s (Fig. 1E); cumulative stages of evoked motor seizures to SRS were 356.3 ± 12.7 to 432.5 ± 25.9 (Fig. 1F). There was no significant group difference among these measures (1-way ANOVA), suggesting that SRS progression is not substantially influenced by electrode implantations in different unstimulated structures. Measures were similarly obtained from 3 mice in the hippo-entorhinal group but not included for group comparison due to the small sample size. Hippocampal kindling was terminated in 17 other mice after 23-96 stimuli due to loss/malfunction of implanted electrodes or health-related complications. Aberrant hippocampal spikes with peak amplitudes ≥2 times those of background signals and intermittent incidences of 1-5 events/10 s, but not SRS, were observed in 2/17 mice while being monitored after 80 stimulations. Control mice (n = 10) received electrode implantations in hippocampal and piriform/cortical areas and experienced twice-daily handling manipulations. Neither seizures nor aberrant hippocampal spikes were observed from these control mice while being monitored after 120 handling manipulations. Together, these observations suggest that SRS result from a sufficient accumulation of evoked seizure activity and that the chronic handling manipulation and intracranial electrode implantation per se do not induce seizure activity.

Motor Seizure Expression of SRS
Spontaneous motor seizures were assessed using the Racine scale modified for mice (Racine 1972; Reddy and Rogawski 2010; Reddy and Mohan 2011) because they resembled evoked motor seizures. Briefly, stage 0, 1, and 2 motor seizures were recognized by behavioral arrest, chewing/facial movement, and head nodding/unilateral forelimb clonus; stage 3, 4, and 5 motor seizures were recognized by bilateral forelimb clonus, rearing, and falling, respectively (Supplementary Videos 1-3). A total of 3244 SRS events with identifiable motor behaviors were captured from 45 mice. These included 310 SRS events collected from 5 mice in the hippo-hippo group, 498 events from 13 mice in the hippo-cortex group, 269 events from 11 mice in the hippo-piriform group, 2113 events from 14 mice in the hippo-thalamus group, and 54 events from 3 mice in the hippo-entorhinal group, respectively. SRS with expression of stage 3, 4, or 5 motor seizures were predominantly observed from individual mice in the hippo-hippo, hippo-cortex, hippo-piriform, and hippo-entorhinal groups. Collectively, SRS with expression of stage 3-5 motor seizures accounted for 85.2-100% of total SRS events observed from these 4 groups of mice. SRS with expression of stage 0, 1, or 2 motor seizures were more frequently observed from individual mice of the hippo-thalamus group. In addition, SRS with consecutive expression of stage 0, 1, or 2 motor seizures were observed from 4 mice in this group (interevent intervals of 3-15 min and 3-17 events/episode). Collectively, SRS with expression of stage 0-2 motor seizures made up 57.1% of the total SRS observed from mice of the hippo-thalamus group, a proportion significantly greater than those observed from mice in the hippo-hippo, hippo-piriform, or hippo-cortex group (Chi-square test, P < 0.001). We therefore suggest that SRS with expression of stage 3-5 motor seizures are prominent in hippocampally kindled mice and that the severity of motor seizures may be influenced by chronic implantation of thalamic electrodes in some mice (see Discussion).
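As a concrete illustration of the proportion comparison just described, the sketch below runs a chi-square test on a 2 × 2 count table. The hippo-thalamus counts follow from the percentages reported above, while the comparison group's split is an illustrative placeholder rather than the study's raw table.

```python
from scipy.stats import chi2_contingency

# Rows: groups; columns: [stage 0-2 events, stage 3-5 events].
# Hippo-thalamus: 57.1% of its 2113 events were stage 0-2 (~1207 events).
# Hippo-cortex: placeholder split of its 498 events, consistent with the
# reported 85.2-100% share of stage 3-5 seizures in that group.
table = [[1207, 906],
         [40, 458]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")  # p << 0.001
```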
EEG Features of SRS

Coexpressed Discharges in Kindled Hippocampal and Corresponding Unstimulated Areas
Spontaneous discharges were recognized by repetitive spike waveforms with amplitudes approximately 2 times those of background signals and durations of ≥10 s. Most discharges began with LVF signals (see below), which were followed by incremental rhythmic spikes and then sustained large-amplitude spikes with simple or complex waveforms. Discharge termination in most cases featured a sudden cessation of spike activity and a subsequent component of signal suppression lasting several seconds. Discharge durations were determined by the time between the LVF onset and the spike cessation. A total of 2790 SRS events with decipherable EEG signals in both corresponding regional recordings and identifiable motor behaviors in video were collected from the 5 groups of mice. These included 233 events from 5 mice in the hippo-hippo group, 498 events from 13 mice in the hippo-cortex group, 269 events from 11 mice in the hippo-piriform group, 1736 events from 13 mice in the hippo-thalamus group, and 54 events from 3 mice in the hippo-entorhinal group, respectively (Fig. 2). Measures from some events during the above-mentioned consecutive stage 0-2 motor seizures were not included in Figure 2. On these occasions, discharges were evident in kindled hippocampal recordings but barely recognizable in corresponding thalamic recordings (Supplementary Fig. 3). Coexpressed discharges in the kindled hippocampus and corresponding unstimulated structure were observed in all 2790 SRS events captured. Durations of corresponding regional discharges were not significantly different in the 5 groups of mice, irrespective of the stages of associated motor seizures (Student's t-test or Mann-Whitney U test; Fig. 2A-E). There was no significant difference among discharge durations sorted according to motor seizure stages in each group (nonparametric ANOVA on ranks test). Linear regression analysis revealed no strong correlation between durations of regional discharges and the stages of motor seizures in each of the 5 groups of mice (R² = 0.01-0.02). Similar coexpression of corresponding regional discharges was also observed when associated motor seizures were unanalyzable due to complications in video monitoring (data not shown). However, some thalamic discharges were unappreciable during the above-mentioned consecutive stage 0-2 motor seizures (Supplementary Fig. 3B,C).

Corresponding Regional Discharges Displayed Concurrent LVF Onsets
Most discharges began with LVF signals, which were considered as components that displayed small amplitudes (≤65% of preceding signals in most cases; see Materials and Methods; Supplementary Fig. 2) and durations of 0.5-5 s prior to the appearance of repetitive incremental spikes. The LVF signals often appeared right after a spike with a simple or complex waveform (Figs 3A-5A, Supplementary Figs 3A-7A) and/or were overlaid onto a slow baseline shift (Fig. 5A; Supplementary Figs 2A, 7A). The latter might represent an altered DC signal due to the input frequency range (0.1-1000 Hz) of the amplifiers used in our recordings (see Materials and Methods).
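The LVF criterion above (amplitude attenuation to ≤65% of the immediately preceding signal over 0.5-5 s) lends itself to a simple heuristic check. The sketch below is a minimal illustration, assuming the caller has already segmented a candidate onset window and its preceding signal; the function name and the percentile-based amplitude estimate are our own choices, not the study's exact procedure.

```python
import numpy as np

def looks_like_lvf_onset(pre_segment, onset_segment, fs, max_ratio=0.65):
    """Return True if a candidate onset segment satisfies the LVF criteria:
    amplitude <= 65% of the immediately preceding signal, duration 0.5-5 s."""
    pre_amp = np.percentile(np.abs(pre_segment), 95)    # robust peak estimate
    onset_amp = np.percentile(np.abs(onset_segment), 95)
    duration = len(onset_segment) / fs
    return (onset_amp <= max_ratio * pre_amp) and (0.5 <= duration <= 5.0)
```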
It was difficult to systematically define the starts of regional LVF signals because of the variable signals immediately preceding the LVF onset. We therefore considered the LVF signals to start from the immediately preceding spikes (Figs 3A-5A; Supplementary Figs 3A-7A) or from time points at which signal amplitudes were abruptly and markedly attenuated relative to preceding signals (Fig. 6A, Supplementary Fig. 8A). The LVF onsets of corresponding regional discharges consistently showed a concurrent relation irrespective of the unstimulated structure targeted. For example, LVF signals with immediately preceding spikes were observed from bilateral hippocampal discharges (Fig. 3A; Supplementary Fig. 5A) and hippocampal-cortical discharges (Fig. 4A; Supplementary Fig. 6A), and these spikes occurred nearly synchronously in bilateral hippocampal and hippocampal-cortical recordings. The relation of corresponding regional LVF onsets was independent of discharge durations and associated motor seizures. As shown in Supplementary Figures 5A, 7A, and 8A, discharges associated with stage 1 or 2 motor seizures were much shorter than those associated with stage 3 or 4 motor seizures (Figs 3A, 5A, 6A), but their LVF onsets were highly comparable in waveform and concurrence.

(Figure 2 caption: (A-E) Measures from mice in the 5 implantation groups, with numbers of SRS events and mice examined indicated in each panel; there was no significant difference between discharge durations at stimulated and unstimulated sites at each stage of motor seizures (Student's t-test or Mann-Whitney U test); regional discharge durations corresponding to stage 1-5 motor seizures were not significantly different within each group (nonparametric ANOVA on ranks); there was no strong correlation between regional discharge durations and motor seizure stages (linear regression analysis, R² = 0.001-0.022).)

In contrast to the consistently observed LVF onset, 4 hippocampal discharges were considered to begin with HYP signals, which featured a cluster of abruptly arising large spikes. These HYP discharges were observed from one mouse within 18 h after termination of hippocampal kindling (Supplementary Fig. 10A), whereas all subsequent discharges observed from the same mouse were found to begin with LVF signals (Supplementary Fig. 10B). Additionally, some hippocampal discharges began with repetitive incremental spikes, and these discharges were observed only during the above-mentioned consecutive stage 0-2 motor seizures (Supplementary Fig. 3B,C).

(Figure caption fragment (PAC/WPC panels B and C): the low-frequency range used for phase information was 1-30 Hz and the high-frequency range used for amplitude information was 32-512 Hz, with increments on a logarithmic scale; PAC in window 4 between 20-25 Hz (x) and 128-512 Hz (y) signals and in window 6 between 2-3 and 64-512 Hz signals was stronger in the top than in the bottom panels. (C) WPC plot for the corresponding regional signals in (A); WPC was applied at wavelet central frequencies from 0.25 to 512 Hz with logarithmic increments and a window size proportional to 8 cycles of each frequency, with a scale value of 1 indicating phase lock; phase-locked/near phase-locked discharge signals appeared around the 20-32 s time stamps in a 20-80 Hz range and around the 30-40 and 56-70 s time stamps in a 1-16 Hz range, and these coherent signals were accompanied by dissimilar regional PAC in windows 3-7.)
Together, these observations suggest that LVF signals are a dominant onset pattern of spontaneous EEG discharges in hippocampally kindled mice.

Corresponding Regional Discharges Displayed Different Waveforms
We conducted local differential recordings through twisted-wire bipolar electrodes to sample "local" signals while attenuating the influences of remote signals. Region-specific activities were observed from the kindled hippocampus and the corresponding unstimulated ipsilateral/contralateral structure. Specifically, discharges simultaneously recorded from the 2 corresponding areas differed in waveform (Figs 3A-7A) and/or termination times (Fig. 4A; Supplementary Fig. 6). In addition, large-amplitude spikes were observed from the kindled hippocampus but not the corresponding unstimulated cortical, piriform, entorhinal, or thalamic area (Fig. 7A, Supplementary Fig. 4A,B). These spikes appeared several seconds after hippocampal discharges and displayed variable spike rates (3-5 spikes/s) and durations (19.7 ± 1.3 s/episode, 79 events from 12 mice). Small-amplitude hippocampal spikes were also observed before or after discharges (21 ± 2.1 s/episode, 19 events/5 mice, Supplementary Fig. 5A). We speculate that the hippocampal spikes might largely represent "local" hyperexcitability of the kindled hippocampal circuitry. Furthermore, in recordings made from some mice in the hippo-thalamus group and during the above-mentioned consecutive stage 0-2 motor seizures, discharges were evident in the kindled hippocampal area but unappreciable in the corresponding thalamic area (Supplementary Fig. 3B,C).

PAC and WPC Analyses
PAC analysis was performed for a set of 28 corresponding regional discharges (1-2 events per mouse and 3-5 mice per group for the 5 implantation groups). PAC strengths were assessed in 8 sliding 4-s time windows (Weiss et al. 2013; Zhang et al. 2017; Grigorovsky et al. 2020) that matched pre- and postdischarge signals as well as the onset, middle, and offset epochs of regional discharges (Grigorovsky et al. 2020). The slow oscillation used for phase information was 1-30 Hz and the fast oscillation used for amplitude information was 32-512 Hz. Regional discharges appeared to show a general trend in PAC features: weak PAC, mainly between the phase of 1-4 Hz signals and the amplitude of 32-256 Hz signals, was noticeable for discharge onset epochs; stronger PAC between the phase of 8-12 or 16-20 Hz signals and the amplitude of 64-512 Hz signals was evident for middle and later discharge epochs; and PAC for postdischarge signals resembled that for discharge onset epochs (panel B of Figs 3-7 and Supplementary Figs 5-9).

(Figure caption fragment: note in (B), windows 3-5, stronger PAC between roughly 12-20 and 64-256 Hz signals in the bottom (piriform) than in the top (hippocampus) panels; note in (C) phase-locked or near phase-locked discharge signals around the 15-30 s time stamps at frequencies around 16 Hz and around the 35-60 s time stamps in a lower (1-5 Hz) and a higher (15-25 Hz) frequency range; these coherent signals were accompanied by dissimilar regional PAC in windows 3-6.)

Overall, the patterns and strengths of PAC were distinct between the corresponding discharge events analyzed. When data from each implantation group were pooled together, mean PAC indexes for the onset and middle parts of discharges were significantly greater for the kindled hippocampus than for the cortex or thalamus (Fig. 8B,D) but weaker for the hippocampus relative to the piriform cortex (Fig. 8C).
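For readers unfamiliar with PAC, the following is a minimal sketch of a Tort-style modulation index for one analysis window (Tort et al. 2010), assuming a 1-D NumPy signal; the band edges and bin count are illustrative parameters, and this is not the exact pipeline used in the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def modulation_index(sig, fs, phase_band, amp_band, n_bins=18):
    """Tort-style PAC for one window: phase of a slow band vs. amplitude of a
    fast band. Returns a normalized index in [0, 1] (0 = no coupling)."""
    def bandpass(x, lo, hi):
        b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return filtfilt(b, a, x)
    phase = np.angle(hilbert(bandpass(sig, *phase_band)))
    amp = np.abs(hilbert(bandpass(sig, *amp_band)))
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    bins = np.clip(np.digitize(phase, edges) - 1, 0, n_bins - 1)
    mean_amp = np.array([amp[bins == k].mean() for k in range(n_bins)])
    p = mean_amp / mean_amp.sum()           # phase-binned amplitude distribution
    kl = np.sum(p * np.log(p * n_bins))     # KL divergence from a uniform dist.
    return kl / np.log(n_bins)

# e.g., for a 4-s window sampled at 2 kHz:
# mi = modulation_index(window, fs=2000, phase_band=(1, 4), amp_band=(32, 256))
```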
WPC analysis was performed for the 28 corresponding discharge events mentioned above. WPC was computed through the phase extraction of different frequency bands from 0.25 to 512 Hz. Phase differences of the 2 corresponding signals were scaled in a range of 0-1, with 1 indicating a phase lock. In general, the onsets of corresponding regional discharges, irrespective of the unstimulated structures targeted, did not show an evident increase in coherent signals as compared with predischarge activities.

(Figure caption fragment (panels B and C): note in (B), windows 4-5, stronger PAC between 20-25 and 128-512 Hz signals in the top (hippocampus) than in the bottom (thalamus) panels; note in (C) phase-locked or near phase-locked discharge signals around the 18-32 s time stamps in a 16-80 Hz range and around the 36-56 s time stamps in a 4-80 Hz range; the latter coherent signals were accompanied by seemingly similar regional PAC in windows 6-7.)

Phase-locked or near phase-locked signals were evident for the early, middle, and/or later parts of corresponding regional discharges. These coherent signals varied in length and were expressed in relatively narrow frequency bands, from approximately 10 Hz up to 80 Hz in most cases (panel C of Figs 3-5, 7 and Supplementary Figs 5-9). Coherent discharge signals with more persistent expression in a wider frequency range of 10-100 Hz were occasionally observed (Fig. 6C). In addition, coherent discharge signals with more uniform expression in a low-frequency range of 1-5 Hz were also noticeable in some cases (Figs 3C, 5C, 6C; Supplementary Fig. 9C). Overall, the occurrence of these coherent discharge signals was accompanied by dissimilar regional PAC (panel B of Figs 3-7 and Supplementary Figs 5-9). Taking the electrographic observations and the results of PAC and WPC analyses together, we postulate that the concurrent regional discharges observed in our present experiments cannot be entirely attributable to remote volume-conducted signals (see Discussion).
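As with PAC, a minimal sketch of windowed wavelet phase coherence may help: phases are extracted per frequency with a complex kernel, and the phase locking of the two signals is averaged over a window of 8 cycles, as described above. The Hann-windowed complex exponential here is a simplification of a true Morlet wavelet, and all names and parameters are illustrative.

```python
import numpy as np

def band_phase(x, fs, freq, n_cycles=8):
    """Instantaneous phase of x around one frequency via complex convolution."""
    t = np.arange(-n_cycles / (2 * freq), n_cycles / (2 * freq), 1 / fs)
    kernel = np.exp(2j * np.pi * freq * t) * np.hanning(t.size)
    return np.angle(np.convolve(x, kernel, mode="same"))

def wavelet_phase_coherence(x, y, fs, freq, n_cycles=8):
    """Time-resolved phase coherence of two signals at one frequency:
    values near 1 indicate phase-locked activity, near 0 no consistent
    phase relation. The sliding window spans 8 cycles of the frequency."""
    dphi = band_phase(x, fs, freq) - band_phase(y, fs, freq)
    win = max(1, int(n_cycles * fs / freq))
    smoother = np.ones(win) / win              # windowed average of unit phasors
    return np.abs(np.convolve(np.exp(1j * dphi), smoother, mode="same"))

# Scanning freq over a logarithmic grid (e.g., 0.25-512 Hz) yields the
# time-frequency coherence scalogram described in the text.
```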
Baseline Activity and Induced Epileptiform Field Potentials Observed From Brain Slices
The above observations suggest that corresponding regional EEG discharges with concurrent LVF onsets are a main electrographic feature of SRS in hippocampally kindled mice. However, these observations were limited, as spontaneous discharges were monitored from only 2 sites in each mouse and it was difficult to assess the temporal relation of corresponding regional LVF onsets. If spontaneous discharges originated primarily from the kindled hippocampus and then spread to other brain structures, one would anticipate that the kindled hippocampal CA3 circuitry would be more susceptible than other unstimulated forebrain circuitries to the genesis of epileptiform activities when examined in vitro. We therefore conducted brain slice experiments to explore this.

(Figure caption fragment (hippocampal-entorhinal example): discharges were associated with a stage 3 motor seizure; note large artifacts in middle discharges and large-amplitude spikes following the hippocampal discharge. (B and C) PAC and WPC plots similarly arranged as in Figure 3; note in (B), windows 3-6, weak but different regional (hippocampal vs. entorhinal) PAC, and in (C) phase-locked or near phase-locked discharge signals around the 11-24 s time stamps in a range of roughly 10-32 Hz.)

Horizontal brain slices (0.4 mm thick) encompassing ventral hippocampal and entorhinal areas or the ventral-medial piriform area were prepared from extended kindled mice with SRS. Slices similarly prepared from mice after chronic handling manipulations or electrode implantation alone served as controls. Extracellular and whole-cell patch recordings were used to monitor local field potentials and single-cell activities. Activities of the hippocampal CA3 and DG areas were examined in slices ipsilateral to the kindling site, and piriform and entorhinal activities were monitored from slices ipsilateral or contralateral to the kindling site. All recordings were done in a submerged chamber and at a perfusate temperature of 35-36 °C.

"Baseline" CA3 and DG Activities
Slices perfused with standard ACSF were used to examine whether CA3 and DG "baseline" activities are altered by extended hippocampal kindling. CA3 or DG population spikes were evoked by local afferent stimulation at near-maximal intensity and recorded from the cell body layer. The peak amplitudes of CA3 and DG population spikes were comparable in slices of kindled and control mice (Table 1), but the effects of paired stimuli on DG spikes were different between the 2 groups of slices. Spike inhibition by paired stimuli 5-20 ms apart and spike enhancement by paired stimuli at intervals of 80-100 ms were significantly attenuated in slices of kindled mice as compared with slices of control mice (Fig. 9).

(Figure 8 caption: summaries of regional PAC. (A-E) Corresponding regional discharges from mice in the 5 implantation groups were analyzed (1-2 discharge events per mouse and 3-5 mice per implantation group); mean PAC indexes were computed using both spectral and temporal averaging; discharges are abbreviated as ictal on the x-axis; peri-ictal segments were chosen to be at least 8 s long to include at least 2 PAC windows; onset sections were the first 8 s of the electrographic discharge onset, ictal sections included entire discharges excluding onset and offset segments, offset sections were the last 8 s of the electrographic discharges, and postictal sections were the 8 s following electrographic discharge termination; PAC windows containing large movement artifacts were excluded from the analysis; *kindled hippocampus versus corresponding unstimulated structure, P < 0.05, Mann-Whitney U test; note in (B and D) significantly stronger PAC in the kindled hippocampus than in the corresponding unstimulated cortex or thalamus, and in (C) significantly weaker PAC in the kindled hippocampus relative to the corresponding unstimulated piriform cortex.)

In CA3 pyramidal neurons monitored by whole-cell current-clamp recordings, resting membrane potentials, action potential peak amplitudes, and half-widths were not significantly different between neurons of kindled and control mice, but input resistance measures were lower in kindled CA3 neurons (Table 1). The latter observation might be partly due to the increased synaptic activities, and hence decreased membrane resistance, described below.
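The paired-pulse comparison above (Fig. 9) boils down to an amplitude ratio of the second versus the first population spike at each interstimulus interval. Below is a minimal sketch, assuming a single sweep as a NumPy array and caller-supplied stimulus times; the peak-measurement convention here is illustrative, not the study's exact procedure.

```python
import numpy as np

def paired_pulse_ratio(sweep, fs, stim1_t, stim2_t, window=0.01):
    """Second/first population-spike amplitude ratio for one paired-stimulus
    sweep. Ratios < 1 indicate paired-pulse inhibition; > 1, enhancement.

    sweep            : 1-D array (one extracellular trace)
    fs               : sampling rate (Hz)
    stim1_t, stim2_t : stimulus times (s); window = post-stimulus search (s)
    """
    def spike_amp(t0):
        seg = sweep[int(t0 * fs): int((t0 + window) * fs)]
        return np.max(np.abs(seg - seg[0]))  # peak deflection from onset level
    return spike_amp(stim2_t) / spike_amp(stim1_t)

# Averaging such ratios across sweeps at 5-100 ms intervals reproduces the
# kind of curves compared between kindled and control slices in Figure 9C.
```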
Rodent hippocampal slices can exhibit spontaneous field potentials or in vitro sharp waves (SPWs; Karlócai et al. 2014; Buzsáki 2015). Previous work from our laboratory has suggested that in vitro SPWs of the mouse hippocampus originate from the CA3 area and are generated by local network activities involving both glutamatergic and GABAergic synapses (Wu et al. 2005, 2006). We therefore examined whether CA3 in vitro SPWs are altered in extended kindled mice with SRS. Monitored by extracellular recordings in 18 slices from kindled mice and 17 slices of control mice, CA3 SPWs were smaller and less frequent in kindled relative to control mice (Fig. 10A-C).

(Table 1 fragment, control vs. extended kindled: 195.0 ± 10.6 vs. 120.9 ± 6.5* (row label missing; consistent with the input resistance difference noted above); action potential peak amplitude (mV) 111.3 ± 1.5 vs. 103.0 ± 2.0; action potential half-width (ms) 1.02 ± 0.04 vs. 1.06 ± 0.04; action potential voltage threshold (mV) −42.8 ± 0.9 vs. −44.4 ± 0.7; 35 cells/10 mice vs. 30 cells/12 mice; *control versus extended kindled, Student's t-test, P = 0.025.)

(Figure 9 caption: effects of paired stimulations on DG population spikes. (A and B) Representative traces collected from slices of a control mouse (A) and a kindled mouse (B); population spikes were evoked by paired stimuli with interstimulus intervals of 10-80 ms, with traces superimposed for illustration; note that in response to paired stimuli with an interval of 10 ms, the second stimulus evoked a small spike in (B) (kindled) but not in (A) (control). (C) Data collected from 12 slices of 4 control mice and from 13 slices of 5 kindled mice; y-axis, amplitude ratios (%, mean ± SE) of the second versus the first spikes; x-axis, interstimulus intervals; *kindled versus control, Student's t-test, P < 0.05; note that spike inhibition or enhancement by paired stimuli at intervals of 5-20 or 80-100 ms was attenuated in slices of kindled mice.)

SPW-related synaptic currents in CA3 pyramidal neurons were also altered in kindled mice. When voltage clamped at −40 mV, SPW-related inhibitory postsynaptic currents (IPSCs) were barely detectable in neurons of kindled mice but robust in neurons of control mice (Fig. 10A). When held at −60 mV, CA3 neurons of kindled mice displayed large excitatory postsynaptic currents (EPSCs), whereas neurons of control mice showed mixed EPSCs/IPSCs in correlation with local SPWs (Fig. 10B). Overall, outward (inhibitory) synaptic conductance at −40 mV was smaller, whereas inward (excitatory) conductance at −60 mV was greater, in kindled than in control CA3 neurons (18 or 17 neurons from 4 or 5 mice in each group, Fig. 10D,E). However, the reversal potentials of evoked IPSCs, assessed in the presence of the general glutamate receptor antagonist kynurenic acid (2.5 mM), were not significantly different between kindled and control CA3 pyramidal neurons (−70.3 ± 2.1 mV vs. −72.4 ± 1.6 mV, n = 6 or 7 neurons, Supplementary Fig. 11). Epileptiform field potentials with multiple spikes and prolonged waveforms, spontaneously occurring or induced by single or high-frequency (80 Hz for 1 s) stimulation, were not observed from the CA3 and DG or from the piriform and entorhinal areas in any of the slices examined (66 slices from 12 kindled mice and 82 slices from 16 control mice).

Induced Epileptiform Field Potentials
The above observations suggest that the CA3 and DG circuitries may be altered toward hyperexcitability in slices of extended kindled mice, but such alterations do not cause population epileptiform activity in slices perfused with standard ACSF. We then perfused slices with high-bicarbonate ACSF (alkaline pH 7.8-7.9; see Materials and Methods) to induce epileptiform field potentials and to explore whether susceptibility to induced epileptiform activity is altered in slices of kindled mice. We used this approach because similar high-bicarbonate ACSF reliably induced ictal-like discharges in cortical slices prepared from surgical specimens of epilepsy patients (Huberfeld et al. 2011).
Additionally, in our pilot experiments, high-bicarbonate ACSF was more consistent than low-Mg²⁺ ACSF (0.5 mM), high-K⁺ ACSF (8-10 mM), or ACSF containing 4-aminopyridine (100 μM) in inducing ictal discharges in slices of kindled mice. Two main types of self-sustained epileptiform field potentials, referred to as interictal-like spikes and ictal-like discharges, were observed from the CA3, piriform, and entorhinal areas following perfusion of slices with high-bicarbonate ACSF (Fig. 11A-C). The interictal spikes featured peak amplitudes of ≥0.5 mV, durations of ≤500 ms, and incidences of 2-5 events/10 s. The ictal discharges manifested with larger peak amplitudes (≥1 mV), longer durations (mean values of ≥30 s), and less frequent incidences (mean interevent intervals of ≥90 s; Fig. 11E). Bath application of phenytoin (50-100 μM for 8-10 min) suppressed CA3 or piriform ictal discharges in the 4 or 3 slices of kindled mice examined, whereas CA3 interictal spikes remained detectable in the presence of phenytoin (Fig. 11A-C). These observations are in line with previous findings that 4-aminopyridine-induced entorhinal ictal discharges, but not interictal events, were abolished by other clinically used antiepileptic drugs (carbamazepine, topiramate, and valproic acid) in rat brain slices (D'Antuono et al. 2010). The propensities to exhibit regional epileptiform field potentials were different between slices of kindled and control mice (Fig. 11D).

(Figure 11 caption: epileptiform field potentials induced by high-bicarbonate ACSF. (A-C) Extracellular traces collected from slices of 2 kindled mice and a control mouse; top, continuous traces illustrated after treatment with a band-pass filter (2-500 Hz), with phenytoin (100 μM) application times indicated above the traces and arrowed events illustrated in a wide frequency range (0-1000 Hz); note in (A and B) that phenytoin suppressed ictal discharges but not interictal spikes, and in (C) that interictal spikes persisted following phenytoin application. (D) Proportions of slices with or without induced epileptiform field potentials; *, kindled versus control; $, piriform versus CA3 or entorhinal area; #, CA3 versus piriform or entorhinal area; Chi-square or Fisher's exact test, P < 0.05. (E) Discharges measured from the CA3, piriform, and entorhinal areas (5-14 events/area); #, CA3 versus piriform or entorhinal area, 1-way ANOVA, P < 0.05.)

Monitored from the CA3 area, interictal spikes were observed in 23 slices and ictal discharges in the remaining 14 slices of kindled mice, whereas only interictal spikes were detected in all 33 slices of control mice. Monitored from the piriform area, ictal discharges were observed in 14/15 slices of kindled mice, whereas only interictal spikes were detected from 3/15 slices of control mice. Monitored from the entorhinal area, interictal spikes or ictal discharges were observed from 5 or 7 of 18 slices of kindled mice, whereas only interictal spikes were observed in 8/15 slices of control mice. The proportion of slices with detected CA3 or piriform ictal discharges was significantly greater in slices of kindled mice than in slices of control mice (Fisher's exact test, P ≤ 0.01; Fig. 11D).
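The CA3 proportion comparison just reported can be reproduced directly from the counts in the text; below is a minimal sketch using SciPy's Fisher exact test.

```python
from scipy.stats import fisher_exact

# CA3 recordings: kindled slices showed ictal discharges in 14 of 37 slices
# (the other 23 showed interictal spikes only); control slices showed 0 of 33.
table = [[14, 23],   # kindled: [ictal discharges, interictal spikes only]
         [0, 33]]    # control
odds_ratio, p = fisher_exact(table)
print(f"p = {p:.5f}")  # well below the reported P <= 0.01
```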
In addition, while regional ictal discharges appeared at similar times following perfusion of high-bicarbonate ACSF (4.02 ± 0.44, 4.75 ± 0.54, and 4.46 ± 0.98 min for the CA3, piriform, and entorhinal areas, respectively), the proportion of slices with detected ictal discharges was significantly greater for the piriform than for the CA3 or entorhinal area (Fisher's exact test, P ≤ 0.05; Fig. 11D). Together, these observations suggest that self-sustained ictal discharges induced by high-bicarbonate ACSF may be a unique feature of slices of extended kindled mice and that the ability to generate such in vitro ictal discharges is not limited to the kindled hippocampal CA3 circuitry.

Brain Histological Observations
We obtained coronal brain sections (50 μm thick) from 8 extended kindled mice with detected SRS and from 6 control mice. These sections were stained with cresyl violet for general morphological assessments. Putative tip locations of implanted electrodes were identified in 13 mice. These locations approximated the stereotaxic coordinates of the targeted hippocampal, cortical, entorhinal, or thalamic areas (Supplementary Fig. 1). Gross brain injury unrelated to implanted electrodes, such as structural deformities, cavities, and dark-stained scar tissues previously observed in mouse models of brain ischemia (El-Hayek et al. 2011; Wang et al. 2015; Wu et al. 2015; Song et al. 2018a), was not observed in these kindled and control mice (Fig. 12A-C). There was no evident atrophy in the kindled hemisphere, as the area ratios of the kindled versus contralateral hemisphere, assessed at 8 coronal levels, were not significantly different between the extended kindled and control mice (n = 6 each group, Fig. 12D). However, disrupted tissues along putative tracks of implanted electrodes were noted in 3 kindled mice. It is unclear whether such tissue disruption resulted from the initial insertion of electrodes and persisted afterward, happened due to electrode withdrawal during brain dissection, or both. Signs of discrete cell injury, such as shrunken cytoplasmic areas, dark-stained nuclei, and/or cell loss, were variably observed in hippocampal or other areas of kindled mice. Together, these observations suggest that extended kindled mice with SRS exhibit subtle-to-moderate cell injuries of varied degrees but not gross brain injury.

Similar SRS Progression in Mice With Different Electrode Implantations
In our present experiments, each mouse was implanted with 2 pairs of bipolar electrodes. One pair of electrodes was placed in the kindled hippocampus and another pair in an ipsilateral or contralateral unstimulated forebrain structure. The latter structure was alternated in 5 groups of mice and targeted the contralateral hippocampus, ipsilateral/contralateral parietal cortex, ipsilateral piriform cortex, contralateral dorsomedial thalamus, or ipsilateral entorhinal cortex. In response to extended hippocampal kindling, the numbers of stimuli, cumulative evoked ADs, and motor seizures needed to induce SRS were largely comparable in mice of the hippocampal-hippocampal, hippocampal-cortex, hippocampal-piriform, and hippocampal-thalamus groups. While data from mice in the hippocampal-entorhinal group were not included for group comparison due to a small sample size, overall, these observations suggest that the electrode implantation approach used in our experiments is not a major determining factor of SRS progression.
The SRS progression we observed in mice seems to be different from that previously observed in rats. In the rat model of extended kindling, the numbers of stimulations required to induce SRS were in a range of 92-508 (mean 348; Pinel and Rovner 1978a, 1978b), 192-277 (Michael et al. 1998), or 174-281 (Brandt et al. 2004). SRS emergence was also observed following 300 stimulations (Milgram et al. 1995) or after 90-100 evoked stage 5 motor seizures (Sayin et al. 2003). Cumulative durations of evoked ADs to SRS ranged from 13,761 to 27,405 s (Brandt et al. 2004). The measures from rats are apparently greater than those we observed from mice (Fig. 1C,E), raising the possibility that intracranial electrode implantation in the small mouse brain may promote SRS progression. However, we conducted hippocampal kindling in mice 11-13 months of age, and the previous studies kindled various structures (amygdala, hippocampus, entorhinal cortex, caudate, olfactory bulb, or perforant path) in adult rats (initial body weights of 200-500 g). It is conceivable that multiple experimental variables, including animal species/strains, animal ages, kindling sites, and intracranial electrode implantation, may influence SRS progression in the rodent model of extended kindling.

SRS With Predominant Expression of Stage 3-5 Motor Seizures
We assessed spontaneous motor seizures using the Racine 0-5 stages modified for mice (Racine 1972; Reddy and Rogawski 2010; Reddy and Mohan 2011) because of the similar behavioral appearances of spontaneous and evoked motor seizures in hippocampally kindled mice. Briefly, spontaneous stage 3-5 motor seizures manifested with bilateral forelimb clonus, rearing, and falling; spontaneous stage 0-2 motor seizures were recognized by behavioral arrest, chewing/facial movement, and head nodding/unilateral forelimb clonus. SRS that presented stage 3, 4, or 5 motor seizures were frequently observed from individual mice in the hippocampal-hippocampal, hippocampal-cortex, hippocampal-piriform, and hippocampal-thalamus groups, whereas SRS that expressed stage 0, 1, or 2 motor seizures were more frequently observed from mice in the hippocampal-thalamus group. Overall, proportions of SRS with expression of stage 0-2 motor seizures were greater in the hippocampal-thalamus group than in other groups. These observations suggest that hippocampally kindled mice may predominantly exhibit stage 3-5 spontaneous motor seizures and that perturbation of the dorsomedial thalamic circuitry by chronic electrode implantation may influence the expression pattern of spontaneous motor seizures. While the underlying mechanisms remain to be explored, the latter may be in line with the modulatory roles of the dorsomedial thalamic circuitry previously observed in other seizure models (Bertram et al. 2001; Sloan et al. 2011; Bertram 2014). There were limitations and complications in our assessments of spontaneous motor seizures. We analyzed spontaneous motor seizures that were accompanied by EEG discharges, but we did not detect spontaneous motor seizures by video analysis alone. As such, we might have underestimated spontaneous motor seizures, such as the myoclonic jerks associated with interictal spikes previously described in extended kindled rats (Pinel and Rovner 1978a, 1978b; Michael et al. 1998; Brandt et al. 2004). In addition, we used only a webcam to monitor motor activity for each mouse.
It was difficult to recognize stage 0-2 motor seizures when the mouse was not facing the webcam, which might have introduced more errors in scoring stage 0-2 than stage 3-5 motor seizures.

(Figure 12 caption fragment: cryostat coronal sections (50 μm) were stained with cresyl violet, and images were obtained by a slide scanner at ×20 magnification; kindled hemispheres are indicated in (B and C). (D) Ratios (%) of bilateral hemispheric areas; hemispheric areas were measured at 8 coronal levels as indicated on the x-axis; data were obtained from 6 control and 6 kindled mice, and there was no significant group difference at each coronal level tested (Student's t-test or Mann-Whitney U test, P > 0.05).)

Moreover, we did not isolate EEG-video monitoring from environmental noise, and we used dim lighting during the light-off period. These factors might have influenced the incidence and severity of spontaneous motor seizures detected in our experiments. In rats that underwent extended amygdala kindling and then combined EEG-video monitoring (12 h per day for 7 consecutive days), 3-12 SRS events with expression of stage 4-5 motor seizures were observed during the monitoring periods (Brandt et al. 2004). In our experiments, extended kindled mice exhibited 3-11 SRS events per day, and SRS with expression of stage 3-5 motor seizures accounted for ≥85% of total SRS observations in most mice. If our observed stage 3-5 motor seizures are comparable to the stage 4-5 motor seizures recognized in kindled rats (Brandt et al. 2004), then the expression of generalized spontaneous motor seizures appears to be more frequent in hippocampally kindled mice than in amygdala-kindled rats (Brandt et al. 2004). The above-mentioned experimental confounds may also partly explain this apparent difference.

SRS Manifested With Concurrent EEG Discharges in Forebrain Areas
We monitored spontaneous EEG discharges from the kindled hippocampus and an unstimulated forebrain structure in mice of the 5 implantation groups. In nearly all SRS events with decipherable EEG signals in both implanted areas, discharges were found to be coexpressed in the 2 corresponding structures. Most discharges began with LVF signals irrespective of the targeted structures and associated motor seizures, and the LVF onsets of corresponding regional discharges appeared concurrently in all corresponding discharge events examined. Collectively, these observations suggest that concurrent discharges of the kindled hippocampus and corresponding unstimulated forebrain structures are a predominant electrographic feature of SRS in hippocampally kindled mice. We used local differential recording through twisted bipolar wire electrodes to sample "local" EEG signals while attenuating influences of remote signals. The discharges simultaneously recorded from the kindled hippocampus and corresponding unstimulated structure were generally different in waveform. Additionally, "focal" spikes before or following discharges were observed from the kindled hippocampus but not from corresponding forebrain structures. Hippocampal discharges alone were also observed in some of the corresponding hippocampal-thalamic recordings. While region-specific signals were observed, caution should be taken when interpreting these observations. EEG signals recorded via fine intracranial electrodes are generally referred to as local field potentials (Buzsáki et al. 2012).
Multiple sources can contribute to local field potentials, including synaptic activities, intrinsic ionic currents, electrotonic interactions via gap junctions among neurons and non-neuronal cells, and ephaptic effects. The waveform, amplitude, and frequency of the local field potentials depend upon the proportional contributions of these sources, the distances between these sources and the recording site, the geometry and architecture of the recorded brain tissue, and the impact of volume conduction (Buzsáki et al. 2012). The latter is particularly pertinent to our present experiments, as the impact of volume conduction is conceivably greater in the small mouse brain than in the larger brains of other animal species. We performed PAC and WPC analyses to explore the dynamics of regional discharges and the impacts of volume conduction. PAC is a type of cross-frequency coupling analysis in which the phase of slow oscillations modulates the amplitude of faster oscillations (Tort et al. 2010). PAC has been increasingly used to investigate physiological and pathological brain activities, including epileptic discharges (Hyafil et al. 2015). For EEG discharges recorded by subdural electrodes from epilepsy patients who were surgical candidates, there was strong PAC between the slow (2-9 Hz) and fast (50-200 Hz) oscillatory signals in the middle and later epochs of regional discharges. Such PAC was more prominent in recording sites within than in those outside a surgically confirmed seizure onset zone, suggesting a sensitive marker of epileptogenic circuitries (Weiss et al. 2013; Zhang et al. 2017; Bandarabadi et al. 2019; Grigorovsky et al. 2020). A dynamic and temporal evolution of PAC was noticed for spontaneous regional discharges recorded from kindled mice. The slow oscillation whose phase modulated high-frequency oscillations started mainly in the 1-4 Hz band at discharge onset and then increased to as high as 16-20 Hz with discharge progression. Although there was no consistent PAC difference at discharge onset, PAC in the middle and/or later parts of discharges was distinct between the kindled hippocampus and corresponding unstimulated forebrain structures. Specifically, mean PAC indexes during discharges were significantly greater or smaller in the kindled hippocampus than in the corresponding parietal cortex, dorsomedial thalamus, or piriform cortex. While these PAC differences are limited in assessing regional epileptogenicity, as discharges were monitored from only 2 sites in each mouse, they do suggest that discharges from the kindled hippocampus and corresponding unstimulated forebrain structures differ in oscillatory activities and modulation patterns. WPC analysis measures possible correlations between 2 signals by assessing their differences in instantaneous phase. Such phase differences can be presented in a time-frequency scalogram that shows temporal phase changes for different frequencies. In comparison to other relevant analyses, WPC allows separation of the effects of amplitude and phase when measuring the relations between 2 simultaneously recorded signals. As fast brain oscillations are typically associated with low amplitudes, WPC is an effective tool for the study of spatiotemporal relationships spanning the frequency domain (Thatcher 2012). WPC analysis revealed temporal phase changes of corresponding regional discharges recorded from kindled mice. In general, an increase in signal phase coherence was not evident for discharge onsets relative to predischarge activities.
Phase-locked or near phase-locked signals became appreciable for the early, middle, and/or later epochs of corresponding regional discharges, but these coherent signals were expressed with variable lengths and in relatively narrow frequency bands in most discharge events analyzed. Such variable and nonuniform expression of coherent discharge signals cannot be fully explained by volume-conducted remote signals, assuming that the latter, transmitted through a purely homogeneous and isotropic ohmic environment, are phase-locked and more effective for low- than high-frequency signals when recorded from 2 independent sites (Buzsáki et al. 2012). However, volume conduction in the mouse brain, particularly during epileptic discharges, may not be homogeneous and ohmic, and phase coherence was assessed for only 2 regional discharges in our analysis. As such, it is difficult to determine the contribution of remote volume-conducted signals to the observed regional discharges. Considering that these coherent discharge signals were accompanied by dissimilar regional PAC, we postulate that the concurrent discharges observed from the kindled hippocampus and corresponding unstimulated structure represent integrated activities that may largely arise from the local circuitry but encompass signals from surrounding or remote circuitries through volume conduction. The rodent hippocampus has strong bilateral communications through the dorsal and ventral hippocampal commissures (Amaral and Witter 1998). In particular, CA3 pyramidal neurons have divergent and monosynaptic projections to the contralateral hippocampus (Shinohara et al. 2015). Hippocampal projections to other brain structures, such as the parietal cortex, piriform cortex, dorsomedial thalamus, and entorhinal cortex targeted in our present experiments, are weak and indirect relative to the bilateral hippocampal connections. If spontaneous discharges originated primarily from the kindled hippocampus and then spread to other brain structures, the contralateral hippocampus would be expected to receive stronger spread discharge signals and show a faster discharge onset relative to other structures. However, regional LVF onsets with similar concurrence were noted between bilateral hippocampal discharges and between hippocampal and cortical discharges or other corresponding regional discharges. It is possible that spontaneous discharges spread from the kindled hippocampus to other unstimulated structures, but that such spread was too fast in the small mouse brain and the onset time lag of corresponding regional discharges was too small to be recognized, possibly obscured by the complex signals immediately preceding the LVF onsets. However, it is difficult to postulate such a fast spread of the LVF signals from the kindled hippocampus to other unstimulated structures. Alternatively, spontaneous discharges might arise concurrently in multiple forebrain structures, including the kindled hippocampus, through a macroscopic epileptic activity, which may be triggered by a common drive yet to be identified (Li et al. 2018). In line with this view, bilateral entorhinal and DG discharges with simultaneous LVF onsets have been demonstrated in a rat model of unilateral intrahippocampal injection of kainic acid (Bragin et al. 2005). Regardless of the underlying mechanisms, corresponding regional discharges with concurrent LVF onsets are a main electrographic feature of SRS in hippocampally kindled mice.
A main technical limitation of our EEG recordings is that only 2 sites were monitored in individual mice. This approach might minimize complications associated with multielectrode implantations in the small mouse brain, but it precluded simultaneous detection of discharges from multiple brain structures in each mouse. In addition, 2 recording sites in the small mouse brain are limited in assessing regional discharge spread. Moreover, tip locations of implanted electrodes were histologically examined in a limited number of kindled mice with SRS. While data collected from mice in the 5 implantation groups are supportive of the concurrent expression of forebrain regional discharges, further studies that simultaneously record EEG signals from multiple brain structures and histologically depict recording sites in individual mice are required to characterize the temporal relation of regional discharges in our model.

Epileptiform Discharges Induced in Brain Slices In Vitro
We conducted brain slice experiments to examine whether local circuitry activities are altered by extended kindling. In slices perfused with standard ACSF, the amplitudes of evoked CA3 and DG population spikes were not significantly different between kindled and control mice, but DG spike inhibition by paired stimuli (interstimulus intervals of 5-25 ms) was attenuated in kindled mice. Basic intracellular parameters and reversal potentials of pharmacologically isolated IPSCs were largely comparable between CA3 pyramidal neurons of kindled and control mice, but CA3 in vitro SPWs were smaller and less frequent, and SPW-related synaptic currents in CA3 pyramidal neurons were more excitatory, in kindled mice. However, epileptiform field potentials, occurring either spontaneously or as self-sustained responses following local afferent stimulation, were not observed from slices of kindled mice when perfused with standard ACSF. While we focused on the CA3 area, our present observations are principally in line with previous findings in amygdala-kindled rats with SRS (Sayin et al. 2003). In brain slices obtained from extended kindled and control rats, DG spike inhibition by paired stimuli (interstimulus intervals of 15 or 25 ms) was attenuated in kindled rats compared with control rats. In addition, the amplitudes and decay times of evoked IPSCs were decreased in DG granule neurons of kindled rats relative to neurons of control rats, but IPSC reversal potentials and basic intracellular parameters were comparable in both groups of DG neurons. Taking the previous findings (Sayin et al. 2003) and our present observations together, we suggest that local circuitry activities of the CA3-DG areas may be altered toward hyperexcitability in extended kindled rats and mice, but such alterations do not lead to the genesis of population epileptiform activities when examined under standard in vitro conditions. We adopted the approach of high-bicarbonate ACSF (Huberfeld et al. 2011) to examine self-sustained epileptiform activities in slices. Perfusion of slices with high-bicarbonate ACSF induced 2 types of self-sustained epileptiform field potentials. The interictal spikes were observed from slices of kindled and control mice, but the ictal discharges were observed only from slices of kindled mice. The ictal discharges, but not the interictal spikes, were suppressed by phenytoin, which is in line with phenytoin's effects on EEG activities observed from extended kindled mice (Song et al. 2018b) and with the effects of other clinically used antiepileptic drugs demonstrated in other in vitro models (D'Antuono et al. 2010).
In addition, the ictal discharges emerged from the CA3, piriform, and entorhinal areas at similar times following application of high-bicarbonate ACSF, but the propensity to exhibit ictal discharges was greater in the piriform cortical area relative to the kindled CA3 area. While this difference might be due to the extents of the CA3 and piriform circuitries preserved in brain slices and their reactions (see below) to alkaline ACSF, it did suggest that the ability to exhibit self-sustained ictal discharges following exposure to high-bicarbonate ACSF is not limited to the kindled hippocampal CA3 circuitry. Further work is needed to examine this issue by inducing ictal discharges via different manipulations and in other brain areas. The major changes in high-bicarbonate ACSF relative to our standard ACSF were a decrease of NaCl (from 125 to 71 mM), increases of KCl (from 3.5 to 6.5 mM) and NaHCO3 (from 25 to 80 mM), and an alkaline shift in pH (from 7.35-7.4 to 7.8-7.9) when aerated with 95% O2-5% CO2. These changes likely induced epileptiform activities by diverse, synergistically acting mechanisms, including enhanced ionic and synaptic activities by extracellular alkalization, positive shifts in resting membrane potentials and in the reversal potential of GABAA receptor-mediated IPSPs/IPSCs, and aberrant activities mediated by connexin and pannexin channels (Ruusuvuori and Kaila 2014; de Curtis et al. 2018; Scemes et al. 2019). It is conceivable that the above-mentioned ionic and synaptic processes, particularly the glutamatergic activity (Huberfeld et al. 2011), may be altered in extended kindled mice, thereby rendering the expression of self-sustained ictal discharges in slices. While these alterations remain to be investigated in our model, the use of high-bicarbonate ACSF seems a reliable in vitro protocol to reveal heightened ictogenesis of epileptic circuitries in rodent models of chronic SRS.
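The reported pH values follow from standard bicarbonate buffer chemistry, which can be checked with a quick back-of-envelope calculation. The sketch below assumes the textbook Henderson-Hasselbalch constants (pKa ≈ 6.1, CO2 solubility 0.03 mM/mmHg) and a 5% CO2 gas phase (~38 mmHg), none of which are stated in the original methods.

```python
import math

def bicarbonate_ph(hco3_mM, pco2_mmHg=38.0, pka=6.1, co2_solubility=0.03):
    """Henderson-Hasselbalch pH of a bicarbonate-buffered solution
    equilibrated with a fixed CO2 partial pressure."""
    return pka + math.log10(hco3_mM / (co2_solubility * pco2_mmHg))

print(f"{bicarbonate_ph(25):.2f}")  # ~7.4, matching the standard ACSF
print(f"{bicarbonate_ph(80):.2f}")  # ~7.9, matching the high-bicarbonate ACSF
```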
No Evident Gross Brain Injury in Extended Kindled Mice Examined
We conducted basic histological assessments in a limited number of extended kindled mice with SRS. Tissue disruptions related to implanted electrodes and discrete cellular injuries of varied degrees were observed, but gross brain injury independent of electrode implantation was not evident in the kindled mice examined. The latter observations are in general agreement with previous studies in extended kindled rats (Pinel and Rovner 1978a, 1978b; Milgram et al. 1995; Michael et al. 1998; Sayin et al. 2003; Brandt et al. 2004). However, we did not perform stereological cell counts and immunocytochemical assessments of GABAergic interneurons as previously demonstrated in extended kindled rats with SRS (Cavazos et al. 1994; Sayin et al. 2003; Brandt et al. 2004). In rats that experienced 150 evoked stage 5 seizures, hippocampal volumes were not reduced, but neuronal densities in hippocampal subfields were decreased to 51-81% of control measures. Neuronal densities were also decreased to 76-90% of controls in subfields of the entorhinal cortex but not in the somatosensory cortex (Cavazos et al. 1994). In rats that exhibited SRS following 90-100 evoked stage 5 seizures, the numbers of cholecystokinin-positive GABAergic interneurons in the DG were decreased by 25-76% compared with those in control rats (Sayin et al. 2003; but see Brandt et al. 2004). The decrease in dentate GABAergic interneurons was associated with attenuated dentate spike inhibition by paired stimuli and decreased IPSCs in dentate granule neurons (Sayin et al. 2003). These neuronal losses are thought to resemble pathological findings in human temporal lobe epilepsy (Cavazos et al. 1994; Sayin et al. 2003; Aronica et al. 2017). It is likely that hippocampally kindled mice with SRS suffer similar neuronal loss to that previously characterized in extended kindled rats (Cavazos et al. 1994; Sayin et al. 2003). Further work is needed to characterize such neuronal injuries and determine their impacts on the progression, incidence, and electrographic initiation of SRS. Considering that bilateral EEG discharges with simultaneous LVF onsets were observed from rats following unilateral intrahippocampal injection of kainic acid (Bragin et al. 2005), it is conceivable that complex mechanisms beyond the initial local brain injury may underlie generalized seizures in rodent models of chronic SRS. In this context, the concurrent regional EEG discharges we observed from extended kindled mice with SRS are unlikely to be explained by local brain injuries.

Summary
The main objectives of our present experiments were to detail the electrographic features of SRS in the mouse model of extended hippocampal kindling. Using intracranial recordings from the kindled hippocampus and different unstimulated forebrain structures in individual mice, we found that EEG discharges with LVF onsets occurred almost simultaneously at corresponding recording sites in nearly all SRS detected and that regional discharge durations were largely unrelated to the severities of the associated motor seizures. By examining local circuitry activities in brain slices, we found that alkaline ACSF induced ictal-like discharges in the CA3, piriform, and entorhinal areas of kindled mice but not control mice, and that the piriform area had a greater propensity than the kindled CA3 area to generate such in vitro discharges. Together, these in vivo and in vitro observations support the hypothesis that epileptic activities involving a macroscopic network may generate concurrent discharges in forebrain areas and initiate SRS in hippocampally kindled mice. It is our hope that our present experiments will help further investigations examining kindling-induced SRS in mouse models of neurological diseases (Sabetghadam et al. 2020).

Supplementary Material
Supplementary material can be found at Cerebral Cortex Communications online.

Notes
Conflict of Interest: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Spatial Assessment of Forest Fire Distribution, Occurrence and Dynamics in Province-2, Nepal

This study was conducted to assess the trends of forest fire occurrence and burnt area and the causes and management measures of forest fire in Province-2, Nepal. Altogether, 48 questionnaire surveys and 32 key informant interviews were organized to collect primary data, while secondary data were collected from the Fire Information for Resource Management System (FIRMS) for the years 2002 to 2019. A total of 36 maps were produced showing fire occurrences (18) and burnt area patterns (18) of the targeted area. Results showed a total of 5289 forest fire incidents, with a total of 499,538.9 ha burnt from 2002 to 2019. The highest number of forest fire incidents was observed in March, with 2975 incidents covering 56.24%. The highest incidence was recorded in Lower Tropical Sal and Mixed Broadleaf Forest, with 3237 observations. One-way ANOVA showed that fire occurrence and burnt area among Lower Tropical Sal and Mixed Broadleaf Forest (LTSMF), Hill Sal Forest (HSF), and the Outside Forest Region (OFR) were significantly different at the 95% confidence level. Mann-Kendall correlation showed a positive correlation (R = 0.393) between year and forest fire occurrence in LTSMF, as well as between year and burnt area of HSF (R = 0.09). Principal Component Analysis in Parsa district showed that unextinguished cigarette butts and litter fall were positively correlated. Abbreviations: Rainfall; LDP: Litter fall; NF: Natural Fire; Cigarette Smoke; Unnecessary Weeds and Litter; Strict Rules and Fines Against Forest Fires Caused by Carelessness; CPMFF: Community Participation in Management of Forest Fire.

Introduction
Forest resources are a vital requirement for every living being to survive on the earth, as without them life cannot be imagined. Globally, there are many types of limiting factors that affect the growth of forests [1]. Some of the important limiting factors are forest fire, extreme rainfall, erosion, illegal logging, and invasive species; large forest fires have been reported in the Amazon [5]. A number of forest fire incidents were recorded and over 1.2 million trees were lost in Pakistan's northwest Khyber Pakhtunkhwa province during the period from July 2018 to June 2019 [6]. In Sri Lanka, a massive forest fire was recorded that broke out in the Maragala Mountain area in Moneragala, in the Uva Province; extreme temperatures of over 400 °C were recorded nearby, and about 2.023 km² of forest area was damaged [6]. Over 2,318.88 hectares of forest were lost in July 2019 in Thailand. MODIS data showed that over 30,000 forest fires took place in India in 2019, and around 95 percent of the forest fires in India were on account of human activity [7]. Forest fires occur every year in Nepal too, destroying large areas of forest. We have no database of how much damage is caused and no record of the number of lives lost every year due to forest fire. Thus, such research is important, particularly in Province-2, Nepal, which is the area most vulnerable and prone to forest fire [8,9]. Of the total forest fire incidences, about 58% of forest fires are caused by deliberate burning by grazers, poachers, and non-timber forest product collectors [10], 22% are caused by negligence, and 20% occur by accident [11-13]. Along with GIS, MODIS data are very useful for analyzing forest fires and the damage caused [8,14,15].
Nepal lacks proper institutional, financial, and technological capability to combat forest fires, and the country also lacks a full record of forest fire occurrence and its impacts [8,16]. So, to bridge this gap in the assessment of the trend and dynamics of forest fire, this study attempts to map forest fire dynamics, to assess the trend of forest fire occurrence and burnt area patterns in Province-2 of Nepal using a Geographic Information System (GIS), and to explore the causes of forest fire and its management options in the targeted area.

Study Area

The study area is located in Province-2 of Nepal. Province-2 lies in the southeastern region of Nepal and was formed after the adoption of the new Constitution of Nepal in 2015.

Primary Data Collection

Primary data and information were collected using PRA tools, namely a questionnaire survey and key informant interviews. The questionnaire survey was conducted among 48 respondents (6 from each of the 8 districts), and 32 key informant interviews were organized (4 from each of the 8 districts). Across both tools, respondents were from Parsa (10), Bara (10), Rautahat (10), Sarlahi (10), Mahottari (10), Dhanusha (10), Siraha (10) and Saptari (10). For the key informant interviews, informants with a deeper understanding of the subject matter were selected: the 32 key informants were forest officials, local users, and community forest user group (CFUG) members from the 8 districts. Perceptions of damage, causes, consequences, and the adaptation measures required to control forest fire were gathered using these PRA tools. Close-ended questions were prepared about fire, its causes, preventive methods, and management strategies. Email communication, telephone communication, and field visits were used to collect the required data. The close-ended questions were ranked as a series of statements on a Likert scale, with respondents rating their views from 5 ("highest score") to 1 ("lowest score"). These scores were then analyzed using the Principal Component Analysis (PCA) method.

This study analyzed the trend of forest fire occurrence and mapped the fire incidence and burnt area dynamics of the forests of Province-2 of Nepal using active fire records from multi-temporal data of the Moderate Resolution Imaging Spectroradiometer (MODIS) satellites, which provide processed data to universities and research institutes as part of the academic frontier project. The MODIS active fire product detects fires in 1 × 1 km pixels that are burning at the time of overpass under relatively cloud-free conditions [18,19]. Nepal does not have its own reliable information, tools, and technology for fire detection and monitoring, so MODIS vector data were downloaded from NASA's official website.

MODIS Data

The forest fire point and burnt area shapefiles were archived from the MODIS data as data sources for this study. The MODIS burned area product (MCD64) was extracted from NASA's official website (FIRMS), which provides burned area data for the whole of South Asia (Table 1).

Ground Verification

The data provided by NASA (MODIS fire occurrence and burnt area data) were verified against the information and data provided by the Department of Forest and Soil Conservation and the forest officials of each district of Province-2. Records of GPS coordinates of fire occurrences, available from the Department of Forest and Soil Conservation and provided by its officials, were used to validate the fire occurrences. The locations of the burnt areas were verified through the key informant interviews during data collection.
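To make the survey-analysis step concrete, the following is a minimal sketch of how Likert-scale scores can be reduced with PCA, in the spirit of the procedure described above. The array shape and random scores are illustrative stand-ins, not the study's data; the sketch assumes scikit-learn is available.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical Likert responses (rows = respondents, columns = candidate
# causes/management items), scored 1 (lowest) to 5 (highest).
rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(48, 7)).astype(float)

# Standardize items so the PCA is effectively computed on the correlation matrix.
z = StandardScaler().fit_transform(scores)

pca = PCA()
components = pca.fit_transform(z)
print("Explained variance ratio:", pca.explained_variance_ratio_.round(3))
print("Loadings of first component:", pca.components_[0].round(3))

Standardizing the items first means the components are extracted from the correlation matrix, which is the usual choice when all items share a common 1-5 scale.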
Field visits were also carried out to record the locations of areas where fires frequently occur.

Spatial Analysis

MODIS fire data were used to analyze the patterns and trends of forest fire incidence and burnt area in Province-2 from 2002 to 2019. ArcGIS 10.5 was used to analyze and interpret the satellite images, GIS layer data, burnt area patterns, and forest fire trends based on the density of fire accumulation. Statistical data were entered and analyzed using SPSS. Charts and graphs were prepared using Microsoft Excel 2010 and Microsoft Word 2010 (Table 2).

Spatio-Temporal Distribution of Forest Fire Occurrence

Out of the total detections, about 4,991 fires were detected with greater than 50% confidence. Forest fire distribution patterns in Province-2 over the last 18 years are presented in Figure 2.

Month-Wise Spatio-Temporal Distribution of Forest Fire Occurrence

Forest fire occurrence on a monthly basis was studied for each year. The results showed that the highest number of fire incidents occurred in March.

Spatio-Temporal Distribution of Burnt Area Based on Forest Types

The burnt area pattern differed according to forest type. ANOVA showed a significant relationship between burnt area and fire incidence in HSF at the 95% confidence level, as the P-value was 0.001 (i.e., P < 0.05). The variables were also examined using a t-test, which likewise showed a significant relationship between burnt area and forest fire incidence (P = 0.001).

Correlation between Fire Incidence and Burnt Area in the Outside Forest Region

Curve estimation showed a positive correlation (R = 0.562) between burnt area and forest fire incidence in the OFR. The fitted equation was y = 28.340*x + 764.408, where y is the burnt area coverage and x is the number of fire incidents. The equation indicates that burnt area coverage in the Outside Forest Region increased by 28.340 ha for each additional fire incident (Figure 17). ANOVA and a t-test were applied to test the relationship between burnt area and forest fire incidence in the Outside Forest Region (OFR). ANOVA showed a significant relationship at the 95% confidence level, as the P-value was 0.015 (i.e., P < 0.05); the t-test likewise showed a significant relationship between the two variables (P = 0.015) (Table 3).

Causes and Management Practices of Forest Fire

The results showed that there were six major causes of forest fire. The key informants also provided information on the causes of forest fire. About 75% of them responded that most forest fires are caused mainly by human carelessness, such as unextinguished cigarettes, debris burning, and careless handling of fire, while the remaining 25% cited causes such as delayed rainfall and the accumulation of litter (fuel). When asked about difficulties in fire control and management, 65% of informants said that community participation in forest fire control has improved in recent years, as awareness programs have been facilitated by the Division Forest Office. However, the local viewpoint was quite different from that of the forest officials: locals mentioned that the forest fire rate increases due to the lack of fire control tools, training, and patrolling provided by the responsible departments under the Division Forest Office.
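As an illustration of the curve estimation reported above, the following sketch fits a straight line of the same form (y = slope*x + intercept) to incidence/burnt-area pairs. The numbers are invented for demonstration and do not reproduce the study's values.

import numpy as np
from scipy import stats

# Hypothetical yearly pairs: fire incidences (x) and burnt area in ha (y);
# the real study uses 2002-2019 MODIS-derived values for each region.
x = np.array([12, 25, 31, 18, 44, 52, 60, 38, 71, 80], dtype=float)
y = np.array([900, 1500, 1800, 1200, 2100, 2300, 2500, 1900, 2800, 3100], dtype=float)

res = stats.linregress(x, y)
print(f"y = {res.slope:.3f}*x + {res.intercept:.3f}")  # same form as the reported fit
print(f"R = {res.rvalue:.3f}, p = {res.pvalue:.4f}")   # correlation and its significance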
Spatial and Temporal Trend of Forest Fire

The spatial and temporal distribution patterns of forest fire varied markedly across districts. Among the total forest fire events, 2,498 incidents were observed in Parsa district, 757 in Bara, 509 in Rautahat, 451 in Sarlahi, 339 in Mahottari, 257 in Dhanusha, 90 in Siraha, and 388 in Saptari. Out of the total detections, about 4,991 fires were detected with greater than 30% confidence. This research shows that the highest forest fire incidence in Province-2 occurs when weeds and undergrowth have low moisture content, owing to high temperature, low relative humidity and precipitation, and high wind velocity in this season [20]. This study also shows that the lowest fire incidence was observed from June to October, which is supported by Khanal (2015) [21], who reported low fire activity between July and October in Nepal. Comparing this study with studies of forest fire in other South Asian countries, the months of fire occurrence were found to be broadly similar to those in Nepal. The study by Tian (2013) [22] shows that among forest fires in China, March was the month with the most fires (60.0%), followed by April.

Major Causes of Forest Fire and its Management Practices

In the Report on Fires in the South Asian Region it is mentioned that Bhutan's climate conditions during winter (lack of rainfall and high wind velocities) strongly favor the ignition of fires. In addition, the end of the dry winter season is used to prepare fields by burning, and it is very common for these fires to escape and cause damage. Based on this research, long dry periods, delays in rainfall, and uncontrolled burning of accumulated litter were found to be major causes of forest fire in Province-2, which is consistent with the research by Benndorf (2008) [28]. This study shows that community involvement in the management of forest fire is one of the most important strategies for controlling forest fire in Province-2, which is supported by the study by Padillah (2006) [27]. Community involvement in monitoring and managing forest fire should be considered effective, as communities possess valuable knowledge of place, fire history, traditional management practices, and fuel loading, and having ownership makes them feel more inclined toward, and responsible for, management practices. Our study also depicts the increasing need for public awareness programs as a management strategy, which is likewise supported by studies in Gambia, Honduras, and India [29]. Public awareness can be an effective tool for fire management, as an increase in public awareness programs can lead to more active participation of forest users in forest fire management and forest biodiversity enhancement activities. Our study clarifies that the creation of fire lines and firebreaks can be used as an efficient tool for forest fire management, which is consistent with the study by Padillah (2006) [27].
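For readers wishing to reproduce the kind of trend testing reported in the abstract (Mann-Kendall correlations between year and fire occurrence), the following is a minimal, self-contained sketch of the Mann-Kendall test using the normal approximation. The annual counts are hypothetical, and the tie correction is omitted for brevity.

import numpy as np
from scipy import stats

def mann_kendall(series):
    """Mann-Kendall trend test: returns the S statistic, the normal-approximation
    Z score, and a two-sided p-value (no tie correction, for brevity)."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    # S counts concordant minus discordant pairs over all i < j.
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - stats.norm.cdf(abs(z)))
    return s, z, p

# Hypothetical annual fire counts for one forest type, 2002-2019.
counts = [110, 140, 95, 180, 160, 200, 170, 220, 150, 260,
          230, 210, 280, 190, 300, 240, 310, 290]
print(mann_kendall(counts))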
Fire lines are an effective management approach, as the construction of fire lines around and inside the forest helps to break up fuels, which ultimately helps to segregate, stop, and control forest fire. Similarly, regular forest fire patrolling, the removal of unwanted weeds and litter, and strict laws and fines against carelessness with fire are other important practices, shown by this study, that must be taken into consideration for the management of forest fire in Province-2, which is also supported by the study by Benndorf (2008) [28]. The study by Benndorf [30-32] states that practices such as the provision of a legal and financial basis for fines, strict law enforcement, the provision of basic tools and materials for fire patrolling, and the launching of forest fire management programs must be reinforced, which strongly supports the findings of this research for the management of forest fire in Province-2.

Conclusion

Fire incidence was found to vary across years and districts of Province-2. The highest forest fire occurrence
2021-08-20T18:43:42.929Z
2021-04-20T00:00:00.000
{ "year": 2021, "sha1": "4475a569088390119c3b4437732e458b452fdeca", "oa_license": "CCBY", "oa_url": "https://biomedres.us/pdfs/BJSTR.MS.ID.005666.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "28b60758bd9462d3597004950b79dafc3f5888a9", "s2fieldsofstudy": [ "Environmental Science", "Geography" ], "extfieldsofstudy": [ "Environmental Science" ] }
201103791
pes2o/s2orc
v3-fos-license
Single-shot carrier-envelope-phase measurement in ambient air

The ability to measure and control the carrier-envelope phase (CEP) of few-cycle laser pulses is of paramount importance for both frequency metrology and attosecond science. Here, we present a phase meter relying on the CEP-dependent photocurrents induced by circularly polarized few-cycle pulses focused between electrodes in ambient air. The new device facilitates compact, single-shot CEP measurements under ambient conditions and promises CEP tagging at repetition rates orders of magnitude higher than most conventional CEP detection schemes, as well as straightforward implementation at longer wavelengths.

Laser sources for near single-cycle pulses in the near-infrared [1] and infrared [2] developed in the past two decades allow the study of light-matter interactions with a temporal resolution reaching a few tens of attoseconds, well below the period of an optical cycle [3]. One of the keys to achieving such high temporal resolution is the ability to control the carrier-envelope phase (CEP) of the laser pulses. Mathematically, the electric field of a Fourier-limited laser pulse propagating in the z-direction can be described as

E(z, t) = (E_0/√(1+ε²)) exp(−2 ln 2 (t − z/c)²/τ²) [cos(ω(t − z/c) + φ) x̂ + ε sin(ω(t − z/c) + φ) ŷ],

where E_0 is the electric field amplitude, ε the ellipticity, ω the carrier frequency, τ the pulse duration, and φ the CEP. In attosecond science, sub-cycle temporal resolution is achieved by the nonlinear gate induced by the strongest cycle in a few-cycle pulse. While the pulse envelope remains rather stable from shot to shot, the CEP is prone to vary due to fluctuations of dispersion (caused by changes in path length) and of the pump energy experienced by consecutive pulses in a pulse train. Fluctuations of the CEP translate into a time jitter of the temporal gate of about half a period of the driving light pulse, thus deteriorating the temporal resolution. It is therefore important to measure and stabilize the CEP accordingly. Several schemes have been devised to measure the CEP. The f-2f technique, for example, relies on the spectral interference of the fundamental and second harmonic of sufficiently broadband fields [4-7]. While recent progress has facilitated single-shot CEP measurements at a central wavelength around 800 nm [8], the f-2f technique is limited by the availability of spectrometers, and thus is not easily transposable to different wavelength ranges and is limited in acquisition rate (so far to 10 kHz [9]). A variant of the f-2f setup in which the grating spectrometer is replaced by temporal dispersion in a km-long fiber and fast photodiode detection (TOUCAN) removes some of these limitations while relying on a careful selection of the dispersive medium [10]. Another well-established and widely used technique of the last decade relies on above-threshold ionization (ATI) of rare gas atoms (typically xenon). While the use of ATI was initially proposed for both circularly [11] and linearly [12] polarized pulses, its implementation known as the stereo-ATI phase-meter [13] relies on time-of-flight (TOF) measurements of ATI recollision electrons ionized by linearly polarized pulses. This technique has facilitated single-shot CEP measurement [14] at repetition rates up to 100 kHz [15], and has allowed major breakthroughs in the study of field-driven dynamics in atoms [16-19], molecules [20-22], nanostructures [23,24] and solids [25,26].
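As a quick numerical illustration of how the CEP sets the direction of the field maximum, the following toy sketch evaluates the Gaussian-envelope field form written above (a standard reconstruction, stated here as an assumption) with pulse parameters quoted later in the text (750 nm, 4 fs). It takes ideal circular polarization (ε = 1) so the angle of the field maximum equals φ exactly; with the experiment's ε = 0.84 the retrieved angle tracks φ only up to an offset, as noted below.

import numpy as np

c = 299792458.0
lam = 750e-9                  # central wavelength (m), as quoted in the text
omega = 2 * np.pi * c / lam   # carrier angular frequency
tau = 4e-15                   # pulse duration (s), FWHM of the intensity envelope
eps = 1.0                     # ideal circular polarization (experiment: 0.84)

t = np.linspace(-10e-15, 10e-15, 2001)

def field(time, phi, E0=1.0):
    # Gaussian field envelope times the two carrier quadratures.
    env = E0 / np.sqrt(1 + eps**2) * np.exp(-2 * np.log(2) * time**2 / tau**2)
    return env * np.cos(omega * time + phi), env * eps * np.sin(omega * time + phi)

for phi in (0.0, np.pi / 2, np.pi):
    ex, ey = field(t, phi)
    k = np.argmax(np.hypot(ex, ey))
    # For circular polarization the angle of the field maximum equals the CEP.
    print(f"CEP = {phi:.2f} rad -> angle of field maximum = {np.arctan2(ey[k], ex[k]):.2f} rad")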
Despite its great success, the stereo-ATI phase meter is a rather sophisticated apparatus relying on ultra-high-vacuum components and microchannel-plate detection. The main reason for the complexity is the need for an electron TOF measurement, which is only possible under ultra-high vacuum. Additionally, the unfortunate scaling of the recollision probability with wavelength, λ⁻⁵ to λ⁻⁶ [27], impedes the extension of the stereo-ATI phase meter to longer wavelengths. Therefore, the development of a more compact single-shot CEP measurement technique of reduced complexity, extendable in wavelength range (and potentially operating at higher repetition rates than the stereo-ATI phase meter), is highly desirable. It has been demonstrated that strong-field excitation and ballistic light-field acceleration of the conduction-band population in a wide-bandgap solid can produce an electric current whose direction and amplitude relate to the CEP of the laser pulse [28]. These currents can be detected using electronic amplifiers, enabling the measurement of the CEP [29]. Single-shot sensitivity has, however, so far not been achieved with this technique. The use of circularly polarized input pulses offers several advantages over a linearly polarized input [11,30,31]. During strong-field ionization of a single atom in a circularly polarized laser pulse, electrons are preferentially emitted in the polarization plane. When the Coulomb interaction with the ionic core is neglected, their drift momentum is perpendicular to the direction of the maximum electric field. Thus, the CEP can be directly retrieved from the preferred electron emission direction, since in this case it coincides with the angle of the maximum electric field. In general, the emission direction will coincide with the CEP up to a constant offset value [30]. Most importantly, no time-of-flight measurement is required with a circular-polarization phase-meter (CP-phase-meter), which allows the implementation of much simpler CEP-measurement devices. ATI-based CEP measurements using circularly polarized pulses have been simulated [30] and tested experimentally [32]. Alternatively, it has been shown that CEP measurements can also be performed by sampling the THz pulses emitted from laser-generated ambient air plasma [33-35]. Here, we demonstrate a compact single-shot CP-phase-meter relying on the measurement of transient electrical currents in an ambient air plasma. This new device is potentially the simplest conceivable implementation of the CP-phase-meter [30]. Combining the advantages of circular polarization and electric detection, it enables a straightforward single-shot CEP measurement under ambient conditions. The acquisition rate is only limited by the bandwidth of the high-gain electric amplifiers (currently MHz rates). The fact that the concept relies on direct ionization makes it easily extendable to pulses with longer wavelengths at comparable peak intensities.

EXPERIMENTAL SETUP

The experimental setup for the single-shot CEP characterization in air is shown in Fig. 1 (a). Circularly polarized few-cycle laser pulses are focused to a spot size of 32 µm full width at half maximum (FWHM) between three metal electrodes: two tip-shaped electrodes separated by 60 µm and a third, larger planar electrode positioned 90 µm below the two tips (see Fig. 1 (b)).
In the focus, the laser pulses reach peak intensities of about 2 × 10¹⁵ W cm⁻² and ionize the ambient air, inducing a transient current. For each laser pulse, the CEP-dependent direction of the transient current vector is probed by measuring the currents I₁ and I₂ flowing between each of the two tips and the ground electrode. The currents are amplified with a transimpedance amplifier with a gain of 10⁷ V/A. The two amplified single-shot signals (cf. Fig. 1 (c)), one for each tip (blue and red circuits in Fig. 1 (a)), are then integrated using a boxcar integrator. The boxcar DC voltage outputs Q₁ and Q₂, which are proportional to the charges flowing in the two circuits, are recorded for each laser shot using a DAQ card. The laser system used in the present study is a 10 kHz titanium:sapphire chirped-pulse amplification (CPA) system (Spectra Physics Femtopower HR CEP4) that delivers CEP-stable (down to ca. 100 mrad rms [8]) pulses with 700 µJ pulse energy, sub-25 fs pulse duration, and a central wavelength of about 780 nm. The output pulses are spectrally broadened in a gas-filled hollow-core fiber and compressed with a combination of chirped mirrors and fused silica wedges to sub-two-cycle duration, typically 4 fs (FWHM of the intensity envelope). The central wavelength is 750 nm. Pulses with an energy of about 100 µJ are sent through a broadband quarter-wave plate to convert their polarization from linear to near circular (ε = 0.84) and are focused between the electrodes with a spherical silver mirror (f = 350 mm). The CEP is controlled by changing the dispersion in the stretcher of the CPA multi-pass amplifier.

RESULTS

The measured signals Q₁ and Q₂ are plotted in Fig. 2 (a) for a series of 1650 consecutive laser shots recorded while linearly changing the CEP from 0 to 2π. Both signals were centered by subtracting the CEP-averaged value and normalized in amplitude. Note that both the offset subtraction and the normalization do not require a stable CEP and can be performed in the same way for a pulse sequence with a randomly fluctuating CEP. The CEP-dependent signals Q₁ and Q₂ oscillate out of phase with a phase shift of 92°, close to what is expected for perfect positioning of the laser focus, where the angle between the lines connecting the injection point with the two electrodes spans 90° [30]. As for stereo-ATI phase-meter measurements, Q₁ and Q₂ can be plotted parametrically as a function of their polar angle θ = arctan2(Q₂, Q₁) and radius r = √(Q₁² + Q₂²) (see Fig. 2 (b)). The quantity dr/r = 0.107 rad provides a lower limit for the uncertainty of the measurement in Fig. 2 (b) [36]. While the CEP is a monotonic function φ(θ) of the polar angle θ in the parametric plot of Fig. 2 (b), this function is not necessarily linear. Deviations from a linear relation may have different causes, including a slight ellipticity of the input pulse polarization and a focus that is not perfectly centered in the gap. This is analogous to the stereo-ATI phase meter, where the shape of the parametric plot depends on the exact experimental conditions, such as the position of the TOF integration gates [36]. Fortunately, in either case, the exact shape of the parametric plot is not important for the measurement, as the CEP can be retrieved from the polar angle via a rebinning procedure [14]. The latter relies on the assumption that all CEP values are equally probable within the CEP scan, which is well fulfilled in the present experiment.
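The per-shot processing chain (centering, normalization, polar angle, and the rebinning calibration detailed in the next paragraph) can be summarized in a few lines. The sketch below uses synthetic signals mimicking Fig. 2 (a); the function name retrieve_cep is our own, not from the paper.

import numpy as np

def retrieve_cep(q1, q2):
    # Center by subtracting the scan-averaged value and normalize in amplitude.
    q1 = (q1 - q1.mean()) / q1.std()
    q2 = (q2 - q2.mean()) / q2.std()
    theta = np.arctan2(q2, q1)
    r = np.hypot(q1, q2)
    # Rebinning: assuming all CEP values are equally probable over the scan,
    # the sorted polar angles map monotonically onto a linear CEP axis.
    order = np.argsort(theta)
    cep = np.empty_like(theta)
    cep[order] = np.linspace(-np.pi, np.pi, len(theta), endpoint=False)
    return theta, r, cep

# Synthetic shots with a linearly ramped CEP and noise, mimicking Fig. 2 (a);
# q1 and q2 are ~90 degrees out of phase (92 degrees measured in the experiment).
phi = np.linspace(0, 2 * np.pi, 1650)
rng = np.random.default_rng(1)
q1 = np.cos(phi) + 0.1 * rng.standard_normal(phi.size)
q2 = np.sin(phi) + 0.1 * rng.standard_normal(phi.size)
theta, r, cep = retrieve_cep(q1, q2)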
The dependence of the CEP on the polar angle is then simply obtained by sorting the polar angles in ascending order over the range of the scan and mapping them onto a linear CEP interval from −π to π. In order to determine the precision of the measurement, we compare in Fig. 3 (a) the retrieved CEP to its nominal value, which (for a perfectly stable CEP) is inferred from the known dispersion introduced in the stretcher. The latter is varied as a triangular function of time to generate a uniform CEP distribution between −π and π. The calibration function φ(θ) was determined for each oscillation period of the triangular waveform with the method described above. An upper limit for the uncertainty of the measurement is calculated as the standard deviation of the difference between the measured and the nominal CEP curves (shown in Fig. 3 (b)). For the data of Fig. 2 (b) we obtain an upper limit of 206 mrad. On longer time scales, the accuracy evolves from 211 mrad on the time scale of a few seconds (data of Fig. 3 (a)) to 356 mrad for an acquisition time of one minute. Even though the stability of the measurement and the signal-to-noise ratio can still be improved, the performance of the new CP-phase-meter is already comparable to that of the stereo-ATI phase-meter. While the f-2f technique only provides information on the CEP, the CP-phase-meter naturally yields information about the pulse duration. Importantly, the sensitivity of the measurement increases towards shorter pulse durations, while still supporting measurements with 10 fs pulses. This is illustrated in Fig. 4, where the signals Q₁ and Q₂ are plotted as a function of the pulse propagation distance through glass. The most important asset of the new technique, besides its striking simplicity, is its potential for single-shot CEP measurements at much higher repetition rates than achievable with today's techniques. Unlike the stereo-ATI phase meter, which is intrinsically limited to a few hundred kHz by the time-of-flight measurement, the new technique is only limited by the gain-bandwidth product of the amplifier. Given the 2 µs duration of the amplified current signal (cf. Fig. 1 (c)), the technique can readily be implemented at more than 100 kHz with commercially available integrators. We expect that further improvement of the signal-to-noise ratio by better shielding and tighter focusing will facilitate its implementation at MHz repetition rates.

SUMMARY AND CONCLUSION

We have demonstrated a simple implementation of the circular-polarization phase-meter, which enables single-shot CEP measurement with a precision of about 200 mrad. While the performance of our prototype is comparable to that of the widespread stereo-ATI phase-meter, its complexity is dramatically reduced, since it only consists of a centimeter-sized setup that works in ambient air. Since the measurement rate is only limited by the bandwidth of the current amplifier, the technique can easily be applied at repetition rates of 100 kHz and beyond. In addition, since the CP-phase-meter does not rely on the recollision process, it is also applicable at longer wavelengths. The new technique thus represents an appealing alternative to the rather complex ultra-high-vacuum apparatus used nowadays for single-shot CEP detection.
2019-08-20T16:35:21.000Z
2019-08-20T00:00:00.000
{ "year": 2019, "sha1": "5149b4f836028cc97ed6d0f25e52ee448bd78cac", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1364/optica.7.000035", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "0234ca2d6471cfbb0a39c5634b34ee2a78837a59", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
201406608
pes2o/s2orc
v3-fos-license
Expensive Childcare and Short School Days = Lower Maternal Employment and More Time in Childcare? Evidence from the American Time Use Survey

This study investigates the relationship between maternal employment and state-to-state differences in childcare cost and mean school day length. Pairing state-level measures with an individual-level sample of prime working-age mothers from the American Time Use Survey (2005–2014; n = 37,993), we assess the multilevel and time-varying effects of childcare costs and school day length on maternal full-time and part-time employment and childcare time. We find mothers' odds of full-time employment are lower, and part-time employment higher, in states with expensive childcare and shorter school days. Mothers spend more time caring for children in states where childcare is more expensive and as childcare costs increase. Our results suggest that expensive childcare and short school days are important barriers to maternal employment and, for childcare costs, result in greater investments in childcare time. Politicians engaged in national debates about federal childcare policies should look to existing state childcare structures for policy guidance.

State-to-state differences in childcare cost are a consequence of states' unique economic conditions, immigration patterns, and costs of living. While a thorough investigation of childcare costs at multiple levels (the state, the county, and the Metropolitan Statistical Area (MSA)) is warranted, the requisite data are not publicly available. Thus, we use state-level differences in preschool care costs and school day lengths as broader proxies for state-level childcare generosity. The role of school day length is understudied across these literatures. This omission is conspicuous given that the U.S. Constitution grounds responsibility for K-12 education in the state (Department of Education 2005), with 83 cents of every dollar coming from the state and local levels compared to only 8.3 cents from federal agencies (Department of Education 2005). This means state legislatures have important power to determine the structure and content of school education, with school days often reduced to redress state budgetary deficits (Dillon 2011). These decisions may have the unintended consequence of restricting maternal employment. The existing literature is clear: the absence of universal subsidized childcare remains a key institutional barrier to gender equality in maternal employment and childcare, nationally and globally (Cha and Weeden 2014; Gornick and Meyer 2003; Kelly, Moen, and Tranby 2011; Offer and Schneider 2011). Further, short school days may limit maternal employment. Yet no study to date has extended these models to weigh whether state-to-state variation in preschool and school-aged care resources structures mothers' employment and childcare time. Given that greater income inequality at the state level widens class-based gaps in investment in children (Schneider, Hastings, and LaBriola 2018), state context matters in explaining how parents invest their time. It follows that childcare time investments may also be conditioned by institutionalized barriers to maternal employment: expensive childcare costs and short school days. Further, no study to our knowledge has investigated how changes in childcare costs over a 10-year span affect maternal employment and childcare patterns.
To address these gaps, this study assesses two central questions: (1) Are mothers less likely to be employed, and likely to spend more time in childcare, in states where childcare is expensive and school days are short? and (2) Are increases in childcare costs over time associated with mothers' lower odds of employment? To test these questions, we pair individual-level data from the American Time Use Survey (ATUS) for prime working-age mothers (2005–2014, aged 18 to 59; n = 37,993) with two strategically selected state-level measures: childcare (preschool) cost (adjusted for married couples' median incomes) and school day length. First, we test whether some states are better at supporting working mothers than others by offering inexpensive childcare and long school days. Then, we test whether mothers living in states with more generous child and school-aged resources are more likely to work full- or part-time and to spend less time in childcare. By utilizing data that span a decade, we also test whether changes in childcare costs structure mothers' employment status and childcare time. This allows us to determine not only whether mothers' employment is sensitive to childcare costs but also whether increases in childcare cost, relative to median incomes, are detrimental to maternal employment outcomes and associated with more time in childcare. Our results highlight the importance of expansive child and school-aged care provisions for mothers' employment and domestic time, from which we draw concrete policy recommendations that should inform current policy debates about universal childcare.

Linking Child and School-Aged Care Resources to Maternal Employment

Existing state-level research on childcare costs largely focuses on the impact of one policy, Temporary Assistance for Needy Families, on barriers to maternal employment (welfare to work) (Blau and Tekin 2007; Cancian and Meyer 2004; Cancian et al. 2002; Gallagher et al. 1998; Zedlewski and Loprest 2001). At the individual level, state-subsidized and accessible childcare is shown to be essential for the transition from welfare to work (Schumacher and Greenberg 1999; Schumacher, Greenberg, and Duffy 2001). Meyers, Heintze, and Wolf (2002) show (looking at California alone) that expanding low-income mothers' access to state-subsidized childcare to 50 percent would increase their labor market entry by 75 percent. These studies indicate that expensive childcare is a barrier to maternal employment, but the assessment of Temporary Assistance for Needy Families, which is accessible only to low-income mothers, lacks broader generalizability. Assessing childcare costs more generally, others document that expensive childcare increases mothers' labor market exits and deters their employment reentries, with stronger effects among low-skilled, low-income, and single mothers (Blau and Robins 1989; Blau and Tekin 2007; Cleveland, Gunderson, and Hyatt 1996; Connelly 1992; Han and Waldfogel 2001; Hofferth and Collins 2000; Michalopoulos and Robins 2000; Ribar 1992). Mothers of preschool-aged children are most vulnerable to labor market exits (Gelbach 1999; Leibowitz, Klerman, and Waite 1992), the likelihood of which can be reduced through more generous childcare credits and subsidies (Connelly 1992; Han and Waldfogel 2001; Lefebvre and Merrigan 2008).
Indeed, maternal employment increased by 12 percentage points after Washington, D.C., began offering two years of universal preschool; 10 of the 12 percentage points (83 percent of the increase) are directly attributable to childcare expansion (Malik 2018). These studies indicate childcare costs are a key barrier to maternal employment and suggest that increases in childcare costs over time will be detrimental to mothers' odds of employment. At the state level, Capizzano (2000) provides a descriptive overview of informal and formal childcare across 12 states, demonstrating that formal center-based care is modal but that national averages mask important state variation. States vary dramatically in their childcare costs, quality, and availability, with cash assistance and universal pre-K being central policy recommendations to improve children's learning outcomes and mitigate gender inequality in maternal employment (New America Foundation 2018). Assessing childcare cost at the county level, Hofferth and Collins (2000) show employment exits for moderate-earning mothers are more common in counties with expensive and impacted childcare. Yet their reliance on 1989-1990 survey data from 144 of the 3,141 U.S. counties indicates a need for a more recent and comprehensive assessment. Further, our application of time use surveys allows for a direct examination of the relationship between state-level childcare resources and mothers' childcare time. We expect mothers will retreat to the home, spending more time in childcare and less in employment, in states with expensive childcare (relative to married couples' median earnings) and short school days. Further, we expect mothers will substitute their own time in childcare to mitigate the rising costs of childcare. Our application of time use data spanning a decade allows us to determine whether mothers' employment and childcare time are sensitive to childcare costs and to increases in cost relative to income, an expansion of existing research. We also expand upon existing research by extending our investigation beyond preschool-aged costs alone (see Han and Waldfogel 2001 for discussion). The structural barriers to maternal employment linger into children's school years, with the inability to synchronize work and school schedules cited as a core reason mothers reduce employment (Morgan 2006; Stone 2008; Williams 2010). Short school days may limit mothers' employment and, consequently, lead mothers to spend more time caring for children. In many states, kindergarten school days are only 3 hours per day (Education Commission of the States 2011), and the average American student spends roughly 6.7 hours in school per day and 180 days per year (National Center for Educational Statistics 2003). By contrast, the average full-time American worker spends 8.17 hours per day in employment (Bureau of Labor Statistics 2017), and without mandated (paid or unpaid) sick or holiday leave, parents are vulnerable to working 365 days per year (Travail 2012). The lack of synchronization between school and work schedules makes mothers vulnerable to labor market exits (Stone 2008). Existing research shows extending school days produces better student educational outcomes (Rivkin and Shiman 2013). Our research assesses whether longer school days have the additional benefit of being positively associated with maternal employment.
From these bodies of literature, we draw three clear hypotheses:

Hypothesis 1: Mothers will report lower odds of employment in states with short school days and in those with expensive preschool care.

Hypothesis 2: Mothers in states with expensive preschool care and short school days will spend more time in childcare than those in states with inexpensive childcare and long school days.

Hypothesis 3: Increases in childcare costs over time will be associated with lower odds of maternal employment and more time in childcare.

Data

To address these questions, we match repeated cross-sectional data over a 10-year period (2005–2014) with state-level measures of preschool cost and school day length. At the individual level, we use 2005–2014 data from the ATUS. The ATUS is a nationally representative sample of Americans' daily activities over a 24-hour period, sponsored by the Bureau of Labor Statistics and collected by the U.S. Census Bureau. We restrict our sample to mothers who currently have a school-aged child in the home (17 or younger) and who are of prime working age (18 to 59). This provides an effective sample size of 29,796 respondents from all 50 states and the District of Columbia. We then match these respondents with two state-level measures: the cost of full-time childcare as a percentage of the median family income for married couples and the average length of a school day. The childcare measures are from the Childcare Aware of America State Fact Sheets (2005–2014), which are derived from childcare referral agencies and state and federal government agencies in each state that collect data on the market rates of childcare services. We utilize the states' annual measures to provide estimates of childcare across states, and of changes within states over time, adjusted for married couples' median family incomes. For years when a state failed to report its childcare measures, we applied the previous year's data. We gathered school day length data from 2007 to 2014 but found little variability in average school day lengths across this period, and thus apply data from the earliest time point (Education Commission of the States 2007, 2008, 2011, 2014). To test the robustness of our models, we also estimated models that control for the mediating effects of state economic conditions using two measures derived from the U.S. census: (1) the unemployment rate, to account for elasticity in the labor market, and (2) the percentage of women in service jobs, to measure demand for female employment and the availability of a service market to offload housework and childcare. We found that unemployment and service concentration did not explain our childcare effects, and thus these models are not presented. Rather, our childcare (preschool) cost and school day length effects are robust to the inclusion of these controls.

Methods

Because the data are time variant, with years nested within states, we utilize multilevel models (Raudenbush and Bryk 2002). This modeling strategy permits an assessment of the relationship between state-level child and school-aged care resources and our outcomes of interest while accounting for the fact that individuals are nested within survey years and states. The measure of employment is analyzed through a multilevel multinomial logistic regression.
We choose multinomial logistic regression, as opposed to ordinal logistic regression, because ordinal logistic regression assumes that the associations between the independent and dependent variables are parallel across categories of the dependent variable. We tested this assumption, finding that the slope parameters vary significantly across categories of the dependent variable. The measures of time spent caring for children are analyzed through multilevel linear regression. This modeling strategy also allows us to parse out longitudinal versus cross-sectional effects. Following Moller, Alderson, and Nielsen (2009) and Moller et al. (2016), all state-level child and school-aged care resource variables are measured both longitudinally (deviations from state-specific means) and cross-sectionally (i.e., overall state means). All of our models are estimated using SAS. We ran a series of unconditional models for each dependent variable to determine the best error structure. Based on this analysis, we include a random effect for states that accounts for the nesting of individuals within states. We also attempted to include a random effect for time, but the model fit was inferior; instead, we included a fixed effect for time. We use Huber-White sandwich estimators (Huber 1967; White 1980, 1982) to relax assumptions of normality and homoscedasticity. This robust variance-covariance estimation relaxes the assumption of independence within groups and produces asymptotically unbiased (i.e., consistent) error terms. We also include a random slope for income in the linear models. All control variables at the individual and state levels are centered on their grand means to ease interpretation of our primary variables of interest. The models also apply the individual-level weights.

Measures

Dependent Variables

We analyze two dependent variables. The first examines mothers' current self-reported employment status. The ATUS asks respondents to report their current employment status through predetermined categories. We estimate models across these categories: currently not in the labor force, working part-time, and working full-time (omitted category). We estimate multinomial logistic regressions with those employed full-time as the comparison group to identify associations between state-level childcare measures and the odds of part-time work and of being out of the labor market. The second dependent variable measures mothers' time (minutes per day) in childcare on the diary day. Childcare measures time in primary childcare activities related to helping or caring for household children and all activities related to children's education or health (e.g., doctor's appointments, school conferences), but does not include childcare as a secondary, or simultaneous, activity as typically conceptualized in time use research (i.e., watching children while engaged in another primary activity, including employment). Instead, the ATUS measure of secondary childcare refers to time during a primary activity when a child younger than age 13 is "in the care of" the respondent. It thus includes time when mothers are not directly engaging with or even copresent with children, and thus may not be constrained by employment hours in the same ways as primary childcare activities. Because we want to clearly delineate how the cost of childcare and the length of the school day affect employment and direct, primary childcare activities, we exclude secondary childcare (Hofferth, Flood, and Sobek 2016).
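The longitudinal/cross-sectional decomposition described above amounts to splitting each state-year value into the state's overall mean (the between-state, cross-sectional component) and the deviation from that mean (the within-state, longitudinal component). A minimal sketch with invented state-year values (assuming pandas is available) illustrates the bookkeeping:

import pandas as pd

# Hypothetical state-year panel of childcare cost as a share of median income.
panel = pd.DataFrame({
    "state": ["NY", "NY", "NY", "MS", "MS", "MS"],
    "year":  [2005, 2006, 2007, 2005, 2006, 2007],
    "cost":  [14.2, 14.8, 15.3, 7.2, 7.5, 7.6],
})

# Cross-sectional component: the overall state mean (between-state variation).
panel["cost_between"] = panel.groupby("state")["cost"].transform("mean")

# Longitudinal component: the deviation from the state-specific mean
# (within-state variation over time).
panel["cost_within"] = panel["cost"] - panel["cost_between"]
print(panel)

Entering both components as predictors lets the between term capture differences across states and the within term capture changes inside a state over time, which is how the cross-sectional and longitudinal effects reported below are separated.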
The models were run as multilevel linear regression models, an approach consistent with previous time use research (Hook 2010).

Individual Controls

Given our focus on state-level child and school-aged care, we apply a range of individual measures as controls. Total family income is divided into five categories: less than $25,000 (reference category); $25,000–$59,999; $60,000–$99,999; $100,000–$149,999; and $150,000 or more. The lowest income category serves as our comparison group. We treat those missing on income as a distinct category rather than imputing their scores through multiple imputation, because the estimation of imputed data in multilevel models does not account for the nested structure and thus fails to account for the lack of independence in our data (Drechsler 2015; Van Buuren 2011). We include separate dummies for those with a college education or higher (value = 1) and those with some college (value = 1), with those holding a high school diploma or less (value = 0) as the comparison group, as we expect more highly educated mothers to be more likely to be employed, to spend less time in housework, and to spend more time in childcare (Hays 1996; Sayer 2005). Marital status is an important predictor of maternal employment and unequal housework loads, and thus we compare married mothers (value = 1) to divorced and single mothers (value = 0) (Spain and Bianchi 1996; Sayer 2005). We treat age categorically, comparing those aged 46 to 59 (value = 1) to those aged 18 to 45 (value = 0), as we expect the older cohort to report lower employment than the younger cohort (Goldin 2006; Percheski 2008). Race is controlled through a series of dichotomous measures including black, Hispanic, and other race/multiracial, with non-Hispanic white as the omitted category (Sayer and Fine 2011). We also include year dummies to account for the repeated cross-sectional structure of our data (2014 serves as the comparison group).

Descriptive Overview of Employment, Childcare Time, and State-level Measures

Table 1 provides a descriptive overview of key independent and state-level measures. States vary dramatically in their employment and childcare patterns. North Dakota, Delaware, Nebraska, and Vermont have some of the highest rates of maternal employment, with more than one in two mothers working full-time. By contrast, Idaho, Utah, California, and Washington have the lowest rates of full-time maternal employment, with only two out of five mothers employed full-time. Part-time work is concentrated in Utah, Massachusetts, Minnesota, and Wisconsin and is less common in Delaware, Wyoming, Vermont, and Arkansas. California, Arizona, West Virginia, and Idaho lead the nation with the highest concentration of mothers out of the labor force. Table 1 also shows state-level variation in childcare time. Mothers report the longest average time caring for children in West Virginia, Connecticut, New Hampshire, and the District of Columbia, at roughly an hour and a half per day. By contrast, mothers in Mississippi, South Carolina, Arizona, and Wyoming spend the shortest average time in childcare, at a little more than an hour per day. Turning to the state-level measures, childcare (preschool) is most expensive in many of the coastal states, such as New York, Massachusetts, and California, where it absorbs roughly 15 percent of married couples' median incomes. This is remarkable given that the median income for married couples in these states is generally higher than in some of the states with lower relative childcare costs.
Indeed, childcare costs are least expensive in many of the lower-income Southern states, including Mississippi, Louisiana, and Alabama, as well as many of the Plains states such as Nebraska, Iowa, and South Dakota. In these states, childcare costs absorb less than 10 percent of married couples' median incomes. In Mississippi, the least expensive state, childcare consumes only 7.5 percent of couples' median income, about half the share of income consumed by childcare costs in New York, the most expensive state. School days are also longest in many of the inexpensive-childcare states, including Louisiana, Alabama, Mississippi, and Nebraska, suggesting that inexpensive-childcare states also support mothers of school-aged children with longer average school days. By contrast, school days are shortest in the expensive-childcare states, including California, Washington, Hawaii, and Rhode Island. Collectively, these descriptive statistics indicate that maternal employment, childcare time, childcare costs, and school day lengths all vary across states.

Correlations across State-level Measures

The descriptive statistics suggest states may form distinct childcare cultures; Table 2 tests for significant correlations across these measures and their relationship to maternal employment. Consistent with the descriptive patterns, states with longer school days report less expensive childcare. These states also have a higher concentration of full-time working mothers. Mothers in states where childcare is more expensive spend more time caring for children, which is consistent with their lower odds of full-time employment. Of course, some of these relationships may be explained by individuals' sociodemographic characteristics. The subsequent section expands on these descriptive statistics by (1) testing these relationships net of individual-level controls and (2) adding estimates of changes in childcare cost over time.

Regression Results for Employment and Childcare

Table 3 provides the results from the multilevel regressions for employment and childcare. Model 1 includes childcare costs as a cross-sectional and longitudinal measure. Consistent with expectations, mothers' odds of working part-time are higher as childcare becomes more expensive (relative to median family incomes), an association significant only cross-sectionally. Since the comparison group is full-time employment, the results carry the inverse interpretation that mothers' odds of full-time employment are higher as childcare costs decrease. Model 2 includes school day length and shows that mothers are less likely to work part-time and more likely to work full-time in states where school days are longer. By contrast, mothers' odds of part-time (compared to full-time) work increase as school days shorten. Models 3 and 4 test whether expensive childcare and short school days are associated with mothers' time in childcare. Model 4 shows time in childcare is not associated with school day length, counter to expectations. By contrast, model 3 shows mothers spend more time caring for children in states where preschool care is more expensive. Given that mothers' odds of full-time work decrease as childcare becomes more expensive relative to income, these results suggest mothers may trade time in employment to absorb some of the cost of expensive preschool care.
In addition to the significant cross-sectional association, increases in childcare costs over time are associated with mothers' greater investment of time in caring for children. Taken together, these results suggest that mothers' time caring for children is sensitive not only to the overall cost of childcare but also to increases in its cost over time. This lends further evidence to the notion that mothers absorb some of the expense of childcare by increasing their own time caring for children at the expense of full-time work as costs increasingly squeeze family budgets. Further, that these associations vary significantly across U.S. states indicates that some mothers experience greater institutional barriers to employment than others. We tested whether these effects vary by the age of children (comparing those with preschool-aged to school-aged children) but did not find that our effects were limited to mothers of children in specific age groups. We note, however, that the division of our sample across states, time, and age of young children leaves many states with small sample sizes, weakening the power of our models. Thus, while we do not find significant effects by age of the child, we encourage researchers to take this into account in future research, as our models may suffer from weak estimation power.

Contextualizing the Relationship between Childcare Costs and Mothers' Time

To better contextualize the results for childcare cost and maternal employment, Figure 1 graphically depicts these relationships by presenting the odds of full-time employment (the excluded category) for states based on the average cost of childcare, holding other variables constant. Mothers in New York, where childcare costs are the highest in the nation, are less likely to work full-time than those in Mississippi, where childcare costs are more affordable. Although these effects appear small in size, they are relatively large considering national trends in mothers' employment. Specifically, on average, women's full-time employment as a share of their population declined from .43 in 1999 to .39 in 2010, rebounding to .42 in 2018 (Department of Labor 2018, 2019). The predicted probabilities across states have variability comparable to that found in national statistics between 1999 and 2018, as the predicted probability of full-time employment across states ranges between .43 and .48 (Department of Labor 2018, 2019). We further illustrate the results from the multinomial logistic regressions in Figure 2, which presents predicted probabilities based on model 2. This figure illustrates that the probability of full-time employment also varies across states by school day length. Indeed, Texas has the longest school day at 7.17 hours, while Washington has the shortest at 6.22 hours, with predicted probabilities ranging from .42 (Washington) to .49 (Texas). This is a considerable difference considering how little women's full-time employment has changed in recent history. Figure 3 extends Table 3, model 3, by contextualizing the childcare results as an annual measure. Consistent with mothers' employment patterns, mothers spend more time caring for children in states where preschool childcare is more expensive. Although the effects appear small (a one-minute increase in childcare time for every 1 percent increase in childcare costs), these differences, when aggregated across the year, have dramatic effects on mothers' time use.
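A back-of-envelope check of this aggregation, assuming an illustrative cost-share gap of roughly 8 points between a high-cost and a low-cost state (the exact gap is read from Table 1, not reproduced here):

# Annualizing the childcare-time coefficient: roughly one extra minute per day
# for each 1-point increase in the childcare cost share (model 3, as above).
minutes_per_point = 1.0      # coefficient described in the text
cost_gap_points = 8.0        # assumed high-vs-low-cost state gap (illustrative)

extra_minutes_per_day = minutes_per_point * cost_gap_points
extra_hours_per_year = extra_minutes_per_day * 365 / 60
print(f"{extra_hours_per_year:.0f} extra hours of childcare per year")  # ~49

This arithmetic is consistent with the roughly 50-hour annual difference discussed next.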
An average mother in New York spends roughly 50 hours more per year in childcare than a comparable mother in Louisiana. Thus, small daily time use impacts can have significant consequences over time.

Conclusion

In this article, we tested whether childcare (preschool) cost, adjusted by the median family income for married couples, and average school day length vary across states, and whether this variation is associated with maternal employment and childcare patterns. First, we identify significant differences in childcare and school-aged care resources across states. Specifically, we find childcare costs and school day length to be significantly correlated: on average, states with inexpensive childcare have longer school days. Thus, some states provide better resources to working parents (inexpensive childcare and longer school days) across children's life course than do others. The links between childcare cost and school day length, on the one hand, and maternal employment, on the other, underscore that decisions made at the state level have the potential to alleviate institutional barriers to maternal employment. We draw two main conclusions, highlighting the policy implications where appropriate. First, we find mothers' employment is sensitive to childcare cost. Individual-level studies confirm this assessment and point to reductions in childcare cost as a key motivator of maternal employment, especially for those transitioning from welfare to work (Blau and Robins 1989; Han and Waldfogel 2001; Hofferth and Collins 2000; Michalopoulos and Robins 2000; Ribar 1992). Our results show that childcare costs at the state level are also a barrier to maternal employment, with mothers reporting lower odds of full-time and higher odds of part-time employment in states with expensive childcare. Our results suggest mothers use part-time employment to redress expensive childcare costs. We find an equivalent pattern in short-school-day states, indicating that institutional barriers to maternal employment are not isolated to the preschool years. Through these analyses we address Han and Waldfogel's (2001) call to expand beyond childcare-alone estimates of mothers' experiences and document the care barriers to maternal employment that linger into the school years. Thus, getting children into school is not the solution to mothers' carer-career problems in state markets where childcare is expensive. We also find mothers spend more time caring for children in states where childcare is expensive and as childcare costs increase during our sampled time frame (2004–2015). This suggests that mothers may absorb childcare time rather than outsource it to the market as childcare becomes more expensive. Mothers are more likely to work part-time than full-time in states with expensive childcare, indicating that childcare costs are a strong barrier to maternal employment. As childcare costs, like other living costs, continue to increase over time, we may see mothers increasingly knocked out of the labor market, providing direction for future researchers and policy makers alike. This suggests that, in addition to lowering total costs, regulating the growth of childcare costs over time may be integral to facilitating maternal employment. This study is not without limitations. The first is our reliance on cross-sectional data, which do not allow us to identify clear causal mechanisms across levels.
We find strong evidence that institutional barriers to child and school-aged care are associated with limited maternal employment. Yet whether limited state-level resources block women's reentry into the labor force at critical junctures (after the birth of a child or when children reach school age) or affect maternal employment decisions consistently across the life course is beyond the scope of these data. Also, the multilevel design limits our modeling strategy, as we have a relatively small sample of U.S. states (50 plus the District of Columbia). Our application of 51 level-2 units is appropriate (Bryan and Jenkins 2015) and expands upon smaller country-to-country comparisons in cross-national research (Batalova and Cohen 2002; Fuwa 2004; Heisig 2011; Hook 2006; Knudsen and Waerness 2008; Ruppanner 2010), supporting our modeling strategy. But a larger state-level sample might provide additional significant results. A further complication of our state-level design is that some of our measures vary significantly within U.S. states. For example, childcare cost and demand in metropolitan areas (New York City or San Francisco) may be higher than in less urban areas, suggesting that a more detailed level-2 unit may be more appropriate (e.g., MSA-level measures). This poses some limitations to our analyses, as the effects we identify at the state level may be more pernicious when estimated at a lower unit of analysis (e.g., the MSA level). Thus, our models may underestimate the impact of childcare cost and demand on maternal domestic and employment time, especially in high-cost cities. Existing studies estimating lower-level units apply the Profile of Childcare Settings data collected across 144 counties in 1990 and find that expensive and impacted childcare increases maternal labor market exits (Hofferth and Collins 2000). However, these data have yet to be updated; the requisite childcare data for all 3,007 U.S. counties or 382 MSAs are not publicly available. While we acknowledge these limitations, our article is one step in the direction of understanding state-level barriers to maternal employment. This limitation points to a data void and provides directions for future research. The implications of our results for policy and future research are clear. Low levels of institutionalized child and school-aged resources place barriers on maternal employment. Ample previous research documents these patterns for childcare cost, especially among low-income mothers and those with preschool-aged children (Blau and Robins 1989; Connelly 1992; Gelbach 1999; Han and Waldfogel 2001; Hofferth and Collins 2000; Leibowitz et al. 1992; Ribar 1992). Our study shows these relationships are robust at the state level and that the impact of limited care spans children's life course. Thus, getting children into school is not the silver bullet that solves the childcare-employment dilemma. Fortunately, states are in a unique position to legislate school schedules, which may, as our study suggests, have the additional benefit of facilitating maternal employment. Longitudinal data collection estimating maternal employment before and after policy changes could directly test for causality. Further, taking steps to lower childcare costs may increase maternal employment rates. In this regard, enacting policies more consistent with the universal provisions of European welfare states may increase mothers' labor market attachment.
In light of these findings, questions about cost, safety, and quality remain: states with expensive childcare may provide a higher-quality product, increasing the safety of young children in care. Alternatively, mothers may look to cheaper and less-regulated alternatives in expensive-childcare states. These relationships are likely exacerbated by class inequality, underscoring that regulating childcare cost and providing high-quality care to families across all income brackets is a public safety issue that requires more scrutiny and research. Ultimately, we provide a clear argument that states are important actors in mothers' employment and that the institutional barriers to equalizing mothers' domestic and economic time span children's early and school years. Our insights point to clear avenues for policy mechanisms to reduce gender inequality in maternal employment.

Table: Correlations between Mean State-Level Employment and Childcare Time and Child and School-Aged Childcare Resources. Note: All models include the full set of individual controls, including partners' weekly wages (highest quartile omitted), married, year (2014 omitted), age, child 6 to 12 in the home (child 5 and younger omitted), education (high school omitted), and race (non-Hispanic white omitted). All models also control for the unemployment rate at the state level. Dashes indicate not estimated. * p < .05.
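A minimal sketch of the kind of state-level correlation the table reports (one observation per state); the data file and column names are hypothetical stand-ins, and the published models additionally include the full set of controls listed in the note above.

```python
import pandas as pd
from scipy.stats import pearsonr

# One row per state: mean childcare cost (relative to married-couple
# median income) and mean school day length in hours. Column names are
# assumed stand-ins for the measures described in the text.
states = pd.read_csv("state_measures.csv")
r, p = pearsonr(states["childcare_cost_pct"], states["school_day_hours"])
# The article reports a significant correlation: states with
# inexpensive childcare tend to have longer school days.
print(f"r = {r:.2f}, p = {p:.3f}")
```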
Prospective, randomized, single-blinded, multi-center phase II trial of two HER2 peptide vaccines, GP2 and AE37, in breast cancer patients to prevent recurrence

Purpose AE37 and GP2 are HER2-derived peptide vaccines. AE37 primarily elicits a CD4+ response while GP2 elicits a CD8+ response against the HER2 antigen. These peptides were tested in a large randomized trial to assess their ability to prevent recurrence in HER2 expressing breast cancer patients. The primary analyses found no difference in 5-year overall disease-free survival (DFS) but possible benefit in subgroups. Here, we present the final landmark analysis. Methods In this 4-arm, prospective, randomized, single-blinded, multi-center phase II trial, disease-free node positive and high-risk node negative breast cancer patients enrolled after standard of care therapy. Six monthly inoculations of vaccine (VG) vs. control (CG) were given as the primary vaccine series, with 4 boosters at 6-month intervals. Demographic, safety, immunologic, and DFS data were evaluated. Results 456 patients were enrolled: 154 patients in the VG and 147 in the CG for AE37, and 89 patients in the VG and 91 in the CG for GP2. The AE37 arm had no difference in DFS as compared to the CG, but pre-specified exploratory subgroup analyses showed a trend towards benefit in advanced stage (p = 0.132, HR 0.573 CI 0.275–1.193), HER2 under-expression (p = 0.181, HR 0.756 CI 0.499–1.145), and triple-negative breast cancer (p = 0.266, HR 0.443 CI 0.114–1.717). In patients with both HER2 under-expression and advanced stage, there was significant benefit in the VG (p = 0.039, HR 0.375 CI 0.142–0.988) as compared to the CG. The GP2 arm had no significant difference in DFS as compared to the CG, but on subgroup analysis, HER2 positive patients had no recurrences, with a trend toward improved DFS (p = 0.052) in the VG as compared to the CG. Conclusions This phase II trial reveals that AE37 and GP2 are safe and possibly associated with improved clinical outcomes of DFS in certain subgroups of breast cancer patients. With these findings, further evaluations are warranted of AE37 and GP2 vaccines given in combination and/or separately for specific subsets of breast cancer patients based on their disease biology.

Introduction

Despite progress via early detection and improved treatment, breast cancer recurrence remains a significant problem. Immunotherapy shows promise in the treatment of multiple cancers and may further improve outcomes in breast cancer. Increasing evidence suggests breast cancer is more immunogenic than once realized, particularly given the important prognostic role that the host immune response and tumor microenvironment play [1][2][3]. Immune-mediated surveillance and clearance of disease likely play an important role in preventing recurrence in clinically disease-free patients after standard of care therapy. Cancer vaccines may help generate tumor-specific immunity to prevent disease recurrence. HER2 is a tumor-associated antigen expressed at some level in 60-70% of breast cancers, over-expressed in 20-30% of patients, and is one potential target for breast cancer vaccines [4,5]. Monoclonal antibodies targeting HER2 provide clinical benefit, at least in part due to an immunologic mechanism in HER2 over-expressing breast cancer [6]. Likewise, vaccines targeting immunogenic HER2 peptides may provide benefit via immune-mediated cancer cell elimination.
The HER2-specific vaccine nelipepimut-S (HER2 369-377, E75, NeuVax) is a human leukocyte antigen (HLA) A2 restricted, major histocompatibility complex (MHC) class I, dominant epitope derived from the extracellular domain of HER2, and has been evaluated in the adjuvant setting to prevent breast cancer recurrence in women rendered clinically disease-free after standard-of-care therapy [7][8][9]. Nelipepimut-S induces a CD8+ cytotoxic T-lymphocyte (CTL) response to HER2. In phase II trials, nelipepimut-S was found to be safe, effective in raising HER2-specific immunity, and showed evidence of improved disease-free survival [8]. However, a phase III trial of nelipepimut-S in the adjuvant setting was stopped early for futility [10]. A single, dominant CD8+ CTL targeted epitope may not be an effective strategy for all breast cancer patients. Just as distinct biologic subtypes of breast cancer are better served by different conventional therapies, they may also benefit from unique vaccine strategies. Thus, it may be beneficial to explore additional strategies, such as MHC class II epitopes that stimulate a CD4+ T helper cell response [11][12][13] and subdominant epitopes, which may be less prone to T-cell anergy from persistent antigen exposure [14]. One of our efforts to explore additional vaccination strategies with broader applicability is the AE37 peptide vaccine. AE37 is an MHC class II peptide that is a modified version of the naturally occurring AE36 wild-type peptide (HER2 776-790) derived from the intracellular domain of HER2, with the addition of the 4 amino acid long Ii-Key peptide (LRMK). The Ii-Key peptide is added to enhance the immunogenicity of AE36 [15]. Additionally, AE37 is HLA unrestricted, allowing it to be used in a broader population of patients. In another effort to avoid over-stimulation and anergy, we have tested a subdominant immunogenic peptide, GP2 (HER2 654-662). GP2 is an HLA-A2 restricted immunogenic peptide derived from the transmembrane domain of HER2. While GP2 has a lower affinity for HLA-A2 than nelipepimut-S, it has been shown to be as efficacious as nelipepimut-S in inducing a CTL response [16]. We have previously published primary analyses from our large randomized trial of the AE37 and GP2 peptides. There was no demonstrable difference in 5-year overall disease-free survival (DFS) in the intention-to-treat populations. However, there was evidence of benefit in subgroups of breast cancer patients [7,17,18]. Here, we present the final analysis of the primary endpoint of DFS with additional follow-up, as well as an additional per-treatment analysis, along with comprehensive pre-specified subset analyses for both the GP2 and AE37 peptide vaccines used in a randomized controlled trial of breast cancer patients with any level of HER2 expression who were clinically disease-free and at high risk for recurrence.

Patient characteristics and clinical protocol

The study was designed as a 4-arm, prospective, randomized, single-blinded, multi-center phase II trial (NCT00524277), conducted under the investigational new drug applications BB-IND #12229 and #11730. Clinically disease-free node positive and high-risk node negative breast cancer patients were enrolled one to six months after completion of primary standard of care therapy, with the exception of adjuvant endocrine therapy, which was allowed concurrently.
High-risk node negative patients were defined as having any of the following: ≥ T2 tumor, grade 3, lymphovascular invasion, estrogen or progesterone receptor negativity, a HER2 over-expressing tumor (IHC 3+ and/or amplified FISH > 2.0, before the CAP/ASCO guideline changes), or N0 (i+) disease, in breast cancer patients with any level of HER2 expression (IHC 1-3+ and/or positive FISH > 1.2). HLA-A2 positive patients were assigned to either the GP2 or AE37 arms of the trial, both given in combination with granulocyte-macrophage colony-stimulating factor (GM-CSF) in the treatment groups or GM-CSF alone in the control group. HLA-A2 negative patients were randomized to receive either AE37 in combination with GM-CSF in the treatment group or GM-CSF alone in the control group. The primary objectives of this study were to determine if AE37 in combination with GM-CSF vaccination improves the DFS of any level HER2 expressing, node positive or high-risk node negative breast cancer patients, and to determine if GP2 in combination with GM-CSF vaccination improves the DFS of any level HER2 expressing, HLA-A2 positive, node positive or high-risk node negative breast cancer patients. In addition, DFS was compared between all four arms of the trial. In our previous trials with nelipepimut-S, the rate of recurrence was 15% in controls compared with 6% in vaccinated patients at a median follow-up of 2 years [9]. Based on these data, the trial was designed to detect a 0.48 hazard ratio, corresponding to an improvement in 2-year DFS from 85% for GM-CSF control to 93% for vaccine (AE37 and GP2). A sample size of 150 subjects per group had 80% power to detect the difference at a 1-sided alpha level of 0.05 using a log-rank test for equality of survival curves. The total number of events required to achieve the specified power was 33. A sample size of 100 subjects per group would have the same power to detect a statistical difference between groups with a hazard ratio of 0.35. Twenty-five HLA-A2 positive patients who were assigned to the AE37 arm of the trial and randomized to the control group were included in the analysis of both the AE37 and GP2 arms as controls. Their inclusion is justified based on an evaluation of clinical outcomes in the control patients confirming that HLA-A2 status does not affect DFS regardless of HER2 expression [19].
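For orientation, the Schoenfeld approximation is one standard way to translate a target hazard ratio, alpha, and power into a required number of events for a log-rank comparison; the sketch below implements it. Under the stated inputs (HR 0.48, 1-sided alpha 0.05, 80% power, 1:1 allocation) this generic formula returns roughly 46 events, so the protocol's stated 33 events evidently rests on its own design-specific assumptions; the sketch illustrates the form of such a calculation rather than reproducing the trial's.

```python
from math import log
from scipy.stats import norm

def schoenfeld_events(hr, alpha=0.05, power=0.80, one_sided=True, p=0.5):
    """Approximate events needed for a log-rank test (Schoenfeld formula).

    hr: target hazard ratio; p: allocation fraction in one arm.
    """
    z_a = norm.ppf(1 - alpha) if one_sided else norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return (z_a + z_b) ** 2 / (p * (1 - p) * log(hr) ** 2)

print(round(schoenfeld_events(0.48)))  # ~46 under these generic assumptions
```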
Vaccine and vaccination series

The GP2 and AE37 peptides were created in keeping with good manufacturing practices and purified to > 95%. Sterility, endotoxin, and general safety testing were performed prior to administration. Six inoculations were given at 3-4 week intervals, administered intradermally and consistently in the same lymph node distribution (same arm or thigh) in each patient. Patients in each treatment arm received 500 mcg of the peptide and 125 mcg of GM-CSF, while the control arm received 125 mcg of GM-CSF alone. After the initial 6 inoculations, patients were given a total of 4 booster inoculations at six-month intervals beginning one year after each subject's date of enrollment (at 12, 18, 24, and 30 months).

Clinical recurrence of disease

The patients' primary physicians determined recurrence, the primary endpoint, at their individual study sites during routine follow-up. All enrolled patients were evaluated every 3 months for the first 24 months after completion of primary therapies, and every 6 months for an additional 36 months, with clinical exam, laboratory, and radiographic surveillance as per standard of care. All enrolled patients were followed for clinical recurrence for up to 5 years; one site offered extended voluntary follow-up beyond 5 years for the patients randomized into the AE37 arm of the trial.

Statistical analysis

Patients were stratified by site and by nodal status, then randomized into treatment groups in a 1:1 allocation ratio. Clinicopathologic data were compared between groups, with the median and interquartile range used to summarize age. The groups were then compared using analysis of variance techniques. Categorical variables were summarized with frequencies and proportions. Groups were compared using a two-sided Fisher's exact test and forest plot. DFS was calculated from randomization date to recurrence date or death due to any cause. Data were censored by the date of last contact. DFS was analyzed using the Kaplan-Meier method with log-rank comparisons. Cox proportional hazards models were used to estimate hazard ratios and 95% confidence intervals for the relative risk of recurrence or death between arms. Per-treatment (PT) analyses excluded patients with early recurrences (before completion of the primary vaccine series) and those who developed second malignancies.
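The survival analysis just described (Kaplan-Meier estimates with log-rank comparisons, plus a Cox model for hazard ratios) can be sketched in Python with the lifelines package; the data file and column names below are hypothetical, and lifelines is not the software used in the trial.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("dfs.csv")  # hypothetical columns: months, recurred, arm
df["vaccine"] = (df["arm"] == "vaccine").astype(int)
vax, ctl = df[df["vaccine"] == 1], df[df["vaccine"] == 0]

# Kaplan-Meier estimate of DFS for one arm.
kmf = KaplanMeierFitter()
kmf.fit(vax["months"], event_observed=vax["recurred"], label="vaccine")
print(kmf.survival_function_at_times(60))  # 5-year estimated DFS

# Log-rank comparison between arms.
lr = logrank_test(vax["months"], ctl["months"],
                  event_observed_A=vax["recurred"],
                  event_observed_B=ctl["recurred"])
print(lr.p_value)

# Cox proportional hazards model for the HR and its 95% CI.
cph = CoxPHFitter()
cph.fit(df[["months", "recurred", "vaccine"]],
        duration_col="months", event_col="recurred")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```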
Demographics

A total of 456 patients were enrolled at 16 sites throughout the United States between 2007 and 2013. Both HLA-A2 positive and negative patients were eligible for enrollment in the AE37 arm. For a portion of the enrollment period, all patients were enrolled in the AE37 arm, while during the remainder, HLA-A2 negative patients were enrolled in the AE37 arm and HLA-A2 positive patients were enrolled in the GP2 arm. In the AE37 arm, patients were randomized to receive either AE37 in combination with GM-CSF (n = 154 total; HLA-A2 positive n = 24, HLA-A2 negative n = 130) or GM-CSF alone (n = 147 total; HLA-A2 positive n = 25, HLA-A2 negative n = 122). A total of 180 HLA-A2 positive patients were randomized to receive either GP2 in combination with GM-CSF (n = 89) or GM-CSF alone (n = 91). Twenty-five HLA-A2 positive GM-CSF only control patients initially enrolled in the AE37 arm were included as control patients in the GP2 arm (Fig. 1). The per-treatment analysis excluded patients with second malignancy and early recurrence. Within the GP2 PT analysis there were 10 patients excluded: 6 from the vaccine group and 4 from the control. Within the AE37 PT analysis there were 17 patients excluded: 12 from the vaccine group and 5 from the control. There were no significant clinical or pathologic differences between the treatment and control groups for either the GP2 (Table 1) or AE37 (Table 2) arms.

Safety

The vaccines were well tolerated, with no differences in maximal local (GP2 p = 0.558, AE37 p = 0.067) or systemic (GP2 p = 0.898, AE37 p = 0.341) toxicities in either the GP2 (Fig. 2a) or AE37 (Fig. 2b) arms as compared to the controls; this is unchanged from previous reports [7,17]. A majority of the adverse events were grade 1 in nature; there were no related toxicities greater than grade 3. The similar toxicity profiles between the treatment and control groups across both the GP2 and AE37 arms indicate that the majority of the toxicity can be attributed to the immunoadjuvant, GM-CSF.

Disease-free survival

At the time of the final analysis of the GP2 portion of the trial, the median follow-up was 41.4 (interquartile range [IQR] 24.8-59.2) months for the intention-to-treat (ITT) group and 41.7 (IQR 28.4-59.2) months for the per-treatment (PT) group; this was approximately 6 months longer than in the primary analysis [17]. Similar to the primary analysis, there was no significant difference in 5-year estimated DFS between the vaccine and control arms in either the ITT (Fig. 3a) or PT (Fig. 3b) analyses. Upon pre-specified exploratory subgroup analyses of histopathologic, patient, and treatment related characteristics, the HER2 over-expressing patients appeared to derive the greatest benefit from vaccination, as there were no recurrences (Fig. 4). In addition, there was a trend toward significant improvement in 5-year estimated DFS among HER2 over-expressing patients receiving GP2 vaccine versus control (100% vs 87.2%, p = 0.052, Fig. 3c). In the final analysis of the AE37 portion of the trial, the median follow-up was 59.8 (IQR 37.5-61.7) months for the ITT and 59.9 (IQR 37.9-63.4) months for the PT groups; this was approximately 30 months longer than in the primary analysis [7]. Similar to the primary analysis, there was no significant difference in 5-year estimated DFS between the vaccine and control arms in the ITT (80.1% vs 79.3%, p = 0.968, HR 0.989 CI 0.588-1.665, Fig. 5a) or PT (88.6% vs 82.8%, p = 0.485, HR 0.799 CI 0.425-1.501, Fig. 5b) analyses. Pre-specified exploratory subgroup analyses by histopathologic, patient, and treatment related characteristics showed a trend towards benefit in patients with advanced stage (defined as stage IIB or greater), HER2 under-expression (HER2 UE, defined as HER2 expression IHC 1-2+ and/or positive FISH 1.2-2.0), and triple-negative breast cancer (TNBC, Fig. 4). This trend was likewise present in 5-year estimated DFS within the advanced stage (AE37 85.7% vs Control 72.5%, p = 0.132, HR 0.573 CI 0.275-1.193, Fig. 5c), HER2 under-expression (p = 0.181, HR 0.756 CI 0.499-1.145, Fig. 5d), and TNBC (p = 0.266, HR 0.443 CI 0.114-1.717, Fig. 5e) subgroups. In a post hoc analysis of patients with both advanced stage and HER2 under-expression, there was a significant improvement in DFS favoring the vaccine group (AE37 83.0% vs Control 62.5%, p = 0.039, HR 0.375 CI 0.142-0.988, Fig. 5f). There was a similar trend towards benefit (Fig. 5g).

Fig. 1 Consort diagram. § 25 HLA-A2 positive patients were used as controls in both groups. * 1 patient with a second malignancy in the AE37 control arm withdrew and is also included in the 7 withdrawals. ^ Defined as patients that did not complete the PVS: patients that withdrew, met the primary end point (recurrence, second malignancy, or death from any cause), or chose not to continue on study before completing the PVS.

Discussion

Here, we report the results of a multi-center, randomized, blinded, controlled phase II trial of the peptide vaccines GP2 and AE37 as adjuvant therapy in women with high-risk breast cancer to prevent recurrence. We found that both the CD8+ CTL-eliciting GP2 and the CD4+ T helper cell-eliciting AE37 vaccines are safe, with limited toxicity that is primarily due to the GM-CSF immunoadjuvant and not the individual peptides [7,17]. Furthermore, while overall DFS does not appear to be improved in the ITT population, multiple subsets may derive some benefit based on pre-specified exploratory analyses. Interestingly, the patient subsets responding to GP2 and AE37 are very different, suggesting the potential to target specific patients and/or to combine the peptides to address a broader patient population.
A current limitation of this analysis is its per-treatment nature, possibly affecting the external validity of the data, although the number of patients excluded from the ITT in the PT analysis was small (n = 27, 5.9% of patients overall). Even with the exclusion of these early recurrence and second malignancy patients, there were still no significant differences between the group demographics. It has long been known that subtypes of breast cancer have different levels of responsiveness to chemotherapy, hormonal therapy, and HER2-directed therapy. Breast cancer subtypes similarly have distinct immunologic characteristics. The TNBC subtype appears to have the greatest amount of immune infiltration, followed by the highly proliferative estrogen receptor positive subtype. Meanwhile, the low-grade, estrogen receptor positive, luminal A subtype appears to have the lowest infiltration rate [20]. This recognition is of increasing clinical importance, not only because of the recent rise of immunotherapy to the forefront of cancer care, but also because tumor immune characteristics can be prognostic in breast cancer [21]. The data from this analysis suggest that the GP2 peptide vaccine may be beneficial in patients with HER2 over-expressing tumors who received trastuzumab as part of their standard of care treatment. This supports the hypothesis that GP2 may have synergistic clinical efficacy when combined with trastuzumab [22]. Previous preclinical work by Mittendorf et al. demonstrated that HER2 receptors on the tumor cell surface can be saturated by treatment with trastuzumab, promoting internalization in a time- and dose-dependent manner. Trastuzumab increased the sensitivity of the tumor cells to CTL-mediated lysis after stimulation with either nelipepimut-S or GP2, even in patients with low levels of HER2 expression. Interestingly, they also found that peripheral blood lymphocytes lyse trastuzumab-treated breast cancer cells more efficiently after nelipepimut-S vaccination. We are currently investigating the possibility of a synergistic immunologic effect when nelipepimut-S is given in combination with trastuzumab in a phase II trial in HER2 over-expressing patients (3+ by IHC; NCT02297698). We recently completed a trial in low-expressing HER2 (IHC 1-2+) patients and found the greatest clinical benefit in DFS in patients with TNBC, suggesting a synergistic mechanism in this population [23]. In addition to the potential synergy with trastuzumab, GP2 may be inherently more effective in the HER2 over-expressing population. Because these patients have increased HER2 expression, their immune systems have greater exposure to this tumor-associated antigen. Given that GP2 is a subdominant epitope of HER2, there may be less immune tolerance to this epitope than to a dominant epitope such as nelipepimut-S. This likely allows GP2 to be more effective in HER2 over-expressing disease, whereas nelipepimut-S may be more effective in HER2 low-expressing patients [9]. In the AE37 arm of this trial, we found that patients with advanced stage, HER2 under-expression, or TNBC may benefit from AE37 vaccination, and that those with both advanced stage and HER2 under-expression derive a significant clinical benefit from AE37 vaccination, specifically demonstrating an earlier DFS plateau that was maintained for up to ten years of follow-up. AE37 has been shown to induce CD4+ T helper cell stimulation, which is required for the effective generation of long-term cell-mediated immunity [24,25].
Given that the primary response to AE37 is not a CTL response but instead a CD4+ T helper cell response, the AE37 vaccine may have more of an immunoadjuvant effect on a pre-existing immune response within the patient. AE37 can also augment a vaccine-induced CTL response. Gates et al. demonstrated that the primarily CD4+ T helper cell-stimulating AE37 peptide vaccine may increase the number of activated CD8+ CTLs [26], and Perez et al. demonstrated that vaccination with AE37 primes not only CD4+ T cells but also CD8+ T cells, and is able to induce CD8+ responses to both AE36 and AE37 in cancer patients [27]. AE37 is able to directly stimulate the HLA-DR alleles with epitopes present in the HER2 protein. The immunologic effect of AE37 vaccination has also been shown to increase IFN-γ+ CD4+ responder cells, which in turn assist in strong in vivo and in vitro autologous CTL lysis of tumor cells [28,29]. Thus, the addition of the Ii-Key in AE37 specifically enhances immune responses via the MHC class I pathway [27]. The stimulation of both CD4+ T helper cell and CD8+ CTL responses suggests that the AE37 peptide vaccine may also have a synergistic effect in combination with other short peptide vaccines, which work primarily in a CD8+ CTL mediated fashion. A similar finding was demonstrated with a HER2 peptide-derived vaccine on a dendritic cell platform that stimulates both CD8+ and CD4+ T cells. This MHC class II vaccine was also given in combination with anti-PD-1 therapy and demonstrated improved survival in a preclinical model [30]. It is worth exploring future trials of combinations of checkpoint inhibitors and peptide-based vaccine strategies to improve DFS. While the potential for a CTL-mediated, anti-tumor cytolytic effect via peptide vaccines like nelipepimut-S and GP2 certainly provides promise as a potential stand-alone weapon in the fight against cancer, the CTL effects are limited temporally given the naturally transient course of such cytotoxic immune responses. Thus, a vaccine that combines CTL- and T helper cell-targeted peptides may not only induce the more immediate CTL-mediated cytolytic response against any occult residual disease, but also induce T helper cell-mediated long-term immunologic memory to protect against tumor recurrence.

Conclusion

From checkpoint inhibitors to peptide vaccines, cancer immunotherapy is becoming ever more intricate as our understanding of subtypes of malignancies improves and we better understand how we can help the body's own defense system fight active disease and prevent recurrences. Ultimately, our goal is to advance the field of personalized immunotherapies based on a patient's specific disease characteristics. While neither vaccine demonstrated a statistically significant DFS benefit in the overall study population, there are signals of benefit in certain subpopulations of breast cancer patients. This is, perhaps, not surprising given distinct differences in prognosis and treatment response among the different subtypes of breast cancer. This is reflected in our data, which suggest that different peptide vaccine strategies may be required to achieve clinical benefit for distinct subtypes of the same malignancy. Given our encouraging findings, additional randomized trials of the GP2 and AE37 peptide vaccines, given independently for specific subsets as well as in combination, are warranted.
Differences in colonic crypt morphology of spontaneous and colitis-associated murine models via second harmonic generation imaging to quantify colon cancer development

Background Colorectal cancer remains the second leading cause of cancer death in the United States, and the increased risk in patients with ulcerative colitis (a subset of inflammatory bowel disease) has motivated studies into early markers of dysplasia. The development of clinically translatable multiphoton imaging systems has allowed for the potential of in vivo label-free imaging of epithelial crypt structures via autofluorescence and/or second harmonic generation (SHG). SHG has been used to investigate collagen structures in various types of cancer, though the changes that colorectal epithelial collagen structures undergo during tumor development, specifically in colitis-associated tumors, have not been fully investigated. Methods This study used two murine models in A/J mice, one of spontaneous carcinoma and one of colitis-associated carcinoma, to investigate and quantify SHG image features that could potentially inform future study designs of endoscopic multiphoton imaging systems. The spontaneous tumor model comprised a series of six weekly injections of azoxymethane (AOM model). The colitis-associated tumor model comprised a single injection of AOM, followed by cycles of drinking water with dissolved dextran sodium sulfate salt (AOM-DSS model). SHG images of freshly resected murine colon were acquired with a multiphoton imaging system, and image features, such as crypt size, shape, and distribution, were quantified using an automated algorithm. Results The comparison of quantified features of crypt morphology demonstrated the ability of our quantitative image feature algorithms to detect differences between spontaneous (AOM model) and colitis-associated (AOM-DSS model) murine colorectal tissue specimens. There were statistically significant differences in the mean and standard deviation of nearest neighbor distance (distance between crypts) and circularity among the Control, AOM, and AOM-DSS cohorts. We also saw significance between the AOM and AOM-DSS cohorts when calculating nearest neighbor distance in images acquired at fixed depths. Conclusion The results provide insight into the ability of SHG imaging to yield relevant data about the crypt microstructure in colorectal epithelium, specifically the potential to distinguish between spontaneous and colitis-associated murine models using quantification of crypt shape and distribution, informing future design of translational multiphoton imaging systems and protocols. Electronic supplementary material The online version of this article (10.1186/s12885-019-5639-8) contains supplementary material, which is available to authorized users.

Background

Inflammatory bowel disease (IBD) is characterized by chronic inflammation of the gastrointestinal tract and is manifested clinically as either ulcerative colitis (UC) or Crohn's disease [1]. While there is currently no specific histological feature that is used to definitively distinguish between UC and Crohn's disease, an important distinction is the distribution pattern of the inflammation. UC is characterized by severe inflammation beginning at the rectum and continuing throughout the colon; Crohn's disease is characterized by lesions over Peyer's patches and non-continuous regions of inflammation that span the entire depth of the intestinal wall [2][3][4].
Patients with UC exhibit an increased risk for colorectal cancer (CRC); approximately 29% of UC patients develop CRC within 10 years of diagnosis, while 2.9% of patients with Crohn's disease develop CRC after 10 years of disease [5][6][7][8]. There are different types of lesions that may be detected in a UC patient during colonoscopy: two types associated with intraepithelial neoplasia are the dysplasia-associated lesion/mass (DALM) and the adenoma-like mass (ALM) [9,10]. Flat or raised lesions without sharp contrast (delineation) against non-dysplastic epithelium are typically classified as DALMs and are frequently associated with malignancy [10]. ALMs have features similar to sporadic adenomas that are observed in non-IBD patients [11]. Preliminary studies have suggested that standard polypectomy (complete endoscopic resection of the ALM mass) followed by endoscopic surveillance is highly successful in patients with an ALM and could spare the patient a colectomy: of the 34 patients in one study, only one developed an adenocarcinoma following the initial polypectomy [9][10][11]. Unfortunately, in 50-80% of DALM cases the lesions are not visible during standard endoscopy [10,12]. Several modalities of endoscopic imaging have emerged to improve detection of flat or discolored lesions [13]. Among them, multiphoton microendoscopy uses nonlinear optical processes to yield label-free, high-resolution image data and rapid three-dimensional image acquisition up to several hundred microns in depth, without the need for tissue resection and subsequent fixation [14]. These systems have recently overcome a number of specific technical challenges, including obtaining uniform scans, improving sensitivity, ensuring durability and reliability of the scanner, and miniaturizing the distal scan mechanism [15,16]. Miniaturization of multiphoton laser microscopy systems greatly enhances the clinical translatability of this modality, leading towards in vivo imaging applications in gastrointestinal epithelial tissues [17][18][19]. A form of multiphoton imaging, second harmonic generation (SHG) is a nonlinear optical process whereby non-centrosymmetric crystalline structures (such as collagen) interact with two low-energy photons nearly simultaneously, resulting in the generation of a new photon at twice the energy (or half the wavelength) of the incident photons. Multiphoton endoscopic systems capable of SHG imaging show great potential in improving clinical decision-making for patients with IBD. In the colon, collagen structures can be used to quantify changes in the three-dimensional crypt microstructure in the setting of IBD and associated dysplastic transformation. A systematic, quantitative model of these optical biomarkers of dysplasia progression could guide clinical translation of SHG imaging approaches for IBD patients. Our study was designed to further develop this biomarker model via SHG imaging and quantification of the morphological changes that occur in the crypt structures of murine models of spontaneous and colitis-associated colorectal tumors. A series of azoxymethane (AOM) injections was used as a murine model for spontaneous (ALM-like) tumor progression, as it induces polypoid growth in the distal colon similar to human CRC [20]. AOM injections followed by subsequent administration of dextran sodium sulfate salt (DSS) are considered to have long-term effects that produce cancerous flat lesions or dysplasia-associated lesions/masses (DALM-like) similar to those observed in humans [21].
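As a quick numerical check of the frequency doubling described above, the sketch below computes photon energies for 800 nm excitation (the wavelength used in the imaging system described below) and its 400 nm second harmonic.

```python
# Photon energy E = h*c/lambda. SHG combines two incident photons into
# one photon at twice the energy, i.e., half the wavelength.
H = 6.626e-34    # Planck constant, J*s
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electronvolt

def photon_energy_ev(wavelength_nm):
    return H * C / (wavelength_nm * 1e-9) / EV

print(photon_energy_ev(800))  # ~1.55 eV per excitation photon
print(photon_energy_ev(400))  # ~3.10 eV: twice the energy, half the wavelength
```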
We also acquired SHG images of epithelial tissue over discrete timepoints to investigate whether temporal changes in crypt microstructure could be detected and quantified at varying stages of tumor progression. Key SHG image features could serve as optical biomarkers: a translatable measure towards detecting lesions and distinguishing ALM- and DALM-type lesions endoscopically in patients with known IBD.

Murine models and colorectal tissue procurement

Two murine models were used for this study: a spontaneous colorectal carcinoma model and a colitis-induced colorectal carcinoma model [20]. Thirty-one A/J mice (The Jackson Laboratory, ME, USA) were used in accordance with the University of Arkansas Institutional Animal Care and Use Committee-approved protocol #18093. Mice were received at 9 weeks of age and allowed to acclimate for one week in a temperature- and light-controlled facility; mice were maintained on a 12 h day-night cycle, with a bedding mixture of approximately 75% aspen chip bedding (7090A, Envigo, USA) and 25% paper product bedding (7099, Envigo, USA). All procedures were conducted in the light phase. Food (Teklad 8640, Envigo, USA) and water were provided ad libitum (Fig. 1). Mice were anesthetized using 1.5% isoflurane (IsoThesia, Henry Schein Animal Health, OH, USA) and oxygen (Compressed USP Oxygen, Airgas, PA, USA) from a precision vaporizer (911103, VetEquip, CA, USA), then placed on a heating pad (TP700, Stryker Medical, MI, USA) with a nosecone for a constant supply of 1.5% isoflurane. The spontaneous colorectal carcinoma model (referred to as the AOM model) consisted of a series of six weekly subcutaneous injections of AOM (A5486, Sigma-Aldrich, Inc., MO, USA) diluted in sterile saline, at a dose of 10 mg per kg body weight [20]. Control mice underwent a series of six weekly subcutaneous injections of saline (injection site over the shoulders). The AOM and Control cohorts acclimated and were injected in parallel. The colitis-induced colorectal carcinoma model (referred to as the AOM-DSS model) consisted of a single AOM injection (at 10 mg/kg body weight, subcutaneous administration), followed by several courses of drinking water supplemented with DSS (36,000-50,000 molecular weight, SKU 0216011080, MP Biomedicals, OH, USA) at a concentration of 1.5% (w/v) (Fig. 1) [20]. The AOM-DSS cohorts acclimated and were injected a few months after the AOM and Control cohorts. Subcutaneous AOM injections were administered with 28 gauge syringe needles (BD329410, VWR, PA, USA) in volumes of less than 200 μL. For the AOM-DSS model, beginning on the day of the AOM injection, mice were provided free access to the DSS solution for seven days [20]. On Day 8, the mice were provided untreated drinking water ad libitum for 14 days, and the DSS-solution and untreated water cycle was then repeated at Day 22 and Day 53 (Fig. 1) [20]. Post treatment, mice were provided food and water ad libitum and were weighed and inspected daily until two weeks after concluding AOM/DSS treatment, after which they were inspected daily and weighed weekly. Mice were euthanized at early (20-23 weeks) or late (24-28 weeks) time points, representative of early- or late-stage carcinoma progression (Fig. 1). Mice within each cohort were randomly selected for euthanasia at either the early or late time point. Figure 1 notes the number of mice per cohort: Control n = 9, AOM early n = 6, AOM late n = 3, AOM-DSS early n = 4, AOM-DSS late n = 9.
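As a worked example of the dosing just described, the sketch below computes the per-injection AOM dose at 10 mg/kg and the minimum stock concentration needed to stay under the 200 μL injection volume; the 25 g body mass is an assumed illustrative value, not a measurement from the study.

```python
# AOM dose at 10 mg/kg and the stock concentration required to deliver
# it in under 200 uL. The body mass below is an illustrative assumption.
body_mass_g = 25.0
dose_mg = (body_mass_g / 1000.0) * 10.0  # 0.25 mg per injection
min_conc_mg_per_ml = dose_mg / 0.200     # >= 1.25 mg/mL in saline
print(dose_mg, min_conc_mg_per_ml)
```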
AOM-injected mice develop lesions and small polyps at approximately 20 weeks, with tumors progressing in size over the subsequent weeks; mice began to show signs of rectal bleeding past 28 weeks and were therefore euthanized [20]. Mice were euthanized via anesthetized cervical dislocation; the administered anesthetic was as previously described (1.5% isoflurane and oxygen). Murine colons were resected within 15 min of euthanasia, and a 1 cm section of distal colon was longitudinally sectioned and placed on a coverslip for imaging. An adjacent section of tissue was fixed in formalin and paraffin-embedded for reference H&E staining.

Multiphoton microscopy imaging system

The microscopy system comprises a titanium-sapphire ultrafast femtosecond pulsed laser source (Mai Tai eHP, Spectra-Physics, CA, USA) and a galvo-resonant scan head (MPM-SCAN64J, Thorlabs, USA) that acquires a fixed 15 frames per second. The laser power was controlled via a quarter-wave plate and a polarizing beam splitter, to reduce the incoming excitation power at the sample to the range of tens to hundreds of milliwatts. The light was then circularly polarized, to reduce orientation bias of SHG in collagen, by use of a half-wave plate and a second polarizing beam splitter. Circular polarization was determined by measuring the intensity of orthogonal collagen fibers at multiple regions and selecting the positioning of the half-wave plate that minimized intensity variation at different orientations. Following the scanning optics, the illumination laser beam (at 800 nm wavelength) passed through the back aperture of a 20x water immersion objective (0.85 NA, Nikon, NY, USA) prior to illuminating the sample, which was placed above the objective on an inverted microscopy stage. Backscattered SHG signal was collected via the objective and traveled through a 635 nm long pass dichroic beamsplitter (Di02-R635, Semrock, NY, USA), a 447/60 nm filter set (447/60 nm 25 mm dia. filter, Semrock, NY, USA), a 400/40 nm filter set (400/40 nm 25 mm dia. filter, Semrock, NY, USA), and a photomultiplier tube (H7422 PASO, Hamamatsu, Japan). The entire optical system resides inside an enclosure to reduce ambient light.

Second harmonic generation image acquisition

Freshly resected tissue was placed epithelium-down on a 24 × 40 mm No. 1 coverslip, then imaged using the non-linear optical microscopy system described above. Total image acquisition time was approximately 1.5 s, with total optical power at the sample limited to prevent photo-damage. Individual images were acquired at 512 × 512 pixels at 14-bit depth, yielding a 522 μm × 522 μm (~0.27 mm²) field of view, at approximately 20 μm from the epithelial surface. Image stacks were acquired at consecutive depths from the epithelial surface, in 20 μm steps, from 20 μm to 100 μm below the epithelial surface (Fig. 2).
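A quick check of the acquisition geometry just stated: 512 pixels over a 522 μm field of view gives a lateral pixel pitch of about 1 μm, and each z-stack spans five planes.

```python
fov_um, pixels = 522.0, 512
print(fov_um / pixels)           # ~1.02 um lateral pixel pitch
print((fov_um / 1000.0) ** 2)    # ~0.27 mm^2 field of view
print(list(range(20, 101, 20)))  # stack depths: [20, 40, 60, 80, 100] um
```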
Optimization of automated crypt detection sensitivity

In previous work, we developed an algorithm for determining optimal parameters for binarizing grayscale images of colon epithelium [22,23]. The image intensities were re-scaled between 0 and 1, and the thresholds for binarizing were optimized to maximize the ability of the algorithm to segment crypts, which can vary depending on the strength of the SHG signal in the tissue (details are included in Additional file 1: Supplemental Information). The selected optimal parameters for the SHG image data set were calculated to have a 67% crypt detection sensitivity (CDS) when compared to manual selection of crypt locations.

Quantification of crypt image segmentation

Following the optimization of the threshold combinations for the algorithm's % CDS, the optimized algorithm was applied to the image set and the images were converted into binary images, where white objects were defined as the detected crypt structures [22,23]. Figure 3 summarizes the image features that were quantified, as well as providing a visual representation of each calculation. Details of the image segmentation algorithm are included in Additional file 1: Supplemental Information. Early stages of CRC typically include regions of abnormal morphology, for example, aberrant crypt foci (ACF), which are enlarged or non-tubular (branching) crypts, and/or multiple layers of cells lining a crypt [24,25]. We chose to measure various crypt morphology image features related to changes viewed during histopathology, including crypt area, circularity, and distribution. The mean and standard deviation (referred to as variance in the results) of each image feature within each image were compared via a nested one-way ANOVA. The standard deviation of the image features within each image was calculated in order to measure the heterogeneity of the crypt structures; for example, tumor-adjacent regions tend to have both abnormally large and abnormally small crypt structures, which could produce a mean in the normal range, while the standard deviation of crypt areas within the image would retain the heterogeneous nature. The results of the quantification of image features were then compared across cohorts: AOM early time point, AOM late time point, AOM-DSS early time point, AOM-DSS late time point, and Control. A nested one-way ANOVA with Tukey's post-test was used to determine statistically significant differences between the cohorts (JMP Pro 13).

Depth section image analysis

Each sample was imaged from 20 μm to 100 μm below the epithelial surface in 20 μm increments (Fig. 2); depth stacks were acquired from each sample at one to three locations on the tissue section. Regions in normal epithelium were selected randomly. Regions in AOM and AOM-DSS epithelium were selected adjacent to tumors; tumors were characterized as epithelial regions with SHG signal but no discernable crypt structures. These depth section images were then analyzed with the algorithm to quantify the previously described image features. The results were then compared across depths, as well as across the AOM and AOM-DSS cohorts. A nested two-way ANOVA with Tukey's post-test was used to determine statistically significant differences between the cohorts (JMP Pro 13).
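The sketch below illustrates this style of pipeline with scikit-image: binarize an SHG frame, label candidate crypt objects, and compute per-image means and standard deviations of the features summarized in Fig. 3 (area, circularity, eccentricity, nearest-neighbor distance). It is a simplified stand-in for the optimized algorithm detailed in Additional file 1; the input filename, threshold choice, and size filter are assumptions, and it presumes at least two detected crypts.

```python
import numpy as np
from scipy.spatial import cKDTree
from skimage import filters, io, measure

img = io.imread("shg_frame.tif").astype(float)     # hypothetical SHG frame
img = (img - img.min()) / (img.max() - img.min())  # rescale to [0, 1]

# Crypt lumens appear SHG-dark, bounded by bright collagen, so a simple
# global threshold marks them as foreground objects.
binary = img < filters.threshold_otsu(img)
labels = measure.label(binary)
props = [p for p in measure.regionprops(labels) if p.area > 100]  # size filter

area = np.array([p.area for p in props])
circularity = np.array([4 * np.pi * p.area / p.perimeter ** 2 for p in props])
eccentricity = np.array([p.eccentricity for p in props])

# Nearest-neighbor distance between crypt centroids (k=1 is the point itself).
centroids = np.array([p.centroid for p in props])
nearest = cKDTree(centroids).query(centroids, k=2)[0][:, 1]

for name, values in [("area", area), ("circularity", circularity),
                     ("eccentricity", eccentricity),
                     ("nearest neighbor", nearest)]:
    # Per-image mean and standard deviation (the "variance" in the Results).
    print(name, values.mean(), values.std())
```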
Results

Qualitative comparison of second harmonic generation images

Figure 4 shows examples of histology and SHG of both the AOM and AOM-DSS murine models. Normal crypt structures tend to be uniform in size and general shape across the field of view, with crypt shape being tubular in transverse view (Fig. 4a) and roughly circular in en face view (Fig. 4b, h, i). Collagen distribution appears relatively even throughout the field of view (Fig. 4b, i). Crypt structures directly adjacent to tumor regions, or in regions of early dysplasia, tend to vary in size across a field of view, with one or more crypts being enlarged (Fig. 4c, d, k), and crypts often being oblong and/or having serrated edges in en face view (Fig. 4e). SHG images of grossly visible tumors do not show any discernable crypt structures (Fig. 4f, k).

Increased mean distance between crypts in AOM vs. AOM-DSS cohorts

Nearest neighbor and crypt circularity image features showed significant differences between the spontaneous CRC tumor and colitis-associated tumor models. Figure 5 shows statistically significant comparisons of the cohorts for the mean of the nearest neighbor distance (a), and the standard deviation (variance) of the nearest neighbor distance (b) and circularity (c). Results for all image features, including non-statistically significant ones, are included as Additional file 1: Supplemental Figures. There was statistical significance when comparing the AOM and AOM-DSS cohorts using the nearest neighbor quantification (Fig. 5a, b). Mean nearest neighbor distance was greater for the AOM late and AOM-DSS early cohorts as compared to the Control (Fig. 5a). The AOM late cohort mean nearest neighbor distance was also greater than that of the AOM-DSS early cohort (Fig. 5a). When comparing the variance of nearest neighbor distance, once again the AOM late and AOM-DSS early cohorts had greater values than the Control (Fig. 5b). The variance of the AOM late group was also significantly greater than that of the AOM-DSS late group (Fig. 5b). Measuring the variance of circularity of the cohorts showed a difference between the AOM early cohort and the Control (Fig. 5c).

Differences in circularity and mean distance between crypts in tumor-bearing vs. control cohorts at varying acquisition depths

As previously stated, images were taken at increasing depths: 20, 40, 60, 80, and 100 μm into the epithelial layer. We applied the image feature quantification algorithm to each depth, per treatment cohort. There were 43 image stacks (5 images in a stack, 20-100 μm) in total across all 5 cohorts. The mean and standard deviation of image features were compared across cohorts at specific depths of acquisition (Fig. 6). As previously described, the mean and standard deviation (variance) of each image feature within an image were compared via a nested one-way ANOVA. Results for all image features with at least one statistically significant difference between two cohorts are included as Additional file 1: Supplemental Figures. There were significant differences between the Control and AOM cohorts, and between the Control and AOM-DSS cohorts, for mean nearest neighbor distance at a depth of 40 μm specifically, as well as for the standard deviation of circularity at a depth of 60 μm specifically, when comparing image stacks. No image feature showed significant differences between the AOM early and AOM late cohorts, nor between the AOM early and AOM-DSS early cohorts (Fig. 6a-c).

Decreased crypt circularity at increased acquisition depth for AOM late cohort

As previously stated, images were taken at increasing depths: 20, 40, 60, 80, and 100 μm into the epithelial layer. The mean and standard deviation of image features were compared across depth of acquisition within each cohort (Fig. 6d) via a nested one-way ANOVA. Results for all image features with at least one statistically significant difference between two cohorts are included as Additional file 1: Supplemental Figures. There was no significant difference in the mean of any image feature across depths within cohorts; there was a significant difference within the AOM late cohort for the standard deviation of circularity (Fig. 6d). There were differences between 100 μm and the more superficial 20 and 40 μm depths for the standard deviation of circularity of the AOM late cohort.
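The cohort comparisons reported above can be sketched in Python as a one-way ANOVA with Tukey's post-test; note this simplified version ignores the nesting structure (images within mice) that the published nested analysis accounts for, and the data file and column names are hypothetical.

```python
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical per-image feature table: cohort label and mean
# nearest-neighbor distance per image.
feats = pd.read_csv("image_features.csv")
groups = [g["mean_nn"].values for _, g in feats.groupby("cohort")]
print(f_oneway(*groups))  # one-way ANOVA across the five cohorts

# Tukey's post-test for pairwise cohort differences.
print(pairwise_tukeyhsd(feats["mean_nn"], feats["cohort"], alpha=0.05))
```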
Fig. 3 Summary of quantitative image features. Each image feature is described with an equation and a visual representation of the calculation using an SHG image of normal epithelium; scale bar is 100 μm. For eccentricity, a represents half the length of the major axis, and b represents half the length of the minor axis. For nearest neighbor, (xc, yc) represents the centroid coordinate and (xa, ya) represent the centroid coordinates of adjacent crypts; the arrows represent the distances to adjacent crypts, and the red arrow represents the distance selected as the nearest neighbor. For the centroid distance functions (CDF), (xc, yc) represents the centroid coordinate and (xb, yb) represent the boundary pixel coordinates. The average CDF was calculated by averaging all the distances to the boundary pixels (red arrows); the minimum CDF was calculated by selecting the distance to the closest boundary pixel (red arrow); the maximum CDF was calculated by selecting the distance to the furthest boundary pixel (red arrow).

Discussion

Various modalities of endoscopic imaging have emerged with the goal of improving early detection of flat or discolored lesions, such as chromoendoscopy, narrow band imaging, and laser microendoscopy [13,26]. Chromoendoscopy consists of spraying a colorimetric stain, such as indigo carmine, to provide contrast during colonoscopy [27]. Narrow band imaging has been described as label-free chromoendoscopy, as the optical filters only allow imaging of a small range of wavelengths, which enhances contrast in blood vessels and capillary patterns without the need for a contrast agent [26,27]. While chromoendoscopy and narrow band imaging have shown promise for improving detection of dysplastic/neoplastic lesions, they have not been able to distinguish between the similar structures of ALM and DALM lesions [13]. Laser confocal microendoscopy features high spatial resolution, able to resolve cellular and subcellular structures, providing more detailed imaging features that relate more closely to histopathological features [10]. The Cellvizio® confocal laser microendoscopy system showed potential in differentiating between DALM lesions and adjacent normal colorectal epithelial crypt structure in a preliminary clinical trial [28]. Inserting the Cellvizio® endoscopic probe into an existing working channel during colonoscopy showed that DALM regions had enlarged crypts, increased irregularity in the distribution of crypts, and increased space between crypts, as well as crypt destruction or fusion [28]. In contrast to confocal laser microendoscopy, multiphoton microendoscopy has additional benefits, including the ability to acquire label-free images of structural/morphological information (via SHG imaging) as well as insight into metabolic information (via autofluorescence of intrinsic biomolecules, such as nicotinamide adenine dinucleotide and flavin adenine dinucleotide, among others) up to several hundred microns in depth [29][30][31][32]. SHG imaging, as shown in this manuscript, is well-suited for characterization of collagen structures, especially collagen type I, which makes up much of the extracellular matrix and connective tissues [29,33]. SHG imaging of colorectal epithelium highlights crypt shape, as the collagen fibers provide structure surrounding the cellular arrangement, without the need for any exogenous contrast agent [34].
We have described our analysis of colorectal epithelial microstructure from SHG images of freshly resected murine colons using automated quantification of morphological image features. These results, a comparison of quantified features of crypt morphology (especially nearest neighbor distance and circularity), demonstrated the ability of our quantitative image feature algorithms to detect differences between spontaneous (AOM model) and colitis-associated (AOM-DSS model) murine colorectal tissue specimens. Limitations of SHG imaging include the ability to acquire structural but not cellular information, as well as the need for clinician/investigator judgement to recognize some additional parameters during imaging, such as the presence of tumors, which are neither detected nor measured by the automated quantification algorithm. Future work in this area will include murine models of familial adenomatous polyposis, and other hereditary models, to more thoroughly investigate the morphological features of the most common types of CRC. A common murine model of familial adenomatous polyposis, featuring a deletion of the APC gene, develops spontaneous adenomas in the colon, although these lesions may not fully exhibit the physiological features of the disease as seen in humans [35]. Despite some limitations, the APC/min mouse model and other models will be included in future SHG imaging studies in order to expand the scope of the current work, which is limited to spontaneous and colitis-associated tumor development. The image biomarkers in this study relate to crypt morphology features that are analogous to conventional endoscopic biopsy and histopathology: crypt shape, size, and spatial distribution. Statistically significant differences between the spontaneous tumor (AOM) and colitis-associated tumor (AOM-DSS) cohorts indicate that different models of tumor progression can potentially be quantified via SHG imaging, possibly distinguishing between early and late tumor progression. However, none of the image features in this study, which are based solely on collagen-derived SHG signal, showed statistical differences within tumor-bearing cohorts, or between early and late time points of the AOM cohorts. It is possible that combining SHG data with autofluorescence imaging data could be a promising technique for acquiring intraoperative complementary data in order to improve endoscopic differentiation between ALM and DALM lesions in patients with UC, but this would require additional study. These results provide a quantitative model of the morphological changes that crypts undergo during dysplastic transformation, and can serve to guide future translational investigation and clinical studies in patients with IBD, with the long-term goal of reducing the morbidity associated with prophylactic colectomy.

Conclusion

A number of factors contribute to dysplastic transformation in colon epithelium, and it is important to fully understand disease progression in order to improve techniques and tools for screening, diagnosis, and treatment. While studies of collagen fiber structure using optical imaging have been conducted in cancers such as cervical cancer [17], collagen structures in murine CRC studies have been understudied [36]. Multiphoton endoscopic imaging has been shown to be a feasible technology for clinical translation due to its miniaturization and label-free image acquisition.
The introduction of image analysis algorithms into computer-aided diagnostic methods yields data complementary to conventional endoscopic procedures, which could lead to improved clinical decision-making for patients with known IBD. The methods described here provide insight into the ability of SHG imaging to yield relevant data about the crypt microstructure in colorectal epithelium, specifically the potential to distinguish between ALM-like and DALM-like murine models using quantification of crypt shape and distribution, informing the future design of translational multiphoton imaging systems and protocols.
BUbiNG: Massive Crawling for the Masses

Although web crawlers have been around for twenty years by now, there is virtually no freely available, open-source crawling software that guarantees high throughput, overcomes the limits of single-machine systems and at the same time scales linearly with the amount of resources available. This paper aims at filling this gap, through the description of BUbiNG, our next-generation web crawler built upon the authors' experience with UbiCrawler [Boldi et al. 2004] and on the last ten years of research on the topic. BUbiNG is an open-source, fully distributed Java crawler; a single BUbiNG agent, using sizeable hardware, can crawl several thousand pages per second respecting strict politeness constraints, both host- and IP-based. Unlike existing open-source distributed crawlers that rely on batch techniques (like MapReduce), BUbiNG job distribution is based on modern high-speed protocols so as to achieve very high throughput.

INTRODUCTION

A web crawler (sometimes also known as a (ro)bot or spider) is a tool that systematically downloads a large number of web pages starting from a seed. Web crawlers are, of course, used by search engines, but also by companies selling "Search-Engine Optimization" services, by archiving projects such as the Internet Archive, by surveillance systems (e.g., that scan the web looking for cases of plagiarism), and by entities performing statistical studies of the structure and the content of the web, just to name a few. The basic inner working of a crawler is surprisingly simple from a theoretical viewpoint: it is a form of traversal (for example, a breadth-first visit). Starting from a given seed of URLs, the set of associated pages is downloaded, their content is parsed, and the resulting links are used iteratively to collect new pages. Albeit in principle a crawler just performs a visit of the web, there are a number of factors that make the visit of a crawler inherently different from a textbook algorithm. The first and most important difference is that the size of the graph to be explored is unknown and huge; in fact, infinite. The second difference is that visiting a node (i.e., downloading a page) is a complex process that has intrinsic limits due to network speed, latency, and politeness, the requirement of not overloading servers during the download. Not to mention the countless problems (errors in DNS resolution, protocol or network errors, presence of traps) that the crawler may find on its way. In this paper we describe the design and implementation of BUbiNG, our new web crawler built upon our experience with UbiCrawler [Boldi et al. 2004] and on the last ten years of research on the topic. 1 BUbiNG aims at filling an important gap in the range of available crawlers. In particular:
-It is a pure-Java, open-source crawler released under the GNU GPLv3. 1 A preliminary poster appeared in [Boldi et al. 2014].
-It is fully distributed: multiple agents perform the crawl concurrently and handle the necessary coordination without the need of any central control; given enough bandwidth, the crawling speed grows linearly with the number of agents.
-Its design acknowledges that CPUs and OS kernels have become extremely efficient in handling a large number of threads (in particular, threads that are mainly I/O-bound) and that large amounts of RAM are by now easily available at a moderate cost.
More in detail, we assume that the memory used by an agent must be constant in the number of discovered URLs, but that it can scale linearly in the number of discovered hosts. This assumption simplifies the overall design and makes several data structures more efficient. -It is very fast: on a 64-core, 64 GB workstation it can download hundreds of millions of pages at more than 10 000 pages per second, respecting politeness both by host and by IP, while analyzing, compressing and storing more than 160 MB/s of data. -It is extremely configurable: beyond choosing the sizes of the various data structures and the communication parameters involved, implementations can be specified by reflection in a configuration file, and the whole dataflow followed by a discovered URL can be controlled by arbitrary user-defined filters, which can further be combined with standard Boolean-algebra operators. -It fully respects the robot exclusion protocol, a de facto standard that well-behaved crawlers are expected to obey. -It guarantees that, hostwise, the visit is an exact breadth-first visit (albeit the global policy can be customized), thus collecting pages in a more predictable and principled manner. -It guarantees that politeness constraints are satisfied both at the host and the IP level, that is, that two data requests to the same host (name) or IP are separated by at least a specified amount of time. The two intervals can be set independently and, in principle, customized per host or IP. When designing a crawler, one should always ponder the specific usage the crawler is intended for, as this decision influences many of the design details. Our main goal is to provide a crawler that can be used out of the box as an archival crawler, but that can be easily modified to accomplish other tasks. Being an archival crawler, it does not perform any refresh of the visited pages, and moreover it tries to perform a visit that is as close to breadth-first as possible (more about this below). Both behaviors can in fact be modified easily in case of need, but a discussion of the possible ways to customize BUbiNG is outside the scope of this paper. We plan to use BUbiNG to provide new datasets for the research community. Datasets crawled by UbiCrawler have been used in hundreds of scientific publications, and BUbiNG makes it possible to gather data orders of magnitude larger. MOTIVATION There are four main reasons why we decided to design BUbiNG as described above. Principled sampling. Analyzing the properties of the web graph has proven to be an elusive goal. A recent large-scale study [Meusel et al. 2015] has shown, once again, that many alleged properties of the web are actually due to crawling and parsing artifacts. By creating an open-source crawler that enforces a breadth-first visit strategy, altered by politeness constraints only, we aim at creating web snapshots providing more reproducible results. While breadth-first visits have their own artifacts (e.g., they can induce an apparent indegree power law even on regular graphs [Achlioptas et al. 2009]), they are a principled approach that has been widely studied and adopted. More detailed analyses, like spam detection, topic selection, and so on, can be performed offline. A focused crawling activity can actually be detrimental to the study of the web, which should be sampled "as it is". Coherent time frame. Developing a crawler with speed as a main goal might seem restrictive.
Nonetheless, for the purpose of studying the web, speed is essential, as gathering large snapshots over a long period of time might introduce biases that would be very difficult to detect and undo. Pushing hardware to the limit. BUbiNG is designed to exploit hardware to its limits, by carefully removing the bottlenecks and contention usually present in highly parallel distributed crawlers. As a consequence, it makes large-scale crawling possible even with limited hardware resources. Consistent crawling and analysis. BUbiNG comes with a series of tools that make it possible to analyze the harvested data in a distributed fashion, also exploiting multicore parallelism. In particular, the construction of the web graph associated with a crawl uses the same parser as the crawler. In the past, a major problem in the analysis of web crawls turned out to be the inconsistency between the parsing performed at crawl time and the parsing performed at graph-construction time, which introduced artifacts such as spurious components (see the comments in [Meusel et al. 2015]). By providing a complete framework that uses the same code both online and offline we hope to increase the reliability and reproducibility of the analysis of web snapshots. RELATED WORK Web crawlers have been developed since the very birth of the web. The first-generation crawlers date back to the early 90s: World Wide Web Worm [McBryan 1994], RBSE spider [Eichmann 1994], MOMspider [Fielding 1994], WebCrawler [Pinkerton 1994]. One of the main contributions of these works was to point out some of the main algorithmic and design issues of crawlers. In the meanwhile, several commercial search engines with their own crawlers (e.g., AltaVista) were born. In the second half of the 90s, the fast growth of the web called for large-scale crawlers, like the crawler [Burner 1997] of the Internet Archive (a non-profit corporation aiming to keep large archival-quality historical records of the world-wide web) and the first generation of the Google crawler [Brin and Page 1998]. This generation of spiders was able to download tens of millions of pages efficiently. At the beginning of the 2000s, the scalability, extensibility and distribution of crawlers became key design points: this was the case for the Java crawler Mercator [Najork and Heydon 2002] (the distributed version of [Heydon and Najork 1999]), Polybot [Shkapenyuk and Suel 2002], IBM WebFountain [Edwards et al. 2001], and UbiCrawler [Boldi et al. 2004]. These crawlers were able to produce snapshots of the web of hundreds of millions of pages. More recently, a new generation of crawlers was designed with the aim of downloading billions of pages, like [Lee et al. 2009]. Nonetheless, none of them is freely available and open source: BUbiNG is the first open-source crawler designed to be fast, scalable and runnable on commodity hardware. For more details about previous work or about the main issues in the design of crawlers, we refer the reader to [Olston and Najork 2010; Mirtaheri et al. 2013]. Open-source crawlers Although web crawlers have been around for twenty years now (since the spring of 1993, according to [Olston and Najork 2010]), the range of freely available ones, let alone open-source ones, is still quite narrow.
With the few exceptions that will be discussed below, most stable projects we are aware of (GNU wget or mnoGoSearch, to cite a few) do not (and are not designed to) scale to downloads of more than a few thousand or tens of thousands of pages. They can be useful to build an intranet search engine, but not for web-scale experiments. Heritrix [Her 2003; Mohr et al. 2004] is one of the few examples of an open-source crawler designed to download large datasets: it was developed starting from 2003 by the Internet Archive [Int 1996] and it has since been actively developed. Heritrix (available under the Apache license), although of course multi-threaded, is a single-machine crawler, which is one of the main hindrances to its scalability. The default crawl order is breadth-first, as suggested by the archival goals behind its design. On the other hand, it provides a powerful checkpointing mechanism and a flexible way of filtering and processing URLs before and after fetching. It is worth noting that the Internet Archive proposed, implemented (in Heritrix) and fostered a standard format for archiving web content, called WARC, which is now an ISO standard [Iso 2009] and which BUbiNG also adopts for storing the downloaded pages. Nutch [Khare et al. 2004] is one of the best known existing open-source web crawlers; in fact, the goal of Nutch itself is much broader in scope, because it aims at offering a full-fledged search engine in all respects: besides crawling, Nutch implements features such as (hyper)text indexing, link analysis, query resolution, result ranking and summarization. It is natively distributed (using Apache Hadoop as task-distribution backbone) and quite configurable; it also adopts breadth-first as its basic visit mechanism, but can be optionally configured to go depth-first or even largest-score first, where scores are computed using a scoring strategy which is itself configurable. Scalability and speed are the main design goals of Nutch; for example, Nutch was used to collect the TREC ClueWeb09 dataset, the largest web dataset publicly available as of today, consisting of 1 040 809 705 pages that were downloaded at a speed of 755.31 pages/s [Clu 2009]; to do this a Hadoop cluster of 100 machines was used [Callan 2012], so the real throughput was about 7.55 pages/s per machine. This poor performance is not unexpected: using Hadoop to distribute the crawling jobs is easy, but not efficient, because it constrains the crawler to work in a batch fashion. It should not be surprising that using a modern job-distribution framework, as BUbiNG does, increases the throughput by orders of magnitude. ARCHITECTURE OVERVIEW BUbiNG stands on a few architectural choices which in some cases contrast with common folklore wisdom. We took our decisions after carefully comparing and benchmarking several options and gathering the hands-on experience of similar projects. -The fetching logic of BUbiNG is built around thousands of identical fetching threads performing only synchronous (blocking) I/O. Experience with recent Linux kernels and the increase in the number of cores per machine shows that this approach consistently outperforms asynchronous I/O. This strategy significantly simplifies the code, and makes it trivial to implement features like HTTP/1.1 "keepalive" multiple-resource downloads. -Lock-free [Michael and Scott 1996] data structures are used to "sandwich" fetching threads, so that they never have to access lock-based data structures.
This approach is particularly useful to avoid direct access to synchronized data structures with logarithmic modification time, such as priority queues, as contention between fetching threads can become very significant. -URL storage (both in memory and on disk) is entirely performed using byte arrays. While this approach might seem anachronistic, the Java String class can easily occupy three times the memory used by a URL in byte-array form (both due to additional fields and to 16-bit characters) and doubles the number of objects. BUbiNG aims at exploiting the large memory sizes available today, but garbage collection has a cost linear in the number of objects: this factor must be taken into account. -Following UbiCrawler's design [Boldi et al. 2004], BUbiNG agents are identical and autonomous. The assignment of URLs to agents is entirely customizable, but by default we use consistent hashing as a fault-tolerant, self-configuring assignment function. In this section we give an overview of the structure of a BUbiNG agent; the following sections detail the behavior of each component. The inner structure and data flow of an agent are depicted in Figure 1. The bulk of the work of an agent is carried out by low-priority fetching threads, which download pages, and parsing threads, which parse and extract information from downloaded pages. Fetching threads usually number in the thousands, and spend most of their time waiting for network data, whereas one usually allocates as many parsing threads as the number of available cores, because their activity is mostly CPU-bound. Fetching threads are connected to parsing threads by a lock-free result list in which fetching threads enqueue buffers of fetched data, waiting for a parsing thread to analyze them. Parsing threads poll the result list using an exponential backoff scheme, perform actions such as parsing and link extraction, and signal back to the fetching thread that the buffer can be filled again. As parsing threads discover new URLs, they enqueue them to a sieve that keeps track of which URLs have already been discovered. A sieve is a data structure similar to a queue with memory: each enqueued element will be dequeued at some later time, with the guarantee that an element that is enqueued multiple times will be dequeued just once. URLs are added to the sieve as they are discovered by parsing. In fact, every time a URL is discovered it is first checked against a high-performance approximate LRU cache containing 128-bit fingerprints: more than 90% of the URLs discovered are discarded at this stage. The cache prevents frequently found URLs from putting the sieve under stress, and it has another important purpose: it prevents frequently found URLs assigned to another agent from being retransmitted several times. URLs that come out of the sieve are ready to be visited, and they are taken care of (stored, organized and managed) by the frontier, which is itself decomposed into several modules. The most important data structure of the frontier is the workbench, an in-memory data structure that keeps track of the visit state of each host currently being visited and that can check in constant time whether some host can be accessed for download without violating the politeness constraints. Note that to attain the goal of several thousand downloaded pages per second without violating politeness constraints it is necessary to keep track of the visit state of hundreds of thousands of hosts.
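Both the result list just described and the todo queue described below are polled with exponential backoff rather than with blocking primitives. The following is a minimal Java sketch of such a polling loop; the `ResultBuffer` type and the timing constants are our own illustrative choices, not BUbiNG's actual API:

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.locks.LockSupport;

// Illustrative placeholder for a buffer of fetched data.
final class ResultBuffer {}

final class BackoffPoller {
    private final ConcurrentLinkedQueue<ResultBuffer> resultList = new ConcurrentLinkedQueue<>();

    /** Polls the lock-free result list, sleeping with exponential backoff when it is empty. */
    ResultBuffer take() {
        long backoffNanos = 1_000;              // start at 1 microsecond (illustrative constant)
        final long maxBackoffNanos = 1_000_000; // cap at 1 millisecond (illustrative constant)
        ResultBuffer buffer;
        while ((buffer = resultList.poll()) == null) {
            LockSupport.parkNanos(backoffNanos); // no lock is ever taken
            backoffNanos = Math.min(backoffNanos * 2, maxBackoffNanos);
        }
        return buffer;
    }
}
```

The point of the pattern is that producers and consumers only ever touch a lock-free queue, so the cost of an empty poll is bounded by the backoff cap rather than by lock contention.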
When a host is ready for download, its visit state is extracted from the workbench and moved to a lock-free todo queue by a suitable thread. Fetching threads poll the todo queue with an exponential backoff, fetch resources from the retrieved visit state and then put it back onto the workbench. Note that we expect that once a large crawl has started, the todo queue will never be empty, so fetching threads will never have to wait. Most of the design challenges of the frontier components are in fact geared towards ensuring that fetching threads never wait on an empty todo queue. The main active component of the frontier is the distributor: a high-priority thread that processes URLs coming out of the sieve (and that must therefore be crawled). Assuming for a moment that memory is unbounded, the only task of the distributor is to iteratively dequeue a URL from the sieve, check whether it belongs to a host for which a visit state already exists, and then either create a new visit state or enqueue the URL to an existing one. If a new visit state is necessary, it is passed to a set of DNS threads that perform DNS resolution and then move the visit state onto the workbench. Since, however, breadth-first visit queues grow exponentially, and the workbench can use only a fixed amount of in-core memory, it is necessary to virtualize part of the workbench, that is, to write to disk part of the URLs coming out of the sieve. To decide whether to keep a visit state entirely in the workbench or to virtualize it, and also to decide when and how URLs should be moved from the virtualizer to the workbench, the distributor uses a policy that is described later. Finally, every agent stores resources in its store (which may possibly reside on a distributed or remote file system). The native BUbiNG store is a compressed file in the Web ARChive (WARC) format (the standard proposed and made popular by Heritrix). This standard specifies how to combine several digital resources with other information into an aggregate archive file. In BUbiNG, compression happens in a heavily parallelized way, with parsing threads independently compressing pages and using concurrent primitives to pass compressed data to a flushing thread. The sieve A sieve is a queue with memory: it provides enqueue and dequeue primitives, similarly to a standard queue, and each element enqueued to a sieve will eventually be dequeued. However, a sieve also guarantees that if an element is enqueued multiple times, it will nonetheless be dequeued just once. Sieves (albeit not under this name) have always been recognized as a fundamental basic data structure for a crawler: their main implementation issue lies in the unbounded, exponential growth of the number of discovered URLs. While it is easy to write enqueued elements to a disk file, checking that an element is not returned multiple times requires ad-hoc data structures: standard dictionaries would use too much in-core memory. The actual sieve implementation used by BUbiNG can be customized, but the default one, called MercatorSieve, is similar to the one suggested in [Heydon and Najork 1999] (hence its name). Each element known to the sieve is stored as a 64-bit hash in a disk file. Every time a new element is enqueued, its hash is stored in an in-memory array, and the element is saved in an auxiliary file. When the array is full, it is sorted and compared with the set of elements known to the sieve. The auxiliary file is then scanned, and previously unseen elements are stored for later dequeueing. All these operations require only sequential access to the files involved, and the sizing of the array is based on the amount of in-core memory available. Note that the output order is guaranteed to be the same as the input order (i.e., elements are appended in the order of their first appearance).
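To make the sieve contract concrete, here is a deliberately simplified, purely in-memory sketch in Java; a hash set stands in for the sorted on-disk hash file of the real MercatorSieve, and all names are ours, not BUbiNG's:

```java
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Queue;
import java.util.Set;

/**
 * Toy sieve: elements enqueued multiple times are dequeued exactly once,
 * in order of first appearance. The real MercatorSieve keeps 64-bit hashes
 * on disk and deduplicates by sorting and merging.
 */
final class ToySieve<T> {
    private final Set<T> seen = new HashSet<>();        // stands in for the on-disk hash file
    private final Queue<T> ready = new ArrayDeque<>();  // elements awaiting dequeue

    void enqueue(T element) {
        if (seen.add(element)) ready.add(element); // duplicates are silently dropped
    }

    T dequeue() {
        return ready.poll(); // null if empty; first-appearance order is preserved
    }

    public static void main(String[] args) {
        ToySieve<String> sieve = new ToySieve<>();
        sieve.enqueue("http://a.example/");
        sieve.enqueue("http://b.example/");
        sieve.enqueue("http://a.example/");  // duplicate: will not be dequeued twice
        System.out.println(sieve.dequeue()); // http://a.example/
        System.out.println(sieve.dequeue()); // http://b.example/
        System.out.println(sieve.dequeue()); // null
    }
}
```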
A generalization of the idea of a sieve, with the additional possibility of associating values with the elements, is the DRUM (Disk Repository with Update Management) structure used by IRLBot and described in [Lee et al. 2009]. A DRUM provides additional operations to retrieve or update the values associated with the elements. From an implementation viewpoint, DRUM is a Mercator sieve with multiple arrays, called buckets, in which a careful orchestration of in-memory and on-disk data makes it possible to sort in one shot sets that are an order of magnitude larger than the Mercator sieve would allow using the same quantity of in-core memory. However, to do so DRUM must sacrifice breadth-first order: due to the inherent randomization of the way keys are placed in the buckets, there is no guarantee that URLs will be crawled in breadth-first order, not even per host. Finally, the tight analysis in [Lee et al. 2009] of the properties of DRUM is unavoidably bound to the single-agent approach of IRLBot: for example, the authors conclude that a URL cache is not useful to reduce the number of insertions in the DRUM, but the same cache reduces network transmissions significantly. In our experience, once the cache is in place the Mercator sieve becomes much more competitive. There are several other implementations of the sieve logic currently in use. A quite common choice is to adopt an explicit queue and a Bloom filter [Bloom 1970] to remember enqueued elements. Albeit popular, this choice has no theoretical guarantees: while it is possible to decide a priori the maximum number of pages that will ever be crawled, it is very difficult to bound in advance the number of discovered URLs, and this number is essential in sizing the Bloom filter. If the discovered URLs significantly outnumber the expectation, several pages are likely to be lost because of false positives. A better choice is to use a dictionary of fixed-size fingerprints obtained from URLs using a suitable hash function. The disadvantage is that the structure would no longer use constant memory. The workbench The workbench is an in-memory data structure that contains the next URLs to be visited. It is one of the main novel ideas in BUbiNG's design, and one of the main reasons why we can attain a very high throughput. It is a significant improvement over IRLBot's two-queue approach [Lee et al. 2009], as it can detect in constant time whether a URL is ready for download without violating politeness limits. First of all, URLs associated with a specific host are kept in a structure called a visit state, containing a FIFO queue of the next URLs to be crawled for that host along with a next-fetch field that specifies the first instant in time when a URL from the queue can be downloaded, according to the per-host politeness configuration. Note that inside a visit state we only store a byte-array representation of the path and query of a URL: this approach significantly reduces object creation, and provides a simple form of compression by prefix omission.
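To make the prefix-omission point concrete, here is a minimal sketch in Java, with names of our own invention rather than BUbiNG's actual classes, of how a visit state can store only path-plus-query bytes and reconstruct full URLs on demand:

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayDeque;

/**
 * Sketch of per-host URL storage by prefix omission: the scheme and authority
 * are stored once per host, and each queued URL is reduced to the bytes of its
 * path and query.
 */
final class ToyVisitState {
    final String schemeAuthority; // e.g. "http://example.com", stored once
    final ArrayDeque<byte[]> pathQueries = new ArrayDeque<>();

    ToyVisitState(String schemeAuthority) {
        this.schemeAuthority = schemeAuthority;
    }

    void enqueue(String pathQuery) {
        // ISO-8859-1 maps bytes 1:1 to chars, so round-tripping is lossless.
        pathQueries.add(pathQuery.getBytes(StandardCharsets.ISO_8859_1));
    }

    String dequeueUrl() {
        byte[] pq = pathQueries.poll();
        return pq == null ? null : schemeAuthority + new String(pq, StandardCharsets.ISO_8859_1);
    }

    public static void main(String[] args) {
        ToyVisitState vs = new ToyVisitState("http://example.com");
        vs.enqueue("/index.html");
        vs.enqueue("/search?q=crawler");
        System.out.println(vs.dequeueUrl()); // http://example.com/index.html
        System.out.println(vs.dequeueUrl()); // http://example.com/search?q=crawler
    }
}
```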
Visit states are further grouped into workbench entries based on their IP address; every time the first URL for a given host is found, a new visit state is created and then the IP address is determined (by one of the DNS threads): the new visit state is either put in a new workbench entry (if no known host was associated with that IP address yet), or in an existing one. A workbench entry contains a queue of visit states (associated with the same IP) prioritized by their next-fetch field, and an IP-specific next-fetch field, containing the first instant in time when the IP address can be accessed again, according to the per-IP politeness configuration. The workbench is the queue of all workbench entries, prioritized on the next-fetch field of each entry maximized with the next-fetch field of the top element of its queue of visit states. In other words, the workbench is a priority queue of priority queues of FIFO queues. We remark that, due to our choice of priorities, there is a host that can be visited without violating host or IP politeness constraints if and only if the first URL of the top visit state of the top workbench entry can be visited. Moreover, if there is no such host, the delay after which a host will be ready is given by the priority of the top workbench entry minus the current time. Therefore, the workbench acts as a delay queue: its dequeue operation waits, if necessary, until a host is ready to be visited. At that point, the top entry E is removed from the workbench and the top visit state is removed from E. Both removals happen in logarithmic time (in the number of visit states). The visit state and the associated workbench entry act as a token that is virtually passed between BUbiNG's components to guarantee that no two components work on the same workbench entry at the same time (in particular, this enforces both kinds of politeness). In practice, as we mentioned in the overview, dequeueing is performed by a high-priority thread, the todo thread, that constantly dequeues visit states from the workbench and enqueues them into a lock-free todo queue, which is then accessed by fetching threads. This approach, besides avoiding contention by thousands of threads on a relatively slow structure, makes the number of visit states that are ready for download easily measurable: it is just the size of the todo queue. The downside is that, in principle, using very skewed per-host or per-IP politeness delays might cause the order of the todo queue not to reflect the actual priority of the visit states contained therein. Fetching threads A fetching thread is a very simple thread that iteratively extracts visit states from the todo queue. If the todo queue is empty, a standard exponential backoff procedure is used to avoid polling the list too frequently, but the design of BUbiNG aims at keeping the todo queue nonempty, avoiding backoff altogether. Once a fetching thread acquires a visit state, it tries to fetch the first URL of the visit state's FIFO queue. If suitably configured, a fetching thread can also iterate the fetching process on more URLs for a fixed amount of time, so as to exploit the "keepalive" feature of HTTP 1.1. Each fetching thread has an associated fetch data instance in which the downloaded data are buffered. Fetch data include a transparent buffering method that keeps a fixed amount of data in memory and dumps the remaining part to disk. By sizing the fixed amount suitably, most requests can be completed without accessing the disk, while at the same time rare large requests can be handled without allocating additional memory. After a resource has been fetched, the fetch data is put in the results queue so that one of the parsing threads can parse it. Once this process is over, the parsing thread sends a signal back so that the fetching thread can start working on a new URL. Once a fetching thread has to work on a new visit state, it puts the current visit state in a done queue, from which it will be dequeued by a suitable thread that will then put it back on the workbench together with its associated entry. Most of the time, a fetching thread is blocked on I/O, which makes it possible to run thousands of them in parallel. Indeed, the number of fetching threads determines the amount of parallelism BUbiNG can achieve while fetching data from the network, so it should be chosen as large as possible, compatibly with the amount of bandwidth available and with the memory used by fetch data.
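The life of a fetching thread can thus be summarized by a simple loop. The following Java sketch is ours, not BUbiNG's actual code, and uses illustrative types (`VisitState`, `FetchData`) and plain queues to show the token passing just described:

```java
import java.util.concurrent.ConcurrentLinkedQueue;

// Illustrative placeholders for BUbiNG's actual classes.
final class VisitState { String nextUrl() { return "http://example.com/"; } }
final class FetchData { void fetch(String url) { /* a blocking HTTP GET would go here */ } }

final class FetchingThread extends Thread {
    private final ConcurrentLinkedQueue<VisitState> todo;
    private final ConcurrentLinkedQueue<FetchData> results;
    private final ConcurrentLinkedQueue<VisitState> done;
    private final FetchData fetchData = new FetchData(); // one buffer per thread

    FetchingThread(ConcurrentLinkedQueue<VisitState> todo,
                   ConcurrentLinkedQueue<FetchData> results,
                   ConcurrentLinkedQueue<VisitState> done) {
        this.todo = todo; this.results = results; this.done = done;
    }

    @Override public void run() {
        while (!isInterrupted()) {
            VisitState vs = todo.poll();   // in reality: poll with exponential backoff
            if (vs == null) continue;
            fetchData.fetch(vs.nextUrl()); // synchronous (blocking) download
            results.add(fetchData);        // hand the buffer to a parsing thread
            // in reality: wait for the parsing thread's signal before reusing fetchData
            done.add(vs);                  // give the politeness token back
        }
    }
}
```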
Parsing threads A parsing thread iteratively extracts from the results queue the fetch data that have been previously enqueued by a fetching thread. Then, the content of the HTTP response is analyzed and possibly parsed. If the response contains an HTML page, the parser produces a set of URLs that are first checked against the URL cache and then, if not already seen, either sent to another agent or enqueued to the same agent's sieve. During the parsing phase, a parsing thread computes a digest of the response content. The signature is stored in a Bloom filter [Bloom 1970] and is used to avoid saving the same page (or near-duplicate pages) several times. Finally, the content of the response is saved to the store. Since two pages are considered (near-)duplicates if they have the same signature, the digest computation is responsible for content-based duplicate detection. In the case of HTML pages, in order to collapse near-duplicates, a heuristic is used: a hash fingerprint is computed on a summarized content, obtained by stripping HTML attributes and discarding digits and dates from the response content. This simple heuristic makes it possible, for instance, to collapse pages that differ only by visitor counters or calendars. In a post-crawl phase, several more sophisticated approaches can be applied, like shingling [Broder et al. 1997], simhash [Charikar 2002], fuzzy fingerprinting [Fetterly et al. 2003; Chakrabarti 2003], and others, e.g., [Manku et al. 2007]. For the sake of description, we will refer to the pages which are (near-)duplicates of some previously crawled page according to the above definition as duplicates, while we will call archetypes the pages which are not duplicates. DNS threads DNS threads are used to resolve the host names of new hosts: a DNS thread continuously dequeues from the list of newly discovered visit states, resolves the host name, adds the visit state to a workbench entry (or creates a new one, if the IP address itself is new), and puts it on the workbench. In our experience, it is essential to run a local recursive DNS server to avoid the bottleneck caused by an external server.
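Returning for a moment to the parsing threads' near-duplicate heuristic, the idea is easy to sketch. The following Java fragment is our own approximation (the exact stripping rules and digest choice are assumptions, not BUbiNG's actual implementation): it removes tag attributes and digit runs before hashing, so that two pages differing only in a counter hash identically.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

final class ToyPageDigest {
    /** Strips tag attributes and digits, then hashes: a rough sketch of the heuristic. */
    static byte[] digest(String html) throws NoSuchAlgorithmException {
        String summarized = html
            .replaceAll("<([a-zA-Z]+)[^>]*>", "<$1>") // drop attributes inside tags
            .replaceAll("\\d+", "");                   // drop digits (counters, dates)
        return MessageDigest.getInstance("MD5").digest(summarized.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        String a = "<html><body>Visitors: 1234</body></html>";
        String b = "<html><body>Visitors: 99999</body></html>";
        System.out.println(java.util.Arrays.equals(digest(a), digest(b))); // true
    }
}
```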
The workbench virtualizer The workbench virtualizer maintains on disk a mapping from hosts to FIFO virtual queues of URLs. Conceptually, all URLs that have been extracted from the sieve but have not yet been fetched are enqueued in the workbench visit state they belong to, in the exact order in which they came out of the sieve. Since, however, we aim at crawling with an amount of memory that is constant in the number of discovered URLs, part of the queues must be written to disk. Each virtual queue contains a fraction of the URLs from each visit state, in such a way that the overall URL order respects, per host, the original breadth-first order. Virtual queues are consumed as the visit proceeds, following the natural per-host breadth-first order. As fetching threads download URLs, the workbench is partially freed and can be filled with URLs coming from the virtual queues. This action is performed by the same thread emptying the done queue (the queue containing the visit states after fetching): as it puts visit states back on the workbench, it selects visit states with URLs on disk but no more URLs on the workbench and puts them on a refill queue that will later be read by the distributor. Initially, we experimented with virtualizers inspired by the BEAST module of IRLBot [Lee et al. 2009], although many crucial details of its implementation were missing (e.g., the treatment of HTTP and connection errors); moreover, due to the static once-and-for-all distribution of URLs among a number of physical on-disk queues, it was impossible to guarantee adherence to a breadth-first visit in the face of unpredictable network-related faults. Our second implementation was based on Berkeley DB, a key/value store that is also used by Heritrix. While extremely popular, Berkeley DB is a general-purpose storage system, and in Java, in particular, it imposes a very heavy load in terms of object creation and corresponding garbage collection. While providing in principle services like URL-level prioritization (which was not one of our design goals), Berkeley DB soon turned out to be a serious bottleneck in the overall design. We thus decided to develop an ad-hoc virtualizer oriented towards breadth-first visits. We borrowed from Berkeley DB the idea of writing data in log files that are periodically collected, but we decided to rely on memory mapping to lessen the I/O burden. In our virtualizer, on-disk URL queues are stored in log files that are memory mapped and transparently treated as a contiguous memory region. Each URL stored on disk is prefixed with a pointer to the position of the next URL for the same host. Whenever we append a new URL, we modify the pointer of the last stored URL for the same host accordingly. A small amount of metadata associated with each host (e.g., the head and tail of its queue) is stored in main memory. As URLs are dequeued to fill the workbench, part of the log files becomes free. When the ratio between the used and allocated space goes below a threshold (e.g., 50%), a garbage-collection process is started. Since URLs are always appended, there is no need to keep track of free space: we just scan the queues in order of first appearance in the log files and gather them at the start of the memory-mapped space. By keeping track (in a priority queue) of the position of the next URL to be collected in each queue, we can move items directly to their final position, updating the queue after each move. We stop when enough space has been freed, and delete the log files that are now entirely unused.
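A toy version of the on-disk layout can clarify the pointer scheme. In the sketch below (Java; an in-memory byte buffer stands in for the memory-mapped log files, and all names are ours, not BUbiNG's), each record stores a next pointer followed by the URL bytes, and per-host head/tail offsets live in main memory:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

/** Toy virtualizer: per-host FIFO queues threaded through one append-only buffer. */
final class ToyVirtualizer {
    private final ByteBuffer log = ByteBuffer.allocate(1 << 20); // stands in for mmapped log files
    private final Map<String, int[]> headTail = new HashMap<>(); // host -> {head offset, tail offset}
    private int appendAt = 0; // next free offset in the log

    void append(String host, String pathQuery) {
        byte[] url = pathQuery.getBytes(StandardCharsets.ISO_8859_1);
        int offset = appendAt;
        log.putInt(offset, -1);             // next pointer: none yet
        log.putInt(offset + 4, url.length); // payload length
        for (int i = 0; i < url.length; i++) log.put(offset + 8 + i, url[i]);
        appendAt = offset + 8 + url.length;
        int[] ht = headTail.get(host);
        if (ht == null) headTail.put(host, new int[] { offset, offset });
        else { log.putInt(ht[1], offset); ht[1] = offset; } // patch previous tail's next pointer
    }

    String dequeue(String host) {
        int[] ht = headTail.get(host);
        if (ht == null) return null;
        int next = log.getInt(ht[0]);
        int length = log.getInt(ht[0] + 4);
        byte[] url = new byte[length];
        for (int i = 0; i < length; i++) url[i] = log.get(ht[0] + 8 + i);
        if (next == -1) headTail.remove(host); else ht[0] = next;
        return new String(url, StandardCharsets.ISO_8859_1);
    }

    public static void main(String[] args) {
        ToyVirtualizer v = new ToyVirtualizer();
        v.append("example.com", "/a");
        v.append("example.com", "/b");
        System.out.println(v.dequeue("example.com")); // /a
        System.out.println(v.dequeue("example.com")); // /b
    }
}
```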
Note that most of the activity of our virtualizer consists of appends and garbage collections (reads are a lower-impact activity that is necessarily bound by the network throughput). Both activities are highly localized (at the end of the currently used region in the case of appends, and at the current collection point in the case of garbage collections), which makes good use of the caching facilities of the operating system. The distributor The distributor is a high-priority thread that orchestrates the movement of URLs out of the sieve, and loads URLs from virtual queues into the workbench as necessary. As the crawl proceeds, URLs accumulate in visit states at different speeds, both because hosts have different responsiveness and because websites have different sizes and branching factors. Moreover, the workbench has a (configurable) limit size that cannot be exceeded, since one of the central design goals of BUbiNG is that the amount of main memory occupied cannot grow unboundedly with the number of discovered URLs, but only with the number of discovered hosts. Thus, filling the workbench blindly with URLs coming out of the sieve would soon leave the workbench with URLs belonging to only a limited number of hosts. The front of a crawl, at any given time, is the number of visit states that are ready for download while respecting the politeness constraints. The front size determines the overall throughput of the crawler: because of politeness, the number of distinct hosts currently being visited is the crucial datum that establishes how fast or slow the crawl is going to be. One of the two forces driving the distributor is, indeed, that the front should always be large enough that no fetching thread ever has to wait. To attain this goal, the distributor enlarges the required front size dynamically: each time a fetching thread has to wait, even though the current front size is larger than the current required front size, the latter is increased. After a warm-up phase, the required front size stabilizes at a value that depends on the kind of hosts visited and on the amount of resources available. At that point, it is impossible to have a faster crawl given the available resources, as all fetching threads are continuously downloading data. Increasing the number of fetching threads, of course, may cause an increase of the required front size. The second force driving the distributor is the (somewhat informal) requirement that we stay as close to a breadth-first visit as possible. Note that this force works in the opposite direction with respect to enlarging the front: URLs that are already in existing visit states should in principle be visited before any URL in the sieve, but enlarging the front requires dequeueing more URLs from the sieve to find new hosts. The distributor is also responsible for filling the workbench with URLs coming either out of the sieve or out of virtual queues (circle numbered (1) in Figure 1). Once again, staying close to a breadth-first visit requires loading URLs from virtual queues, but keeping the front large might call for reading URLs from the sieve to discover new hosts. The distributor privileges refilling the queues of the workbench with URLs from the virtualizer, because this keeps the visit closer to an exact breadth-first one. However, if no refill has to be performed and the front is not large enough, the distributor will read from the sieve, hoping to find new hosts that make the front larger.
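The distributor's balancing act can be compressed into a few lines of Java. This is our reading of the policy, not actual BUbiNG code; `frontSize`, `requiredFrontSize` and the queue types are illustrative placeholders:

```java
import java.util.ArrayDeque;
import java.util.Queue;

/** Sketch of the distributor's policy: refill first, enlarge the front second. */
final class ToyDistributor {
    long requiredFrontSize = 1000; // grown dynamically as threads are seen waiting
    final Queue<String> refillQueue = new ArrayDeque<>(); // hosts whose workbench queue ran dry
    final Queue<String> sieve = new ArrayDeque<>();       // URLs in first-appearance order

    long frontSize() { return 0; } // placeholder: number of hosts ready for download
    void refillFromVirtualizer(String host) { /* move on-disk URLs back to the workbench */ }
    void enqueueToWorkbenchOrVirtualizer(String url) { /* new host? the front grows */ }

    void step() {
        String host = refillQueue.poll();
        if (host != null) {
            refillFromVirtualizer(host);      // priority 1: stay close to breadth-first
        } else if (frontSize() < requiredFrontSize) {
            String url = sieve.poll();        // priority 2: find new hosts, enlarge the front
            if (url != null) enqueueToWorkbenchOrVirtualizer(url);
        }
    }

    /** Called when a fetching thread had to wait although the front was large enough. */
    void onFetchingThreadStarved() { requiredFrontSize++; }
}
```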
When the distributor reads a URL from the sieve, the URL can either be put in the workbench or written to a virtual queue, depending on whether there are already URLs on disk for the same host, and on the number of URLs per IP address that should be in the workbench to keep it full, but not overflowing, when the front has the required size. Configurability To make BUbiNG capable of a versatile set of tasks and behaviors, every crawling phase (fetching, parsing, following the URLs of a page, scheduling new URLs, storing pages) is controlled by a filter, a Boolean predicate that determines whether a given resource should be accepted or not. Filters can be configured both at startup and at runtime, allowing for very fine-grained control. The type of objects a filter considers is called the base type of the filter. In most cases, the base type is going to be a URL or a fetched page. More precisely, a prefetch filter is one that has a BUbiNG URL as its base type (typically: to decide whether a URL should be scheduled for a later visit, or should be fetched); a postfetch filter is one that has a fetched response as its base type and decides whether to do something with that response (typically: whether to parse it, to store it, etc.). Heuristics Some classes of BUbiNG contain the distillation of heuristics we have developed in almost twenty years of work with web crawling. One important such class is BURL (a short name for "BUbiNG URL"), which is the class responsible for parsing and normalizing URLs found in web pages. The topic of parsing and normalization is much more involved than one might expect: very recently, the failure to build a sensible web graph from the ClueWeb09 collection stemmed in part from the lack of suitable normalization of the URLs involved. BURL takes care of fine details such as escaping and de-escaping (when unnecessary) of non-special characters, and case normalization of percent-escapes. Distributed crawling BUbiNG's crawling activity can be distributed by running several agents over multiple machines. Similarly to UbiCrawler [Boldi et al. 2004], all agents are identical instances of BUbiNG, without any explicit leadership: all the data structures described above are part of each agent. URL assignment to agents is entirely configurable. By default, BUbiNG uses just the host to assign a URL to an agent, which ensures that no two agents can crawl the same host at the same time. Moreover, since most hyperlinks are local, each agent will itself be responsible for the large majority of URLs found in a typical HTML page [Olston and Najork 2010]. Assignment of hosts to agents is by default performed using consistent hashing [Boldi et al. 2004]. Table I. Comparison between BUbiNG and the main existing open-source crawlers. Resources are HTML pages for ClueWeb09 and IRLBot, but include other data types (e.g., images) for ClueWeb12. For reference, we also report the throughput of IRLBot [Lee et al. 2009], although the latter is not open source. Note that ClueWeb09 was gathered using a heavily customized version of Nutch. Communication of URLs between agents is handled by the message-passing methods of the JGroups Java library; in particular, to make communication lightweight, URLs are by default distributed using UDP. More sophisticated communication between agents relies on the TCP-based JMX Java standard remote-control mechanism, which exposes most of the internal configuration parameters and statistics. Almost all crawler structures are indeed modifiable at runtime.
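The default host-to-agent assignment, consistent hashing, keeps most assignments stable when agents join or leave. A minimal Java sketch of the idea follows; the replica count, the hash function and all names here are our own choices, not BUbiNG's:

```java
import java.nio.charset.StandardCharsets;
import java.util.SortedMap;
import java.util.TreeMap;
import java.util.zip.CRC32;

/** Minimal consistent-hashing ring: hosts map to the first agent clockwise from their hash. */
final class ToyConsistentHashing {
    private final SortedMap<Long, String> ring = new TreeMap<>();
    private static final int REPLICAS = 64; // virtual nodes per agent, for balance

    private static long hash(String s) {
        CRC32 crc = new CRC32(); // illustrative; a stronger hash would be used in practice
        crc.update(s.getBytes(StandardCharsets.UTF_8));
        return crc.getValue();
    }

    void addAgent(String agent) {
        for (int i = 0; i < REPLICAS; i++) ring.put(hash(agent + "#" + i), agent);
    }

    void removeAgent(String agent) {
        for (int i = 0; i < REPLICAS; i++) ring.remove(hash(agent + "#" + i));
    }

    String agentFor(String host) {
        SortedMap<Long, String> tail = ring.tailMap(hash(host));
        Long key = tail.isEmpty() ? ring.firstKey() : tail.firstKey(); // wrap around the ring
        return ring.get(key);
    }

    public static void main(String[] args) {
        ToyConsistentHashing ch = new ToyConsistentHashing();
        ch.addAgent("agent-0"); ch.addAgent("agent-1"); ch.addAgent("agent-2");
        System.out.println(ch.agentFor("example.com")); // stable under agent churn
    }
}
```

When an agent is removed, only the hosts that mapped to its virtual nodes are reassigned, which is exactly the fault-tolerance property the text appeals to.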
EXPERIMENTS Testing a crawler is a delicate, intricate, arduous task: every real-world experiment is obviously influenced by the hardware at one's disposal (in particular, by the available bandwidth). Moreover, real-world tests are difficult to repeat many times with different parameters: one would either end up disturbing the same sites over and over again, or choose to visit a different portion of the web every time, with the risk of introducing artifacts in the evaluation. Given these considerations, we ran two kinds of experiments: one batch was performed in vitro, with an HTTP proxy simulating network connections towards the web and generating fake HTML pages (with a configurable behavior that includes delays, protocol exceptions, etc.), and another batch of experiments was performed in vivo. In vitro experiments: BUbiNG To verify the robustness of BUbiNG when varying some basic parameters, such as the number of fetching threads or the IP delay, we ran some in vitro simulations on a group of four machines sporting 64 cores and 64 GB of core memory. In all experiments, the number of parsing and DNS threads was fixed, and set to 64 and 10 respectively. The size of the workbench was set to 512 MB, while the size of the sieve was set to 256 MB. We always set the host politeness delay equal to the IP politeness delay. Every in vitro experiment was run for 90 minutes. Fetching threads. The first thing we wanted to test was that increasing the number of fetching threads yields better usage of the network, and hence a larger number of requests per second, until the bandwidth is saturated. The results of this experiment are shown in Figure 3 and have been obtained by having the proxy simulate a network that saturates quickly, using no politeness delay. The behavior visible in the plot tells us that an increase in the number of fetching threads yields a linear increase in speed until the available (simulated) bandwidth is reached; after that, the number of requests stabilizes to a plateau. This part of the plot, too, tells us something: after saturating the bandwidth, we do not see any decrease in throughput, showing that our infrastructure does not hinder the crawl. Politeness. The experiment described so far uses a small number of fetching threads, because the purpose was to show what happens before saturation. Now we show what happens under a heavy load. Our second in vitro experiment keeps the number of fetching threads fixed but increases the amount of politeness, as determined by the IP delay. We plot BUbiNG's throughput as the IP delay (hence the host delay) increases in Figure 4 (top): to maintain the same throughput, the front size (i.e., the number of hosts being visited in parallel) must increase, as expected. Moreover, this is independent of the number of threads (of course, until the network bandwidth is saturated). In the same figure we show that the average throughput is independent of the politeness (and almost independent of the number of fetching threads), and the same is true of the CPU load.
Although this fact might seem surprising, it is the natural consequence of two observations: first, even with a small number of fetching threads, BUbiNG always tries to fill the bandwidth and to maximize its usage of computational resources; second, even when varying the IP and host delay, BUbiNG modifies the number of hosts under visit to tune the interleaving of their processing. Raw speed. We wanted to test the raw speed of a cluster of BUbiNG agents. We thus ran four agents using 1 000 fetching threads, until we gathered one billion pages, averaging 40 600 pages per second on the whole cluster. We also ran the same test on a single machine, obtaining essentially the same per-machine speed, showing that BUbiNG scales linearly with the number of agents in the cluster. Testing for bottlenecks: no I/O. Finally, we wanted to test whether our lock-free architecture was actually able to sustain a very high degree of parallelism. To do so, we ran a no-I/O test on a 40-core workstation. The purpose of the test was to stress the computation and contention bottlenecks in the absence of any interference from I/O: thus, input from the network was generated internally using the same logic as our proxy, and while data was fully processed (e.g., compressed), no actual storage was performed. After 100 million pages, the average speed was 16 000 pages/s (peak 22 500) up to 6 000 threads. We detected the first small decrease in speed (15 300 pages/s, peak 20 500) at 8 000 threads, which we believe is physiological, due to increased context switching and Java garbage collection. Fig. 4. The average size of the front, the average number of requests per second, and the average CPU load with respect to the IP delay (the host delay is set to eight times the IP delay). Note that the front adapts linearly to the growth of the IP delay, and, due to the essentially unlimited bandwidth of the proxy, the number of fetching threads is almost irrelevant. In vitro experiments: Heritrix To provide a comparison of BUbiNG with another crawler in a completely equivalent setting, we ran a raw-speed test using Heritrix 3.2.0 on the same hardware as in the BUbiNG raw-speed experiment, always using a proxy with the same setup. We configured Heritrix to use the same amount of memory, 20% of which was reserved for the Berkeley DB cache. We used 1 000 threads, locked the politeness interval to 10 seconds regardless of the download time (by default, Heritrix uses an adaptive scheme), and enabled content-based duplicate detection. The results obtained will be presented and discussed in Section 5.4. In vivo experiments We performed a number of experiments in vivo at different sites. The main problem we had to face is that a single BUbiNG agent on sizable hardware can saturate a 1 Gb/s geographic link, so, in fact, we were initially unable to perform any test in which the network was not capping the crawler. Finally, iStella, an Italian commercial search engine, provided us with a 48-core, 512 GB RAM machine with a 2 Gb/s link. The results confirm the knowledge we gathered with our in vitro experiments: in the iStella experiment we were able to keep a steady download speed of 1.2 Gb/s using a single BUbiNG agent crawling the .it domain. The overall CPU load was about 85%. Comparison When comparing crawlers, many measures are possible, and depending on the task at hand, different measures might be suitable. For instance, crawling all types of data (CSS, images, etc.)
usually yields a significantly higher throughput than crawling just HTML, since HTML pages are often rendered dynamically, sometimes causing a significant delay, whereas most other types are served statically. The crawling policy also has a huge influence on the throughput: prioritizing by indegree (as IRLBot does [Lee et al. 2009]) or by alternative importance measures shifts most of the crawl to sites hosted on powerful servers with large-bandwidth connections. Ideally, crawlers should be compared on a crawl with a given number of pages, performed in breadth-first fashion from a fixed seed, but some crawlers are not available to the public, which makes this goal unattainable. In Table I we gather some evidence of the excellent performance of BUbiNG. Part of the data is from the literature, and part has been generated during our experiments. First of all, we report performance data for Nutch and Heritrix from the recent crawls made for the ClueWeb project (ClueWeb09 and ClueWeb12). The figures are those available in [Callan 2012], along with those found in [Clu 2009] and http://boston.lti.cs.cmu.edu/crawler/crawlerstats.html; notice that the data we have about those collections are sometimes slightly contradictory (we report the best figures). The comparison with the ClueWeb09 crawl is somewhat unfair (the hardware used for that dataset was "retired search-engine hardware"), whereas the comparison with ClueWeb12 is more unbiased, as the hardware used was more recent. We also report the throughput declared by IRLBot [Lee et al. 2009], albeit the latter is not open source and the downloaded data is not publicly available. Then, we report experimental in vitro data about Heritrix and BUbiNG obtained, as explained in the previous section, using the same hardware, a similar setup, and an HTTP proxy generating web pages. These figures are the ones that can be compared most appropriately. Finally, we report the data of the iStella experiment. The results of the comparison show quite clearly that the speed of BUbiNG is several times that of IRLBot and one to two orders of magnitude larger than that of Heritrix or Nutch. All in all, our experiments show that BUbiNG's adaptive design provides a very high throughput, in particular when strong politeness is desired: indeed, from our comparison, the highest throughput. The fact that the throughput can be scaled linearly just by adding agents makes it by far the fastest crawling system publicly available. THREE DATASETS As a stimulating glimpse into the capabilities of BUbiNG to collect interesting datasets, we describe the main features of three snapshots collected with different criteria. All snapshots contain about one billion unique pages (the actual crawls are significantly larger, due to duplicates). uk-2014: a snapshot of the .uk domain, taken with a limit of 10 000 pages per host, starting from the BBC website. eu-2015: a "deep" snapshot of the national domains of the European Union, taken with a limit of 10 000 000 pages per host, starting from europa.eu. gsh-2015: a general "shallow" worldwide snapshot, taken with a limit of 100 pages per host, again starting from europa.eu. The uk-2014 snapshot follows the tradition of our laboratory of taking snapshots of the .uk domain for linguistic uniformity, and to obtain a regional snapshot. The second and third snapshots aim at exploring the difference in degree distribution and in website centrality between two very different kinds of data-gathering activities.
In the first case, the limit on the pages per host is so large that, in fact, it was never reached; the result is a quite faithful "snowball sampling", due to the breadth-first nature of BUbiNG's visits. In the second case, we aim at maximizing the number of collected hosts by downloading very few pages per host. One of the questions we are trying to answer using the latter two snapshots is: how much does the indegree distribution depend on the cardinality of sites (root pages have an indegree usually at least as large as the site size), and how much does it depend on inter-site connections? The main data, and some useful statistics about the three datasets, are shown in Table II. Among these, we have the average number of links per page (average outdegree) and the average number of links per page whose destination is on a different host (average external outdegree). Moreover, concerning the graph induced by the pages of our crawls, we also report the average distance, the harmonic diameter (i.e., the harmonic mean of all the distances), and the percentage of reachable pairs of pages in this graph (i.e., pairs of nodes (x, y) for which there exists a directed path from x to y). Degree distribution The indegree and outdegree distributions are shown in Figures 5, 6, 7 and 8. We provide both a degree-frequency plot decorated with Fibonacci binning [Vigna 2013], and a degree-rank plot to highlight the tail behaviour with more precision. From Table II, we can see that pages at low depth tend to have fewer outlinks, but more external links, than inner pages. Their content is similarly smaller (content lives deeper in the structure of websites). Not surprisingly, moreover, pages of the shallow snapshot are closer to one another. The most striking feature of the indegree distribution is an answer to our question: the tail of the indegree distribution is, by and large, shaped by the number of intrahost inlinks of root pages. This is very visible in the uk-2014 snapshot, where limiting the host size at 10 000 causes a sharp step in the degree-rank plot; the same happens at 100 for gsh-2015. But what is maybe even more interesting is that the visible curvature of eu-2015 is almost absent from gsh-2015. Thus, if the latter (being mainly shaped by inter-host links) has some chance of being a power law, as proposed by the class of "rich get richer" models, the former has none. Its curvature clearly shows that the indegree distribution is not a power law (a phenomenon already noted in the analysis of the Common Crawl 2012 dataset [Meusel et al. 2015]): fitting it with the method of Clauset, Shalizi and Newman gives a p-value < 10^-5 (and the same happens for the top-level domain graph). Tables III, IV and V report centrality data about our three snapshots. Since the page-level graph gives rise to extremely noisy results, we computed the host graph and the top-level domain graph. In the first graph, a node is a host, and there is an arc from host x to host y if some page of x points to some page of y. The second graph is built similarly, but now a node is a set of hosts sharing the same top-level domain (TLD). The TLD of a URL is determined from its host using the Public Suffix List published by the Mozilla Foundation, and it is defined as one dot level above the public suffix of the host: for example, a.com for b.a.com (as .com is on the public suffix list) and c.co.uk for a.b.c.co.uk (as .co.uk is on the public suffix list).
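For reference, both harmonic measures used in this section can be written compactly, with d(x, y) the directed distance from x to y (infinite when y is unreachable from x, so unreachable pairs contribute zero); these are the standard definitions:

```latex
% Harmonic centrality of a node y (used below): the sum of reciprocal distances towards y.
H(y) = \sum_{x \neq y} \frac{1}{d(x, y)}

% Harmonic diameter of an n-node graph (reported in Table II): the harmonic mean
% of all n(n-1) pairwise distances.
D_H = \frac{n(n-1)}{\sum_{x \neq y} \frac{1}{d(x, y)}}
```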
For each graph, we display the top ten nodes by indegree, by PageRank (with constant preference vector and α = 0.85) and by harmonic centrality, the harmonic mean of all distances towards a node. PageRank was computed with the highest possible precision in IEEE format using the LAW library, whereas harmonic centrality was approximated using HyperBall [Boldi and Vigna 2013]. Centrality Besides the obvious shift of importance (UK government sites for uk-2014, government/news sites in eu-2015 and large US companies in gsh-2015), we can confirm the results of [Meusel et al. 2015]: on these kinds of graphs, harmonic centrality is much more precise and less prone to spam than indegree or PageRank. In the host graphs, almost all results of indegree and most results of PageRank are spam or service sites, whereas harmonic centrality identifies sites of interest (in particular in uk-2014 and eu-2015). At the TLD level, noise decreases significantly, but the difference in behavior is still striking, with PageRank and indegree still displaying several service sites, hosting providers and domain sellers as top results. CONCLUSIONS In this paper we have presented BUbiNG, a new distributed open-source Java crawler. BUbiNG is orders of magnitude faster than existing open-source crawlers, scales linearly with the number of agents, and will provide the scientific community with a reliable tool to gather large datasets. The main novel ideas in the design of BUbiNG are: -a pervasive usage of modern lock-free data structures to avoid contention among I/O-bound fetching threads; -a new data structure, the workbench, that is able to provide in constant time the next URL to be fetched, respecting politeness both at the host and IP level; -a simple but effective virtualizer, a memory-mapped, on-disk store of FIFO queues of URLs that do not fit into memory. BUbiNG pushes software components to their limits by using massive parallelism (typically, several thousand fetching threads); the result is a beneficial fallout on all related projects, as witnessed by several enhancements and bug reports to important software libraries like the Jericho HTML parser and the Apache Software Foundation HTTP client, in particular in the area of object creation and lock contention. In some cases, like a recent regression bug in the ASF client (JIRA issue 1461), it was exactly BUbiNG's high parallelism that made it possible to diagnose the regression. Future work on BUbiNG includes integration with spam-detection software and proper handling of spider traps (especially, but not only, those consisting of infinite non-cyclic HTTP redirects); we also plan to implement policies for IP/host politeness throttling based on download times and site branching speed, and to integrate BUbiNG with different stores like HBase, HyperTable and similar distributed storage systems. As briefly mentioned, it is easy to let BUbiNG follow a different priority order than breadth-first, provided that the priority is per host and per agent; the latter restriction can be removed at a moderate inter-agent communication cost. Prioritization at the level of URLs requires deeper changes in the inner structure of visit states and may be implemented using, for example, Berkeley DB as a virtualizer: this idea will be a subject of future investigations.
Another interesting direction is the integration with recently developed libraries that provide fibers, a user-space, lightweight alternative to threads that might further increase the amount of parallelism available with our synchronous I/O design.
Binding of interleukin-8 to heparan sulphate enhances cervical maturation in rabbits Cervical ripening is a cytokine-triggered process with substantial remodelling of the cervical extracellular matrix. Interleukin-8 (IL-8) is an important cytokine in cervical maturation. Glycosaminoglycans are also involved in this process, but their role is not clearly understood. The effects of heparan sulphate (HS), hyaluronic acid (HA), IL-8, HS + IL-8 and HA + IL-8 on the biochemical properties of the cervix were examined in non-pregnant rabbits. The changes in vascular pattern and collagen structure of the cervices and immunohistochemical findings, together with the relative collagen concentrations, were determined. A reduction in relative collagen concentration was significant after HS + IL-8, IL-8 and HA + IL-8 treatment (all P < 0.0001). Gel electrophoresis analysis showed that IL-8 bound preferentially to HS rather than to HA. Neutrophils were significantly increased in number (P < 0.0001) and located predominantly beneath the glandular epithelium and around the blood vessels after HS + IL-8 treatment. HS + IL-8 treatment caused cervices to increase their water content and become oedematous. The collagen fibres were considerably dissociated, the interfibrillar spaces markedly dilated, and the blood vessels notably increased and dilated. We conclude that binding to HS enhances the activity of IL-8 in inducing cervical maturation. (Measurements used a computer analyser system: ARGUS-100, Hamamatsu Photonics, Japan. The principle of picrosirius red staining is that the greater the collagen concentration, the greater the birefringence, and hence the greater the percentage of light.) Introduction The non-pregnant cervix is a fibrous structure which undergoes modifications during pregnancy to allow sufficient softening and growth for the passage of the fetus at birth (Harkness and Harkness, 1959; Friedman, 1980; Calder, 1981; Fitzpatrick, 1981). It is well established that the transformation of the uterine cervix into a soft and compliant structure during pregnancy is crucial for a normal delivery. This ripening process can be explained mainly in terms of connective tissue biology (Huszar and Walsh, 1991). Investigations of the structural alterations of cervical connective tissue during pregnancy and labour have focused predominantly on quantitative changes of the collagen content and the role of collagen-degrading enzymes (Ito et al., 1979; Danforth, 1980; Uldbjerg et al., 1983). However, there are strong indications that other macromolecular components of the cervix, such as proteoglycans (PG), may also be involved (von Maillot et al., 1979; Kitamura et al., 1980). PG are core proteins with covalently attached glycosaminoglycan (GAG) chains (Ruoslahti, 1988). In addition to heparin/heparan sulphate (HS), chondroitin sulphate/dermatan sulphate, keratan sulphate and hyaluronic acid (HA) have also been isolated from the human cervix (von Maillot et al., 1979; Cabrol et al., 1980; Kitamura et al., 1980; Uldbjerg et al., 1983). A marked increase in the GAG content of the cervix during late pregnancy has been demonstrated in rats, rabbits, sheep and humans (Danforth et al., 1974; Golichowski et al., 1980; Fosang et al., 1984; Maradny et al., 1997). In particular, the cervical content of HS increases markedly during late pregnancy.
It has been shown that HS is associated with the cell surface, together with various glycoproteins, and is thought to be located predominantly in vessel walls (Caplan and Hascall, 1980; Kitamura et al., 1980; Scott, 1986; Isemura et al., 1987). Consequently, an increase in this GAG may reflect improved cervical vascularization in the fully ripened cervix during labour (Rath et al., 1988). Recently, several reports have emphasized the role of interleukin-8 (IL-8), a proinflammatory cytokine (also known as a chemokine) that is produced by a variety of cells, mainly monocytes/macrophages, but also by fibroblasts and choriodecidual cells (Uchiyama et al., 1992). IL-8 attracts and activates neutrophils and lymphocytes and stimulates the release of storage enzymes and toxic metabolites from neutrophils (Peveri et al., 1988; Kelly et al., 1994). IL-8 has been detected in increasing amounts in the amniotic fluid of term pregnancies, and its production has also been demonstrated in vitro in the human cervix (Romero et al., 1991; Barclay et al., 1993; Axemo et al., 1996). IL-8 is one of the factors which can ripen the cervix in a manner similar to the physiological process at term (Maradny et al., 1994). There is also evidence that IL-8 plays an important role in promoting cervical modification in late pregnancy (Barclay et al., 1993). One class of molecules likely to play a role in mediating tissue-specific associations of chemokines is the GAG. Chemokines such as IL-8 are known to bind to GAG, especially heparin (Talpas et al., 1993), and it has been suggested that heparin-related (heparan sulphate) GAG on endothelial cell surfaces localize and present chemokines to selectin-bound leukocytes (Butcher, 1991). One study demonstrated that chemokines may be bound to cell-surface heparan sulphate in vitro (Huber et al., 1991), while another report suggested that HS, which is present on the endothelial cell surface and in the basement membrane, may promote IL-8-dependent transmigration of neutrophils and thus enhance the activity of IL-8 (Webb et al., 1993). Based on these reports, it seems important to evaluate the binding effect of HS to IL-8 in the cervical ripening process. The purpose of this study was to determine the effect of HS as an enhancer of IL-8 activity on cervical ripening.

Materials and methods

Forty sexually mature Japanese White female non-pregnant rabbits (SLC, Hamamatsu, Japan) of bodyweight 3.0-3.5 kg were used. Rabbits were caged under controlled light and temperature and were given Purina rabbit chow (Clea Japan Inc., Tokyo, Japan) and water ad libitum. The rabbits were comparable in terms of age and bodyweight. Vaginal suppositories were prepared with 500 µl Witepsol-50 base (WT-50, Adeps solidus; Mitsuba Co., Tokyo, Japan), a product of cocoa butter. The suppositories thus prepared were conical in shape, quick to melt at 37°C, and showed poor attachment to the tissues. Release of drug from the suppository, when measured by the recommended method (Shintani, 1975), was found to occur rapidly. All animal experiments described were approved by the Research Committee of Laboratory Animals of Hamamatsu University School of Medicine.

Experimental design

Three approaches were used in this protocol. In the first approach, experiments were performed to determine the binding affinity of HS or HA to IL-8 using gel electrophoresis.
In the second, rabbits were treated with placebo or with recombinant IL-8, HS, HA, HS + IL-8 or HA + IL-8 to investigate the effects on cervical maturation. In the third approach, experiments were conducted to assess the dose-dependent effect of HS binding to IL-8 on cervical ripening.

Binding effects of HS or HA to IL-8 on cervical maturation

A series of five experiments was performed in rabbits (n = 5 per group). Suppositories were administered vaginally once daily for 3 days to each animal to examine the effect of: (i) IL-8 on cervical maturation (suppositories contained 200 ng IL-8); (ii) HS on cervical growth (suppositories contained 1 mg HS); (iii) HS + IL-8 on cervical growth (suppositories contained 1 mg HS + 200 ng IL-8); (iv) HA on cervical dilatation (suppositories contained 1 mg HA); and (v) HA + IL-8 on cervical ripening (suppositories contained 1 mg HA + 200 ng IL-8). A control experiment was performed in five rabbits given suppositories containing 500 µl WT-50 base once daily for 3 days.

Dose-dependent effect of HS binding to IL-8 on cervical maturation

Rabbits were allocated to two groups (n = 5 per group) and treated once daily for 3 days with suppositories containing either 0.1 mg HS + 200 ng IL-8 or 1 mg HS + 200 ng IL-8. Animals were killed 24 h after receipt of the last suppository. The reproductive tract of each animal was immediately located and the cervices excised. One cervix from each animal was used to measure water content, followed by measurement of collagenase, elastase and gelatinase activities; the other cervix was fixed in 10% buffered formalin for 24 h. Following paraffin embedding (Technovit 7100; Kulzer GmbH, Germany), blocks were serially sectioned at a thickness of 5 µm using a Reichert-Jung 2050 motorized automatic microtome. Sections were stained with either haematoxylin followed by a 1% eosin counterstain (H/E), or with picrosirius red. Measurements of water content; collagenase, elastase and gelatinase activity; neutrophil infiltration in the cervical tissue; and histological changes were performed to assess cervical maturation.

Determination of water content

The water content of each cervix was measured using an IM-3SCV device (Fuji Technica Co., Osaka, Japan), an infrared spectrophotometric technique that compares absorbance at a wavelength of 1.45 µm with that at reference wavelengths of 1.3 µm and 1.6 µm (Sumimoto and Terau, 1993). Five different points on the cervix were measured and the mean was calculated.

Measurement of collagenase, elastase and gelatinase activity

Cervices were homogenized in 1 ml ice-cold phosphate-buffered solution (PBS), pH 7.6. After extraction twice by freeze-thawing, samples were sonicated (30 W, 120 pulses, 30% duty, W-220 type; Heat Systems Ultrasonics, NY, USA) and centrifuged at 10 000 g for 20 min at 4°C. The supernatant was used to measure collagenase, elastase and gelatinase activity as described previously (Maradny et al., 1997). Collagenase activity was estimated using a highly specific kit (collagenase type 1 activity measurement; Yagai Co., Cosmo-Bio, Tokyo, Japan), whereas elastase activity was determined by a specific chromogenic substrate for granulocyte elastase, S-2484 (L-pyroglutamyl-L-prolyl-L-valine-p-nitranilide; KABI Diagnostic, Mölndal, Sweden). Gelatinase activity was measured using highly specialized kits (gelatinase activity measurement; Yagai Co.). One unit of activity was defined as the quantity of enzyme that digested 1 mg of substrate in 1 min.

Immunohistochemistry of the cervix

Paraffin-embedded tissue was deparaffinized in xylene baths, rehydrated through graded 95% alcohol, and finally rinsed in PBS, pH 7.2. Endogenous peroxidase was blocked by fixation in 3% hydrogen peroxide in methanol for 20 min at 23°C. Bovine serum albumin (2%) dissolved in PBS (BSA-PBS) was applied to the slides for 1 h. Anti-rabbit RT2 (1:100) monoclonal antibodies (Cedarlane Laboratories Ltd, Hornby, Canada) were added to the sections and incubation continued overnight at 4°C, followed by PBS rinsing (Ponsard et al., 1986). The secondary antibody (goat anti-rabbit; DAKO, California, USA) was applied for 2 h at room temperature, followed by a PBS rinse. Avidin-biotin-peroxidase complex (DAKO) was added for 1 h, followed by washing with PBS. The sections were counterstained in haematoxylin, dehydrated and then examined under light microscopy (×200 magnification). Negative controls received the same treatment, but using non-immune mouse serum instead of primary antibody. Neutrophils in the tissue were used as a positive control, the number of neutrophils being estimated by counting the number of stained extravascular cells within a lined grid (10 × 10 squares) occupying an area on the section of 0.125 mm², using a ×20 objective and ×10 eyepiece. Neutrophils were counted in one cervix from each animal (five to seven randomly chosen areas) and the mean was calculated.

Assay of relative collagen concentration

The relative collagen concentration was assessed histologically after staining with picrosirius red (Sirius red F3BA; Chroma-Gesellschaft Schmid GmbH, Köngen, Germany), as described previously and validated as a histological method to determine the polymerized collagen concentration of tissues, including the cervix (Junqueira et al., 1979). The histological analysis was performed by measuring the optical density (percentage polarized light transmission) from five random fields of the connective tissue of each biopsy, and the mean optical density was calculated. An image analyser was employed for all histological measurements (microscope, Olympus IMT-2; videocamera, SIT C2400-80; computer analyser system, ARGUS-100, Hamamatsu Photonics, Japan). The principle of picrosirius red staining is that, the greater the collagen concentration, the greater the birefringence and hence the greater the percentage of light transmission.

Statistical analysis

Data are expressed as mean ± SD. Analysis of variance (ANOVA) for repeated measurements followed by Scheffé's F analysis was used for multiple comparisons. A P value of < 0.05 was considered significant for all comparisons.
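As a rough illustration of this analysis, the sketch below implements a one-way ANOVA error term with Scheffé pairwise comparisons in Python. It is a simplified, hypothetical reconstruction, not the authors' software: the repeated-measures structure mentioned above is omitted, and the example readings are invented placeholders rather than data from this study.

```python
import numpy as np
from scipy import stats

def scheffe_pairwise(groups, alpha=0.05):
    """Scheffe pairwise comparisons based on the one-way ANOVA error term."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    # Within-group (error) mean square from the ANOVA decomposition
    ss_within = sum(((np.asarray(g) - np.mean(g)) ** 2).sum() for g in groups)
    ms_within = ss_within / (n_total - k)
    f_crit = stats.f.ppf(1 - alpha, k - 1, n_total - k)
    significant = {}
    for i in range(k):
        for j in range(i + 1, k):
            diff = np.mean(groups[i]) - np.mean(groups[j])
            se2 = ms_within * (1 / len(groups[i]) + 1 / len(groups[j]))
            f_scheffe = diff ** 2 / ((k - 1) * se2)  # Scheffe F statistic
            significant[(i, j)] = f_scheffe > f_crit  # True -> P < alpha
    return significant

# Hypothetical water-content readings (%) for three of the groups (n = 5 each):
control = [78.1, 79.0, 77.5, 78.8, 78.3]
il8     = [81.2, 82.0, 80.9, 81.7, 81.4]
hs_il8  = [84.5, 85.1, 84.0, 85.4, 84.8]
print(scheffe_pairwise([control, il8, hs_il8]))
```

Scheffé's criterion is conservative for the many pairwise contrasts reported below, which suits a design with six treatment groups of equal size.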
Results

Figure 1a shows the agarose electrophoretic pattern of IL-8 (band 1), HS (band 2), HA (band 3), HS (1 mg) + IL-8 (band 4) and HA (1 mg) + IL-8 (band 5); Figure 1b shows the patterns for HS (0.05 mg) + IL-8 (band 1), HS (0.1 mg) + IL-8 (band 2) and HS (1 mg) + IL-8 (band 3). The molecular weights of HS, HA and IL-8 are 400, 800 and 8 kDa, respectively. Therefore, 1 nmol of HS bound ~0.5 nmol of IL-8 under our experimental conditions. The binding of HA to IL-8 was weak compared with that of HS. Moreover, 0.05 mg and 0.1 mg of HS were insufficient to bind 10 µg IL-8 (Figure 1b, bands 1 and 2). The migration of HS + IL-8 was delayed compared with that of HA + IL-8, demonstrating that IL-8 exhibits substantial selectivity in its binding to HS.
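The ~0.5 nmol figure can be reproduced from the loaded amounts (1 mg HS, 10 µg IL-8) and the molecular masses quoted above; a minimal sketch of that arithmetic:

```python
# Back-of-envelope stoichiometry for the gel-shift result above.
MW_HS = 400e3     # heparan sulphate, g/mol (400 kDa)
MW_IL8 = 8e3      # interleukin-8, g/mol (8 kDa)

nmol_hs = 1e-3 / MW_HS * 1e9      # 1 mg HS loaded    -> 2.5 nmol
nmol_il8 = 10e-6 / MW_IL8 * 1e9   # 10 ug IL-8 loaded -> 1.25 nmol

print(nmol_il8 / nmol_hs)  # 0.5, i.e. ~0.5 nmol IL-8 per nmol HS
```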
Cervix water content

The influence of various treatments on cervix water content is shown in Figure 2. When compared with controls, the water content was significantly increased in rabbits treated with IL-8, HS and HA (P < 0.01, < 0.05 and < 0.05, respectively), but was increased to a greater extent when the cervix was treated with HS + IL-8 (P < 0.001) and HA + IL-8 (P < 0.003). Notably, the cervix water content was higher in HS + IL-8- than in HA + IL-8-treated rabbits.

Figure 1. The affinity of 1 mg HS to IL-8 was markedly higher than that of 1 mg HA. The movement of HS (1 mg) + IL-8 (band 4) was delayed compared with HA (1 mg) + IL-8 (band 5), demonstrating that IL-8 exhibits substantial selectivity in its binding to HS. Low concentrations (0.05 and 0.1 mg) of HS were insufficient to bind IL-8 (10 µg).

Figure 2. Water content (%) in rabbit cervices treated with HS + IL-8, HA + IL-8, IL-8, HS, HA, and in controls. Compared with controls, the water content was significantly increased with HS + IL-8, HA + IL-8 and IL-8 (P < 0.001, < 0.003 and < 0.01, respectively). The content in cervices treated with HS (P < 0.05) and HA (P < 0.05) was significantly increased compared with controls. Percent hydration was higher in cervices treated with HS + IL-8 compared with IL-8 and HA + IL-8. *, significantly different compared with controls; †, significantly different compared with HA treatment (n = 5, P < 0.05).

The cervix water content of rabbits treated with 200 ng IL-8, 0.1 mg HS + 200 ng IL-8 and 1 mg HS + 200 ng IL-8 is shown in Figure 3. Compared with controls, water content was significantly increased in animals given 200 ng IL-8 (P < 0.009), 0.1 mg HS + 200 ng IL-8 (P < 0.009) and 1 mg HS + 200 ng IL-8 (P < 0.002). Cervical water content was greater in rabbits given 1 mg HS + 200 ng IL-8 than in those given either 0.1 mg HS + 200 ng IL-8 (P < 0.01) or 200 ng IL-8 alone (P < 0.05), though no significant difference was found between the latter two groups.

Number and localization of neutrophils

Increased numbers of neutrophils were found in cervical biopsies from rabbits treated with either HS or HA (P < 0.004 or < 0.01) compared with controls (Figure 4). A remarkable increase in neutrophil number was seen in rabbits that had received HS + IL-8, HA + IL-8 and IL-8 by suppository (P < 0.0001, < 0.0001 and < 0.0001) compared with controls. Neutrophils were also increased in IL-8-treated cervices compared with HS (P < 0.05) or HA (P < 0.02). HS + IL-8 treatment resulted in a greater increase in neutrophils than did HA + IL-8 treatment (P < 0.05).

Figure 4. Compared with controls, the number of neutrophils was significantly increased in cervices treated with HS + IL-8, HA + IL-8 and IL-8 (P < 0.0001, < 0.0001 and < 0.0001, respectively). The increase in neutrophil numbers was more marked in cervices treated with HS + IL-8 (P < 0.05) than with HA + IL-8. Cervices treated with HS and HA also showed significant increases in neutrophils (P < 0.004 and < 0.01) compared with controls. *, significantly different compared with control; †, significantly different compared with HA treatment (n = 5, P < 0.05).

Figure 5 shows changes in the number of neutrophils identified by immunohistochemical staining. Stained neutrophils were located predominantly around the blood vessels and beneath the glandular epithelium (not shown in figure) in cervices treated with HS + IL-8 and IL-8 (Figure 5a and b) and HA + IL-8 (not shown), compared with controls (Figure 5d). Neutrophils were abundantly located in cervices treated with HS + IL-8 compared with IL-8 alone.
The overall number of neutrophils was considerably reduced with both HS (Figure 5c) and HA treatment (not shown).

Relative collagen concentrations

As shown in Figure 6, the relative collagen concentration in the HS + IL-8 treatment group was reduced by 69.2% (P < 0.0001) compared with controls, whereas HA + IL-8 treatment reduced collagen by 51.8% (P < 0.0001) and IL-8 treatment by 49.9% (P < 0.001). The collagen concentration in rabbits treated with HS + IL-8 was significantly reduced compared with HA + IL-8 treatment (P < 0.03), while concentrations in rabbits treated with HS and HA were each also significantly reduced compared with controls (P < 0.002 and < 0.01).

Figure 6. Concentrations were significantly reduced with HS + IL-8 (P < 0.0001), HA + IL-8 (P < 0.0001) and IL-8 (P < 0.0001), compared with controls. Collagen concentration in rabbits after HS + IL-8 treatment was markedly reduced compared with HA + IL-8-treated animals (P < 0.03). Cervices treated with HS and HA showed significant reductions in collagen concentration (P < 0.002 and < 0.01) compared with controls. *, significantly different compared with control; †, significantly different compared with HA treatment (n = 5, P < 0.05).

Collagenase, gelatinase and elastase activities

Changes in cervical collagenase, gelatinase and elastase activities are summarized in Table I. Collagenase activities were significantly increased in rabbits treated with HS + IL-8, HA + IL-8 and IL-8 (P < 0.0001, < 0.0001 and < 0.0001, respectively), compared with controls. Collagenase activity in HS + IL-8-treated rabbits was markedly increased compared with that in rabbits treated with HA + IL-8 (P < 0.04), as was activity in HS- and HA-treated rabbits (P < 0.005 and < 0.02). Gelatinase activities were significantly increased in cervices treated with HS + IL-8, HA + IL-8 and IL-8 (P < 0.0001, < 0.0001 and < 0.0001, respectively), compared with controls. However, gelatinase activities in HS + IL-8-treated cervices were markedly increased compared with HA + IL-8-treated tissues (P < 0.04). The difference in activity of cervical gelatinase was also statistically significant when the HS- and HA-treated rabbits were compared with controls (P < 0.002 and < 0.03). Cervical granulocyte elastase activities in HS + IL-8-, HA + IL-8- and IL-8-treated rabbits were significantly increased (P < 0.0001, < 0.0001 and < 0.0001, respectively), compared with controls. Furthermore, this activity was significantly greater in animals treated with HS + IL-8 than with HA + IL-8 (P < 0.009). Elastase activities were increased in rabbits treated with either HS or HA (P < 0.002 and < 0.04) when compared with controls.

Collagen study with picrosirius red staining

Examination of picrosirius red-stained sections of cervices disclosed various features (Figure 8). In HS + IL-8-treated cervices, the collagen fibres appeared to be much thinner and more spread out, but were irregular, with less compact and low-density collagen. The collagen fibres were not oriented in an orderly fashion, were irregularly separated one from another, and the interfibrillar spaces were markedly dilated due to oedematous changes (Figure 8a). By comparison, more orderly fibres were observed in cervices treated with IL-8 (Figure 8b) and HA + IL-8 (not shown). No significant changes in collagen structure were found in rabbits treated with HS (Figure 8c).
In control cervices (Figure 8d), the collagen fibres visualized by picrosirius red staining were well organized and densely packed in bundles. The collagen fibres were regularly separated one from another and oriented in an orderly fashion.

Morphology with haematoxylin and eosin staining

Remarkable changes in cervical histology after haematoxylin and eosin staining were found in rabbits after the various treatments (Figure 9). HS + IL-8 treatment (Figure 9a) resulted in the cervical lumens being dilated and the diameters of the whole cervices increased due to massive dilation of the blood vessels. The cervix walls were thin, and the density of the collagenous network was decreased and markedly loosened. The cervices of control rabbits (Figure 9d) showed dense and firmly closed cervical rings, with the collagenous networks in the lamina propria (as well as those of the inner circular and outer longitudinal smooth muscle layers) close together. The connective tissues were compact and the blood vessels small and non-dilated. More subtle changes were observed after treatment with IL-8 (Figure 9b) and HA + IL-8 (not shown) than after HS + IL-8. No significant changes were found after HS treatment alone (Figure 9c).

Figure 8. Picrosirius red-stained sections of rabbit cervices treated with HS + IL-8, IL-8, HS, and in controls. In HS + IL-8-treated cervices (a), the collagen fibres appeared to be much thinner and less compact, and a low density of collagen was observed. Comparatively more defined fibres were apparent in cervices treated with IL-8 (b). No significant changes were observed in cervices treated with HS (c). In control cervices (d), the collagen fibres were well organized and densely packed as bundles. (Picrosirius staining; scale bar = 0.05 mm).

Figure 9. Histological findings in rabbit cervices treated with HS + IL-8, IL-8, HS, and in controls. In HS + IL-8-treated cervices, the cervical lumens are dilated due to massive dilation of blood vessels (a). Cervices in control rabbits (d) were thick-walled, with the collagenous network lying close together. The connective tissues were compact and the blood vessels small and non-dilated. More subtle histological changes were seen in animals treated with IL-8 (b); no remarkable changes were seen in HS-treated cervices (c). (Haematoxylin and eosin staining; scale bar = 0.05 mm).

Discussion

This study demonstrated that the binding of IL-8 to HS enhances its activity in cervical connective tissue in rabbits. The most striking finding of the present study was that the changes in biochemical components in the cervix of rabbits were markedly influenced by HS + IL-8. In cervices treated with HS + IL-8, neutrophils, water content and collagenase, gelatinase and elastase activities were significantly increased, whereas the relative collagen concentration in the cervical tissue was markedly reduced. Moreover, in binding to IL-8, HS also influenced the growth and vascular pattern of the cervices in rabbits. More subtle changes in cervical connective tissue components, together with the histological findings, were observed in rabbits treated with HA + IL-8. HS has an important role in the structural alterations of the cervical connective tissue during pregnancy and parturition. HS is thought to be located predominantly in vessel walls (Caplan and Hascall, 1980), and increases dramatically during dilatation of the cervix at labour (Kitamura et al., 1980; Osmers et al., 1995).
The increased content of HS may also play a role in the dispersal of collagen fibrils, since HS chains have been shown to bind to collagen and may prevent fibril growth (Obrink, 1973; Scott and Orford, 1981). There is growing evidence that a cytokine network is present in gestational tissues and plays an important role during preterm and term parturition. Various uterine and embryonic cells, including the decidua and chorion (Kelly et al., 1994) and cervix fibroblasts (Uchiyama et al., 1992), are capable of producing inflammatory cytokines such as IL-8, IL-1 and tumour necrosis factor-alpha (TNF-α). These cytokines were also shown to increase the production of both IL-8 mRNA and IL-8 in endometrial stromal cells in vitro (Arici et al., 1996). It is well known that IL-8 is a potent chemotactic activator of neutrophils and induces the migration of neutrophils from the vessels to the surrounding connective tissue (Peveri et al., 1988). Previous studies have shown that application of the inflammatory cytokines IL-8, IL-1β and TNF-α induces cervical ripening (Chwalisz et al., 1994; Maradny et al., 1994, 1996), the effects of IL-8 having been shown to be more specific for the uterine cervix (Chwalisz et al., 1994). HS did not induce random or directed neutrophil migration and did not influence neutrophil activity; however, enhanced responsiveness resulted from its selective interaction with IL-8 (Webb et al., 1993). HS has a function in the promotion of IL-8-dependent transmigration of neutrophils across the endothelial barrier (Webb et al., 1993). It has been shown previously that IL-8 binds endothelial cells, possibly via surface proteoglycans, and it has been suggested that this interaction is important for IL-8-dependent neutrophil migration (Rot, 1992a,b). The C terminus of IL-8 has been implicated in heparin binding (Talpas et al., 1993), and also contains several basic residues. An acidic residue within a HS-binding site might be expected to increase the affinity for IL-8 (Witt and Lander, 1994). Gallagher and colleagues have shown that HS molecules have a domain structure with a higher degree of sulphation that permits tighter binding to IL-8 (Gallagher et al., 1986; Turnbull and Gallagher, 1991). It has been found that the backbone structure, sulphation and the arrangement of the charges are important in determining the capacity of GAG to bind to IL-8 and to modulate its activity on cells. The high heterogeneity in HS structure may allow a more refined tailoring of selective binding regions that may influence the biological activity and bioavailability of HS-binding chemokines (Presta et al., 1998). Decreased collagen concentrations and increased collagenase, gelatinase and elastase activities were found in rabbits treated with HS + IL-8. Furthermore, the increased water content in cervices after HS + IL-8 treatment accounts for the soft, swollen and fragile consistency of the ripened cervix (Osmers et al., 1995). An increased number of neutrophils was identified in cervices treated with HS + IL-8. In binding to IL-8, HS could play a key role in increasing the specificity of the activation step of neutrophil recruitment. Gel electrophoresis analysis indicated that IL-8 migrated as a coherent band at all protein concentrations. The failure of the HA front to form a tight band suggests binding heterogeneity; that is, a fraction of HS has substantially higher affinity to IL-8.
This result demonstrates that IL-8 exhibits substantial selectivity in its binding to HS (Witt and Lander, 1994). Moreover, IL-8 binds HA considerably less strongly than it binds HS. Our findings demonstrate that HA is an important factor in the process of cervical ripening and regulates the biochemical changes occurring in cervical tissues at term (Maradny et al., 1997). It was shown recently that the interactions of decorin (a small, sulphated proteoglycan) and collagen may be important for cervical dilatation. It has been suggested that cervical ripening is mainly due to changes in the decorin/collagen ratio, while increased hydration is due to HA (Rechberger et al., 1996). Nitric oxide (NO), a potent uterine relaxant, represents a powerful autocrine and/or paracrine mediator of cervical ripening. It is therefore likely that NO may be a component in the pathway of cervical ripening which acts in concert with prostaglandins (mainly prostaglandin E2) by activating metalloproteinases and other molecules involved in extracellular matrix remodelling and modulation of proteoglycan synthesis (Chwalisz et al., 1997). The present findings show that the HS + IL-8-induced changes in the biochemical composition and physical properties of the cervix tend to be much more dramatic. The binding effect of IL-8 to HS persists for a long time owing to the long-chain structure of HS; moreover, HS binding may strengthen the presentation of IL-8 to its receptors. Therefore, HS + IL-8 may be considered a complementary approach to cervical ripening, its action being more pronounced in the uterine cervix.

In conclusion, this study indicates that, in rabbits, binding to HS enhances neutrophil responses to IL-8 and plays an important role in promoting a decrease in the collagen concentration and increases in neutrophil number, water content and collagenase, gelatinase and elastase activities through IL-8-dependent neutrophil migration. Moreover, we suggest that the HS + IL-8-induced changes in cervical connective tissue components may account, at least in part, for cervical maturation during a normal delivery.
Paranasal mucoceles in children without cystic fibrosis: A case report

A mucocele is a slowly progressive cystic lesion of the paranasal sinuses secondary to obstruction of the sinus ostium. It is an extremely rare condition in the pediatric age group. The symptoms usually result from lesion expansion, inflammation, or compression of the adjacent structures. We report a case of an 11-year-old boy who presented with a right-side ethmoid mucocele with no known etiology and no history of cystic fibrosis. The patient underwent endoscopic sinus surgery for mucocele excision and abscess drainage. Clinicians are recommended to suspect paranasal mucoceles in patients presenting with progressive non-specific headache and orbital manifestations.

Introduction

Mucoceles are benign, gradually growing sacs filled with mucus and lined by upper respiratory epithelium. When the sinus ostium is obstructed, a mucocele develops and causes a gradual accumulation of secretions in the sinus cavity. [1] Mucoceles are capable of expansion via bony resorption and erosion, leading to displacement of the neighboring structures. The clinical picture of mucoceles depends on their extent of expansion and location. [2] Even though the exact etiology of a mucocele is not fully understood, several contributing factors have been identified in the literature. A pediatric mucocele is very rare and may go under-recognized. [3,4] We report a case of an 11-year-old boy with a right ethmoid sinus mucocele presenting as progressive headache and orbital symptoms. We discuss case management in light of the related cases in the literature.

Case Report

An 11-year-old boy presented with progressive headache and orbital symptoms. Computed tomography (CT) showed an expansile cystic lesion of the right ethmoid air cells, measuring 3.4 × 2.2 × 3 cm (AP × SS × CC), showing internal fluid-fluid levels with areas of bony erosion, protruding into the right orbital cavity, compressing the medial rectus muscle, and displacing the globe anterolaterally (mild proptosis), medially reaching the nasal septum, and likely obstructing the right maxillary sinus drainage with subsequent sinus opacification. The roof of the involved ethmoid sinus was thinned out with possible erosion [Figure 1]. The CT findings gave an impression of a mucocele. Magnetic resonance imaging (MRI) on T2 and T1 showed right ethmoid air cell expansion with an intermediate signal on T2 [Figures 2 and 3]. The MRI also showed compression of the right medial orbital wall with subsequent compression of the right medial rectus muscle, as well as flattening of the sclera [Figure 4]. The patient underwent endoscopic endonasal surgery. Using Kerrison forceps (right), the mass was opened, and a greenish discharge was drained from the mucocele. Complete ethmoidectomy was performed along with middle antrostomy and sphenoidotomy. The lesion extended more superiorly and laterally, causing erosions of the fovea and lamina papyracea, but the dura and periorbital fascia were intact, and no cerebrospinal fluid leak was encountered. Samples from the mucocele and discharge were taken and sent for histopathological examination. Fragments of fibrotic tissue and bone trabeculae were observed, with mucus extravasation into the surrounding tissues. The examination also revealed a pseudocyst with epithelioid macrophages (muciphages) forming the periphery, and mucous-secreting glands. Extracellular and interstitial mucin spaces were filled with mucous material. Granulation tissue pieces, a foreign body giant cell reaction, and a few leukocytes were also seen. These findings were consistent with an ethmoid mucocele.
On post-operative follow-up 2 months after surgery, CT showed no residual masses, and the overall clinical picture had improved [Figure 5].

Discussion

Paranasal mucoceles are benign, epithelial-lined pseudocysts filled with mucus. When the sinus ostium is obstructed, a mucocele develops and causes a gradual accumulation of secretions in the sinus cavity. [1] The resultant expansion of the affected sinus occurs via bony resorption and erosion and is considered a key finding in mucocele diagnosis. Local, orbital, and even intra-cranial complications are commonly seen in patients with mucoceles and may be caused by the lesion itself, compression of adjacent organs, or inflammation. [2] The case reported in this study presented mainly with orbital symptoms with no detectable etiology. Several risk factors are implicated in the etiology of the mucocele, including chronic sinusitis, facial trauma, surgery, neoplasia, cranio-facial malformations, allergy, and systemic diseases such as cystic fibrosis. [3,4] The underlying process of mucocele development is most likely long-term inflammation in an entirely blocked paranasal sinus. [5] In their case series of ten children with mucoceles managed over 10 years, Nicollas et al. found that a mucocele was associated with cystic fibrosis in six patients, with trauma in two patients, with chronic sinusitis in one patient, and with no etiological factor in one patient. [6] About 90% of mucoceles are unilateral and involve the frontal and ethmoid paranasal sinuses. The maxillary and sphenoid sinuses are affected less frequently (5-10%). The most affected population is young adults aged 20-40 years. [1] The pediatric age group is rarely affected, with cystic fibrosis being the main predisposing factor. Although the mechanism is not fully known, cystic fibrosis is hypothesized to cause mucoceles through impairment of mucociliary transport and stasis of mucus. [7] The literature on mucoceles in children shows that the most common location is the anterior ethmoid sinus, owing to the poor formation of the frontal sinuses in children. [5,6,8] This case replicates these findings. Imaging is an essential part of mucocele management. It provides diagnostic information as well as the origin and extension of the lesion. CT is the modality of choice, and it demonstrates a mass with surrounding bony erosion of the sinus wall. MRI, despite having little significance regarding bony details, can be used to rule out other lesions such as lymphoma and meningocele. [6] Our patient underwent successful endoscopic endonasal surgery. Because of its minimal invasiveness, low incidence of complications and morbidity, and the absence of facial scarring, the endoscopic approach is now the gold standard in childhood endonasal surgery. [2,6] Besides eradication of the mucocele, surgical treatment also prevents recurrence. There is growing evidence that the recurrence of a sinus mucocele is extremely rare after endoscopic management, with recurrence rates approaching 0%. [9][10][11][12] Because of varying follow-up durations, however, these findings should be considered with caution. No recurrence was observed in our case after 2 months following endoscopic management, and this short interval is a limitation of this study. Our patient improved significantly after surgery, and no evidence of recurrence was found on short-term follow-up by CT.

Conclusion

This case report is among a few in the literature on childhood mucoceles without cystic fibrosis.
Our patient presented with progressive headache and orbital symptoms, which resolved after endoscopic endonasal removal of the ethmoid mucocele. A high index of suspicion is critical in the early detection and management of pediatric mucoceles. CT and MRI are useful tools in diagnosis and treatment, and endoscopic surgery is effective with minimal post-operative sequelae.

Declaration of patient consent

After discussion with the parents of the patient, written informed consent was obtained for possible case publication. The authors declare that they have obtained all required patient consent forms, which included statements regarding possible publication of clinical data and graphs. The authors explained the process of de-identification to the parents and ensured anonymity.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.
Fog Water Collection: Challenges beyond Technology

The Sustainable Development Goal (SDG) 6, calling for access to safe water and sanitation for all by the year 2030, supports the efforts in water-scarce countries and regions to go beyond conventional resources and tap unconventional water supplies to narrow the water demand-supply gap. Among the unconventional water resources, the potential to collect water from the air, such as fog harvesting, is by far the most under-explored. Fog water collection is a passive, low-maintenance, and sustainable option that can supply fresh drinking water to communities where fog events are common. Because of the relatively simple design of fog collection systems, their operation and maintenance are minimal and the associated cost likewise; although, in certain cases, some financially constrained communities would need initial subsidies. Despite technology development and demonstrated benefits, there are certain challenges to fog harvesting, including lack of supportive policies, limited functional local institutions, inexpert communities, gender inequality, and perceived high costs without undertaking comprehensive economic analyses. By addressing such challenges, there is an opportunity to provide potable water in areas where fog intensity and duration are sufficient, and where the competition for clean water is intensifying because water resources are at a far distance or provided by expensive sources.

Introduction

Freshwater scarcity has increased over time [1,2] and is expected to further intensify [3,4], due to: uneven distribution of water resources and population densities; increasing demand for water due to population growth and mobility; changing diets; impacts of social change and economic growth on consumption preferences and lifestyles; and changing climate and rainfall patterns [5,6]. A plethora of options is available to improve water-use efficiency and productivity [7], but these may not be sufficient for the conventional water resources (surface water in rivers and lakes, reservoirs, and aquifers) to meet human needs in many water-scarce areas. Thus, water-scarce countries, regions, and communities should increasingly consider alternate, unconventional water resources in order to narrow the water demand-supply gap [8], as water scarcity poses a risk to the global economy [9] and water is increasingly considered an instrument for international cooperation to achieve sustainable development [10]. Among the various unconventional water resources, the potential to recover water from air is by far the most under-explored. As part of the natural global water cycle, at any given time the amount of water in the atmosphere is 12,900 km³, which represents 0.001% of total water and 0.04% of the freshwater existing on the planet [11]. Under specific conditions, the air at ground level may contain fog, which refers to the presence of suspended liquid water droplets with diameters typically from 1 to 50 µm [12]. Fog originates from the accumulation and suspension of these tiny droplets of water in the air, creating masses of humid air over land or sea. As an important source of water in desert environments, fog collection is achieved by the collision of suspended droplets with a vertical mesh, where they coalesce, after which the water runs down into a collecting drain and a tank or distribution system [13].
Gathering atmospheric moisture is far from a new idea: evidence shows that the original inhabitants of the Canary Islands dug holes under trees with large foliage to collect fog water condensing on leaves [14]. While the general concept can thus be found in indigenous heritage and practice, 20th- and 21st-century technologies have enabled fog harvesting to be considered in more systemic, mainstream water supply approaches. Modern fog interception technology was first introduced in the mid-20th century [15], and there have been major developments in fog collection systems in recent decades. One of the earliest documented experiments aimed at determining the volume of fog deposition was undertaken from 1901 to 1904 in South Africa, in order to investigate the feasibility and productivity of fog as a natural supplement to otherwise limited water resources [16]. The volume of fog water intercepted was measured with two rain gauges: one was left open in the usual manner, while a cluster of reeds was suspended above the other gauge [16]. Later studies focused on the material composition of fog collection nets and their sizes, the direction and angle of installation of fog collectors, wind intensity, and the climate and topography of the area. A timeline of the history of several fog and dew water collection methods practiced in arid and semi-arid areas is available elsewhere [17]. While local communities continue to collect fog and dew in a multitude of ways using custom-made materials or ancient techniques, the most readily available devices are Standard Fog Collectors (SFC) and Large Fog Collectors (LFC), which are made up of polypropylene mesh nets, usually Raschel nets. One unit of the SFC has an area of 1 m × 1 m (1 m²), while the size of the LFC varies from 40 m² (4 m × 10 m) to 48 m² (4 m × 12 m) per unit, with the ratio of width to height for the fog collection mesh around 2.5-3.0 [18,19]. In areas where Raschel nets are too fragile to withstand heavy wind loads, other fabrics and configurations have been put in place. For example, an array of three-dimensional spacer fabric is being used as a replacement for Raschel nets that tore in the harsh environment [20]. The number, size, and type of fog collectors to be installed in a specific location depend on the fog characteristics, such as fog thickness, duration and frequency of occurrence, as well as the climate and topography of the area, the water demand, and the financial and human capacity of the associated community to run and maintain the fog collection system. With different dimensions of fog water harvesting research and practice, extensive reviews are available on the history, characteristics, and technical features of fog collection [21][22][23]. Beyond technological developments, the information on trade-offs of policy and institutions, economics, education and capacity building, community participation, and gender equity aspects of fog water collection is scattered and fragmented. These aspects are the focus of this paper.

Economics of Fog Collection

The costs of a fog collection system are usually expressed per m² of the mesh installed for fog collection. The cost of commonly used two-dimensional Raschel mesh fog collection systems may range from $25 to $50 per m² of mesh [19,24,25]. For example, for an LFC with a mesh size of 40 m², the cost may range from $1000 to $2000, and between $1200 and $2400 for a mesh size of 48 m².
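The per-unit figures above follow directly from the per-square-metre prices; a minimal sketch of the arithmetic:

```python
# Capital cost range of a Large Fog Collector (LFC), computed from the
# per-square-metre mesh prices quoted above ($25-$50 per m2 of mesh).
def lfc_cost_range(width_m, height_m, cost_per_m2=(25, 50)):
    area = width_m * height_m
    low, high = cost_per_m2
    return area, area * low, area * high

for width, height in [(10, 4), (12, 4)]:  # the two LFC sizes given above
    area, low, high = lfc_cost_range(width, height)
    print(f"{area} m2 mesh: ${low}-${high}")
# 40 m2 mesh: $1000-$2000
# 48 m2 mesh: $1200-$2400
```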
However, these costs depend on the type of material (mesh), as well as the piping, water tanks, and other equipment and supplies used in a fog collection system, and their availability and price in the area. In places where low-cost material suitable for fog collection systems is available, the cost could be lower than the above estimates. For example, in Dar Si Hmad's Fogwater Project in Morocco, the cost per LFC (40 m²) with Raschel mesh nets was estimated at $200 per unit [26], i.e., $5 per m² of mesh, which is one-fifth of the estimated $25 per m² of mesh. The economics of the high-efficiency three-dimensional (3-D) spacer fabric nets are radically different, as the fabric is substantially more expensive ($830 per m²) than simple Raschel nets but is expected to produce double or triple the amount of water collected and to resist harsh environments, such as winds around 120 km h⁻¹ [27,28]. The selection of mesh depends on its durability, price, availability, and water-draining properties. In addition to material price, labor availability and associated expenses also affect the cost of establishing fog collection systems [29]. The cost of large fog collection systems would increase significantly if the labor to build the systems were not volunteer or subsidized [19,24]. Because of the relatively simple design of fog collection systems, their operation and maintenance are minimal and easy to manage, and the associated cost likewise. Although permanent operators are not typically required due to the passive nature of the system, there is a need for periodic maintenance and monitoring to make sure that system efficiency is preserved. The maintenance includes inspecting meshes, wires, and distribution systems, as well as repairing minor tears and ensuring that the collection system is free of any surface dust accumulation or algae growth [23]. The cost of operation and maintenance depends on the local cost of labor, the number of fog collectors, and the specific repairs needed. The expected lifespan of a fog collector system ranges between 5 and 10 years for Raschel-based net systems, while high-efficiency 3-D spacer fabric nets may last for more than 20 years. While considering the cost of water collection from fog, it is important to compare it with the cost of other sources of water available to the communities. For example, the prices charged for water in six cities of the Atacama Desert in northern Chile in 2011 ranged from $1.96 to $3.06 per m³ [19], while in Eritrea, prices fluctuate between $1.7 and $3.3 per m³ [24]. Because of the large variation in material and labor costs, the presence or absence of subsidies, and the efficiency of fog collection systems in different locations, there will inevitably be a wide range of costs associated with producing water through fog collection: $1.4 per m³ to $16.6 per m³ (Table 1). It is important to compare the cost of producing water from fog, but equally important is the fact that such costs cannot be compared with some other sources of water, such as desalination plants, because of the differences in scale of the two technologies. Additionally, fog harvesting technologies are resource-intense during the installation phase but then require very little maintenance or additional resources, while systems like desalination and wastewater treatment require the continuous input of energy, chemicals, and labor.
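To see how a unit cost in this range can arise, the sketch below spreads a collector's capital cost over its lifetime yield. The 5 L per m² per day yield and the 7-year lifespan are illustrative assumptions (the text gives only a 5-10 year lifespan for Raschel systems and no yield figure), so the outputs are indicative only:

```python
# Rough levelized cost of fog water: capital cost spread over lifetime yield.
# Yield (5 L/m2/day) and lifespan (7 years) are assumed, not from the text.
def unit_cost_usd_per_m3(capex_usd, mesh_area_m2, yield_l_per_m2_day,
                         lifespan_years, annual_om_usd=0.0):
    lifetime_m3 = mesh_area_m2 * yield_l_per_m2_day * 365 * lifespan_years / 1000
    return (capex_usd + annual_om_usd * lifespan_years) / lifetime_m3

# A 40 m2 Raschel LFC at the $1000-$2000 capital cost quoted earlier:
for capex in (1000, 2000):
    print(f"${unit_cost_usd_per_m3(capex, 40, 5.0, 7):.2f} per m3")
# ~ $1.96 and $3.91 per m3, within the $1.4-$16.6 range reported above
```

Under these assumptions, fog water would be broadly competitive with the trucked-water prices quoted for the Atacama Desert and Eritrea.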
It is important to note as well that even when water has been collected from nearby wells or cisterns without a direct financial barrier, the cost has often been paid in the form of women's time, labor, and poor health. Besides satisfying water demand in water-scarce areas, the value of fog collection projects is also reflected in the form of social and human capital, and in the marked reduction in the time spent on water collection and transportation over long distances [30]. Other indirect economic benefits are associated with relief from paying private water providers, avoiding exposure to waterborne diseases related to conventional storage and distribution systems, and the potential to use the increased water and time availability for other income-generating activities. In this way, fog harvesting systems may effectively pay for themselves over time. However, the technologies and materials generally involve a high start-up cost. In certain cases, communities will never have adequate cash flow to afford the materials and the installation of a fog water collection system. Thus, subsidies, or at least short-term loans, may well be required by low-income communities to implement fog harvesting, as was the case in Falda Verde, Chile, where the community was highly supported by multiple stakeholders [31,32]. Ensuring that these subsidies or short-term loans are available and well targeted would be key for fog water collection as a sustainable unconventional water source.

Community Development and Gender Equity and Equality

Although the technology of fog collection is not complex, it needs the involvement of the associated communities from the early stages of the project. By engaging community members from the planning stage, throughout the installation and operation of the fog collection systems, a project team contributes to community capacity to operate the water collection system in a sustainable manner. Addressing community perspectives on fog harvesting can support the acceptance, improvement, and possible adaptation of the technology to community needs. Thus, an effective local implementing partner, such as a related government institution, a community-based organization, a non-governmental organization, or a suitable combination of them, should be in place to assist the community with implementing the fog collection system and technology transfer. The local partner(s) must remain committed for a period after the system is handed over to the community, to provide occasional technical support when the community needs assistance beyond its means. Studies have demonstrated that fog collection technology works effectively at a local level when a high sense of ownership has been created by community involvement and sensitization [17]. There are examples of effective community-based fog collection systems. In a fog water collection project in Tojquia, Guatemala, there are currently 28 LFCs producing a daily average of over 5 m³ (5000 L) of clean water for 27 families, benefiting 127 people and their animals. The approach of providing individual LFCs for families has proven to be a sound methodology for the Tojquia area [33]. There is community-based support towards maintaining LFCs at the household level [34]. Owing to community involvement, all of the original LFCs from 2006 are still operational. The water is stored in new 3 m³ water tanks on the families' own property, a short distance from the house. There is a waiting list of families who wish to have their own fog collector systems [33].
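The Tojquia figures above imply the following per-capita and per-collector supply; a quick sanity check (the 40 m² mesh area is assumed from the typical LFC size given earlier, since the exact dimensions of the Tojquia collectors are not stated):

```python
# Quick check on the Tojquia figures: 28 LFCs, ~5000 L/day, 127 people.
n_lfc, daily_litres, people = 28, 5000, 127
mesh_area_m2 = 40  # assumed typical LFC size; not stated for Tojquia

print(round(daily_litres / people, 1))                  # ~39.4 L/person/day
print(round(daily_litres / n_lfc, 1))                   # ~178.6 L/collector/day
print(round(daily_litres / (n_lfc * mesh_area_m2), 1))  # ~4.5 L/m2 of mesh/day
```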
Contrary to this, a lack of, or limited, community involvement resulted in the partial or complete failure of the fog collection systems at other places, such as Serra Malgagueta, Cape Verde; El Tofo and Padre Hurtado, Chile; and Pachamama Grande, Ecuador; among others [17]. Socio-political instability also interferes with the implementation of fog water collection systems, as it can prevent external investment and discourage volunteer work [35,36]. Recently, the participatory approach towards the implementation of engineering projects has been gaining substantial traction. While funding organizations are needed for the initial investment and continuous monitoring of activities and financial aspects, direct users' actions and commitment can help make projects sustainable over time [31]. Participatory project management can positively affect the financial, social, and human capital of the community. A family's financial capital may increase with business, work, and trade opportunities [29]; the community's social capital may increase with the improvement of collaborative work and communal values; and human capital may be positively affected by improvements in health, education, and capacity building, either in water-related sectors or through other learning opportunities. This evinces the need for strong community engagement in fog water collection projects. Furthermore, the pursuit of gender equality and gender mainstreaming is often key to the sustainability of fog collection systems. Rosato et al. [37] reported that ensuring women's participation and capacity building led to the proper operation and maintenance of the fog collection system and its long-term sustainability in the project in the Western Highlands of Guatemala. To overcome cultural challenges and to support gender mainstreaming, the project team developed relationships with the community prior to the initiation of the project and incorporated women in all culturally accepted activities related to fog water collection and use. Women in the community had traditional roles as caregivers responsible for cooking and other domestic duties, and the team and the male members of the community recognized and respected the importance of women's participation in the project. As they became familiar with the project, women were increasingly involved in the fog collection duties that they were comfortable with, including management activities and other decision-making actions, as they had extra time to assume new tasks. They were also happy to undertake these tasks, as their contributions were recognized by the community at large. The world's largest fog water collection and distribution system is in Southwest Morocco, in the Anti-Atlas Mountains near Sidi Ifni. The local non-profit organization Dar Si Hmad operates the system, which has involved a decade's worth of meteorological observation and partnerships with a variety of overseas researchers and engineers [20,38]. From the project's outset, women and children were identified as the key beneficiaries. In the project area, water collection is primarily the responsibility of women and girls, who previously spent several hours each day collecting water from depleted and polluted wells.
Thanks in large part to the involvement of community members in different phases of the project execution, along with local training about how the system and the associated processes work, and attention paid to the cultural, spiritual, and physical meanings and experiences of fog, the increase in water availability to the community has resulted in multiple benefits [39]. Since the installation of pipes with running water directly to homes, the organization has observed improvements in public health and community stability, and increased livelihood opportunities. A key outcome has been the transformation of women's duties, with a decrease in the time spent on water collection. This has enhanced the chances for women to take on other tasks and for girls to remain in school, and it has increased women's participation in natural resource management and enabled the implementation of a water, sanitation, and hygiene awareness program for women and children [26]. Long-term benefits have yet to be fully evaluated, given the project's timescale, but are being monitored and are expected to produce more systemic changes in the communities. The organization and affiliated researchers are currently collecting data on demographic changes, water consumption levels, well water levels, and changes to the built environment. In 2016, the project was awarded the Momentum for Change Lighthouse Award by the United Nations Framework Convention on Climate Change (UNFCCC) at COP22 for its sustainable progress in women's empowerment and climate change adaptation [40]. When considering the social and cultural sensitivities in a community-based fog water collection system, it is important to consider several factors that may affect gender roles and responsibilities, along with other issues related to equity and mainstreaming, such as lifestyles and customs, education status and the presence of educational institutions, poverty levels, health conditions, economic activities, and the distribution of the workload for male and female segments of the community in doing time-consuming and laborious water-collecting tasks.

Education and Capacity Building

Although fog collection systems are based on relatively simple designs and their operation and maintenance are minimal and easy to manage, their sustainability also depends on sound professional support during the planning, financing, implementing, and regulatory stages of a project. This is important as fog collection systems are community-based, and community members are usually not fully knowledgeable about the basics or the specific operational aspects unless their capacity to manage these systems independently, or with minimal support, is developed [24]. As an example of effective capacity building and progressive handover, after ten years of fog collection in Tojquia, Guatemala, the community created a water committee elected by the community members. The committee is responsible for the operation and maintenance of the fog collectors and for communication with the community and the external organizations involved in the project [41]. This knowledge transfer between stakeholders is crucial, and it applies to both male and female segments of the community, as all are usually involved in fog collection and the system's management. Therefore, it is appropriate to run a capacity needs assessment of the local community and local institutions, followed by need-based capacity development.
Some NGOs working on multiple projects around the world accept volunteers to train local communities and institutions to implement fog water harvesting projects [42]. Given current technological developments and access to mobile networks, it is important to highlight the role of communication technologies in supporting training and capacity building activities. In Dar Si Hmad's Fogwater Project in Morocco, ICT4D (information and communication technologies for development) have been used to track fog patterns and monitor the distribution system [39]. Furthermore, low-literate women were trained to use their mobile phones to send maintenance requests. These activities paved the way for literacy workshops using phones as learning tools. The resulting infrastructure and community partnerships have had further positive impacts through the establishment of a children's water and environmental education school and an e-learning program with rural youth. Such integration increased local functional and technological literacy rates and incorporated a scientific perspective into water management by combining community participation and engineering innovation to serve the local water needs of marginalized rural Berber populations [26].

Policies and Institutions

Despite the growing importance of fog water collection in fog-prone dry areas, its potential has not been fully explored in water-scarce regions due to several governmental and institutional constraints and challenges. Among them is a lack of national water policies and action plans that consider fog water collection as a means of addressing local water shortages in water-scarce areas where there is abundant fog. Where available, most national water policies and action plans do not mention atmospheric moisture harvesting on the public policy agenda. In addition to the lack of emphasis on fog collection as one policy constraint, there are institutional and human capacity challenges contributing to the slow uptake of the potential of fog water collection. Consequently, for the successful implementation of research-based technical interventions for fog water collection systems, there is a need to promote flexible policy frameworks with gender-responsive policies, while also enhancing human and institutional capacity to maximize the benefits from fog collection in dry areas. To achieve successful implementation of fog water harvesting projects and their sustainability in the future, it is necessary to integrate the local institutions and associated communities as stakeholders to promote their involvement and commitment. Government bodies or non-government organizations are needed to support these projects and to provide leadership to develop and operate fog water collection projects along with community participation, until the community is ready to take over the control and management of the project. However, the timeframe from project inception to completion and handover to the local community depends on the project dynamics, such as the status of technology transfer and the need for technical assistance, the level of local community satisfaction with the project's outcome, and the community's ability to manage the project without much external supervision. For example, the transfer of project control to the community without the required management capacity could cause system malfunctions and lead to disappointment and a lack of motivation to continue with the project.
Instead, local institutions can extend the implementation period and hand a project over to the local community only once the required levels of skill and motivation are reached and adequate sustainability training and capacity are in place [30]. A major cause of underperforming or failed fog water harvesting projects is weak institutions coupled with limited inter-institutional collaboration, reflected in largely unclear and overlapping responsibilities. This situation can be further aggravated by a lack of strong involvement and commitment by local communities in the operation and maintenance of the fog collection systems [43]. The planning and initial stages of projects are usually supported by external organizations that rely on local institutions with the required expertise and involvement to ensure sustainability over time. Project teams should therefore combine community members with experts in technical and financial matters, to support the biophysical aspects of the project, and professionals with social science backgrounds, to address the social and cultural aspects with beneficiary communities [17].

Multiple stakeholders stand to benefit from fog water harvesting while contributing to the achievement of SDG 6. Beyond local institutions, NGOs, and communities, some international NGOs and foundations, such as FogQuest (http://www.fogquest.org/), Creating Water (http://www.creatingwater.nl), and Wasserstiftung (http://www.wasserstiftung.de/), are studying, testing, and implementing fog water collection systems in different parts of the world [26]. Companies have also been working on mesh designs, coatings, and other materials [29,44], and academia continues to research innovative theoretical and practical options to improve efficiency and reduce costs [45,46].

Conclusions and Future Perspectives

A growing number of cases use fog water collection as a community-based intervention to address local water shortages in areas where fog events are frequent. Besides a range of social, cultural, and livelihood benefits for the associated communities through improvements in their quality of life, fog projects also have the potential to support greater gender equity, gender mainstreaming, and multi-stakeholder involvement with and between local governments, communities, NGOs, and professionals. In addition to low operational and maintenance costs in providing water that meets drinking quality standards, fog water collection is an environmentally friendly intervention that does not rely on energy consumption; in other words, fog water harvesting is a green technology. Such water collection options provide ecosystem services as well [47].

As a water source, fog water collection faces biophysical and socio-economic challenges. It cannot be used in all dry areas, but only in specific zones where topographic and geographic conditions make fog sufficiently intense and persistent to provide water to local communities. Where meteorological conditions are conducive, it can be a sustainable and cost-effective means of providing water for human and ecosystem needs. On the socio-economic front, low-income communities need initial subsidies or loans. Analyses of long-term economic benefits suggest that the start-up costs are worthwhile insofar as water is a fundamental human right and health and dignity are of incalculable importance.
Furthermore, cultural barriers and socio-political instability often restrict the participation of local communities, especially women, in decision-making processes. A lack of capacity building and of planning for sustainability can directly affect the life and efficiency of fog collection equipment, as well as overall project implementation. These factors make it vital to include diverse local stakeholders and authorities from the beginning in order to maximize potential direct and indirect benefits (such as spin-off projects). With a comprehensive approach to overcoming these challenges, investment in a new fog water collection system can help to address community concerns. Equity aspects beyond gender equity are also important, as these communities are typically isolated and vulnerable, and an additional water source can improve their resilience.

As global efforts toward sustainable development advance, fog water collection can contribute to achieving the SDG 6 targets in water-scarce areas by the year 2030. The application of fog water collection systems may also reach beyond SDG 6 to other sustainable development goals. For example, in urban areas, smog (a mixture of smoke and fog in the air) is emerging as a significant air quality challenge, harming respiratory systems and overall human and animal health, and threatening human lives by increasing the chance of vehicle accidents due to low visibility [48]. Smog interception using fog collectors may be used to decrease air pollution in smog-prone areas, complementing the use of cleaner-burning fuels in the industrial, residential, and commercial sectors, and of fuel-efficient vehicles. In this way, fog collection systems would also contribute to SDG 3.9, which aims to reduce the number of deaths and illnesses from air pollution and contamination, and to SDG 11, which focuses on making cities and human settlements safe, resilient, and sustainable. In addition, the implementation of fog collection systems can contribute to SDG 5, which addresses gender equality and the empowerment of all women and girls.

The needed paradigm shift, from an attitude that treats water scarcity as a permanent and chronic condition of remote water-scarce areas to one that looks beyond conventional water supplies, provides a promising backdrop for addressing local water scarcity. Now is therefore the time to explore and harness the full potential of fog water collection in water-scarce countries with frequent fog events in specific areas.
This can be triggered by:

(1) revisiting national water management action plans and assessing the value that fog water collection brings;
(2) undertaking situation analyses to realize the maximum potential of fog water collection in light of the associated economic, social, environmental, and health trade-offs;
(3) updating national water policies and budgets, water pricing, subsidy options, and cost recovery mechanisms to include fog collection systems;
(4) building capacity in local water institutions and communities to support community-based implementation of fog collection systems;
(5) facilitating the associated communities by implementing participatory actions and projects and by considering social and cultural sensitivities, gender roles and responsibilities, lifestyles and customs, education status, poverty levels, and economic activities;
(6) establishing a base of fog harvesting knowledge and best practices, identifying and testing innovations, and showcasing functional examples to inform policy makers' supportive actions and investment decisions; and
(7) undertaking public awareness activities to encourage uptake of fog collection systems in areas with potential for fog water collection.
Resveratrol and Resveratrol-Loaded Galactosylated Liposomes: Anti-Adherence and Cell Wall Damage Effects on Staphylococcus aureus and MRSA

Antibiotic resistance due to bacterial biofilm formation is a major global health concern that makes the search for new therapeutic approaches an urgent need. In this context, trans-resveratrol (RSV), a polyphenolic natural substance, seems to be a good candidate for preventing and eradicating biofilm-associated infections, but its mechanism of action is poorly understood. In addition, RSV suffers from low bioavailability and chemical instability in biological media, which make its encapsulation in delivery systems necessary. In this work, the anti-biofilm activity of free RSV was investigated on Staphylococcus aureus and, to highlight the possible mechanism of action, we studied the anti-adherence activity and the cell wall damage on an MRSA strain. Free RSV activity was compared to that of RSV loaded in liposomes, specifically neutral liposomes (L = DOPC/Cholesterol) and cationic liposomes (LG = DOPC/Chol/GLT1) characterized by a galactosylated amphiphile (GLT1) that promotes the interaction with bacteria. The results indicate that RSV loaded in LG has higher anti-adherence and anti-biofilm activity than free RSV. On the other side, free RSV has a stronger bacterial-growth-inhibiting effect than encapsulated RSV and can damage cell walls by creating pores; however, this effect cannot prevent bacteria from growing again. This ability of RSV may underlie its bacteriostatic activity.

Introduction

Antibiotic resistance and biofilm-associated infections are currently a global public health problem. Furthermore, some bacterial strains, called "superbugs", are more of a concern than others because they can be resistant to more than one antibiotic and are able to develop a biofilm. They are responsible for many infections in hospital and community environments and are associated with high mortality rates. Bacteria that hold this sad record include Staphylococcus aureus and methicillin-resistant Staphylococcus aureus (MRSA) [1]: they are the etiological agents of many infections, including endocarditis, postoperative infections, sepsis, pneumonia, bacteremia, and bone and joint infections [2,3]. The increasingly widespread resistance to conventional antibiotics makes the treatment of these diseases extremely difficult and, thus, there is an urgent need for new drugs or new strategies for effective therapies.
In this regard, bioactive molecules from plants are gaining special attention, since plants are constantly exposed to many abiotic and biotic environmental stressors, such as nutrient and water shortages, heavy metals, bacteria, fungi, viruses, and insects; they have faced natural threats for more than 350 million years, co-evolving with their enemies and thus developing increasingly effective bioactive compounds to defend themselves [4]. Among them, the phenolic compound trans-resveratrol (trans-3,5,4′-trihydroxystilbene, RSV) has a wide spectrum of antimicrobial activities, being effective on many Gram-positive and Gram-negative bacterial species. It is a secondary metabolite of plants, produced precisely to fight external pathogen attacks. Notably, RSV is a stilbenoid polyphenol synthesized by seventy-two different plant species and is particularly abundant in red wine, soy beans, grapes, peanuts, mulberries, blueberries, and pomegranate [5]. In nature, RSV exists in the trans form, the most pharmacologically active one, but it can be converted into the less active cis form by UV irradiation or direct sunlight [6].

RSV exhibits antimicrobial and anti-biofilm activity on several S. aureus strains, both resistant and non-resistant to diverse antibiotics [5,7-9]. It has been suggested that the RSV mechanism of action is related to its ability to interfere with the bacterial cell cycle [8,10] but, to date, the mechanism of bacterial growth inhibition induced by RSV is not yet fully understood [5]. In addition, RSV can reduce the virulence of S. aureus because it decreases hemolysis by reducing the production of α-haemolysin, a protein responsible for both hemolysis and biofilm formation in S. aureus [9,11].

Although it seems well established that the antimicrobial action of RSV on S. aureus is bacteriostatic and not bactericidal, discordant results are reported in the literature, even for the same bacterial strain. In fact, not only are different Minimal Inhibitory Concentration (MIC) values reported but, in some experiments, no antibacterial activity is found at all. Regarding the anti-biofilm activity, we face the same scenario: some authors report that RSV does not inhibit S. aureus biofilm formation [9,11]; others report that it reduces the ability of S. aureus to form biofilm and that it is able to disrupt mature biofilms [12,13]. Also, in the case of biofilm, it is unclear whether the anti-biofilm activity of RSV is attributable to its anti-adhesive or to its bacterial-growth-inhibiting properties. These discrepancies may be due to the different solubility of RSV in the media used for experimental treatments and to the different working culture broths (Mueller-Hinton and Luria-Bertani, respectively). Therefore, the conflicting results between studies on RSV susceptibility require further investigation, as recommended in a recent review by M. Vestergaard et al. [5].
Furthermore, it is important to emphasize that, apart from the discrepancies in the in vitro data reported in the literature, all the experimental data show RSV antimicrobial properties at concentrations higher than those obtainable in plasma following oral administration [14-16], as a consequence of the low bioavailability and high biodegradability of RSV. There is thus a limit to the treatment of infections by RSV dictated by the bioavailability of the drug itself [15]. In order to increase RSV bioavailability and, thus, to give a future perspective to the positive results of the in vitro studies, it is necessary to turn to nanotechnology, which provides the techniques for developing suitable nanosystems for RSV delivery. In this context, liposomes have been widely shown to improve the antimicrobial efficacy of many drugs against S. aureus and MRSA and remain one of the most promising delivery systems, even for combination therapies [1]. To this end, liposomes can be formulated with different kinds of lipids: natural lipids, such as PCs; cholesterol, often used to stabilize the liposome membrane; and synthetic lipids, used to confer definite properties on the liposomes [17-21]. Further investigation of RSV-loaded liposomes is therefore a very important topic, particularly for the treatment of S. aureus and MRSA infections, on which, to the best of our knowledge, very few scientific articles exist.

In this work, we aimed to contribute to shedding light on the effective bacteriostatic and anti-biofilm activity of RSV on two strains of S. aureus, the wild type (ATCC 25923) and the methicillin-resistant one (MRSA, ATCC 33591), paying particular attention to the added value brought by liposomal delivery to the antimicrobial properties of RSV. Given the poor solubility of RSV in biological fluids, it is important to study the physico-chemical properties and the actual biological effects of the liposome formulation, which may differ from those of free RSV; liposomes can enhance or reduce the biological effect or even give rise to a new activity.

RSV was embedded in liposomes formulated with a natural phospholipid (1,2-dioleoyl-sn-glycero-3-phosphocholine, DOPC) and cholesterol (Chol), in the presence and in the absence of a cationic galactosylated amphiphile, GLT1, that has already been proven to enhance RSV-loaded liposome activity in demolition experiments on mature MRSA biofilm [13,22] (Figure 1).
The colloidal stability of the liposomes was monitored over time by Dynamic Light Scattering (DLS) and Dynamic Electrophoretic Light Scattering (DELS) measurements, while the interaction between liposomes and both bacterial pathogens was investigated by DELS measurements to reveal any preferential interaction of the galactosylated cationic liposomes with the bacterial surface. The Entrapment Efficiency and the in vitro release profile of RSV were evaluated by UPLC analysis.

The anti-biofilm activity of RSV, free and liposome-embedded, was investigated against the S. aureus wild type strain, following a mature-biofilm experimental protocol assessed in a previous study on MRSA [13]. Furthermore, the anti-biofilm activity was investigated by analyzing the anti-adherence effects induced by free or liposome-embedded RSV against both bacterial pathogens.

Finally, due to the discordant data reported in the literature, we also estimated the bacteriostatic effect induced by free and liposome-embedded RSV on the bacterial strains; moreover, to clarify whether the antimicrobial effect might be associated with damage to the bacterial cell wall, we stained the bacteria with propidium iodide (PI) in the presence of RSV.

Materials and Methods

Liposome Preparation

Liposomes were formulated with an unsaturated phosphocholine (DOPC) and cholesterol (Chol) in the presence or absence of the cationic galactosylated amphiphile GLT1.
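To make the composition arithmetic concrete, the sketch below converts target molar ratios into weigh-out masses for a small batch. It is a minimal illustration, not the study's recipe: the text specifies 20 mol% cholesterol and a 1:8 RSV/lipid ratio, but the exact DOPC:Chol:GLT1 proportions and the GLT1 molecular weight used below are assumptions.

```python
# Minimal sketch: weigh-out masses for an RSV-loaded liposome batch.
# The DOPC:Chol:GLT1 ratio and the GLT1 molecular weight are illustrative
# assumptions, not values taken from the study.

MW = {               # g/mol
    "DOPC": 786.1,
    "Chol": 386.7,
    "RSV":  228.2,
    "GLT1": 600.0,   # hypothetical molecular weight for the amphiphile
}

total_lipid_mM = 20.0          # final lipid concentration (mM), as in the protocol
volume_mL = 1.0                # batch volume
molar_fractions = {"DOPC": 0.70, "Chol": 0.20, "GLT1": 0.10}  # assumed ratio
rsv_to_lipid = 1.0 / 8.0       # 1:8 RSV/lipids molar ratio, as in the protocol

total_lipid_umol = total_lipid_mM * volume_mL  # mM * mL = umol

for name, x in molar_fractions.items():
    umol = x * total_lipid_umol
    print(f"{name}: {umol:.2f} umol = {umol * MW[name] / 1000:.3f} mg")

rsv_umol = rsv_to_lipid * total_lipid_umol
print(f"RSV:  {rsv_umol:.2f} umol = {rsv_umol * MW['RSV'] / 1000:.3f} mg")
```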
Liposomes, both empty and RSV-loaded, were prepared according to the lipid film hydration protocol, coupled with the freeze-thaw procedure [23], followed by extrusion [24]. Briefly, proper volumes of the lipid components dissolved in CHCl3 (DOPC and Chol) and MeOH (GLT1) were added to a round-bottom flask; the solution was then dried, first in a rotary evaporator (Rotavapor R-200, BÜCHI Labortechnik AG, Flawil, Switzerland) and then under high vacuum (5 h), to remove any traces of organic solvents and obtain a thin lipid film. For RSV-loaded liposomes, the proper amount of an RSV solution in absolute ethanol was added to the lipid mixture before film formation, to give a molar ratio of 1:8 RSV/lipids. The film was hydrated with 150 mM phosphate buffered saline (PBS) solution to give a final liposomal suspension of 20 mM in lipid concentration. The suspension was vortex-mixed to completely detach the lipid film from the flask and freeze-thawed five times, from liquid nitrogen up to 50 °C, to reduce multilamellarity. RSV-loaded liposomal suspensions were sonicated (probe tip sonicator Sonics Vibra-Cell, 3 mm diameter tip, Sonics & Materials, Newtown, CT, USA) at 40 W (10 cycles of 10 s) to improve RSV entrapment in the lipid bilayer by breaking the aggregates that usually form in aqueous solutions [25]. Size reduction was carried out by extruding the liposomal dispersions ten times (10 mL Lipex Biomembranes, Vancouver, Canada), under high pressure, through a 100 nm pore size polycarbonate membrane (Whatman Nucleopore, Clifton, NJ, USA) at a temperature higher than Tm, to obtain small unilamellar vesicles (SUVs).

Finally, liposome purification from unentrapped RSV was performed by dialysis against PBS under slow magnetic stirring, using a buffer volume 25 times larger than the sample volume. During all preparation steps (film formation, extrusion, dialysis), samples were protected from light to avoid trans-resveratrol isomerization.

Particle size and PDI were measured in backscatter detection, at an angle of 173°; this condition represents a significant advantage because it is less sensitive to multiple scattering effects than the more conventional 90° configuration, and the contribution of larger, micrometric particles is considerably reduced [26]. The measured intensity autocorrelation function was analyzed using the cumulant fit [27]. The first cumulant was used to obtain the apparent diffusion coefficients D of the particles, which were then converted into apparent hydrodynamic diameters, D_h, by using the Stokes-Einstein relationship:

D_h = k_B T / (3πηD)

where k_B T is the thermal energy and η is the solvent viscosity.
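As a back-of-the-envelope check of this conversion, here is a minimal sketch in Python, assuming water's viscosity at 25 °C as the solvent value:

```python
# Minimal sketch: convert an apparent diffusion coefficient from DLS
# into a hydrodynamic diameter via the Stokes-Einstein relation
#   D_h = k_B * T / (3 * pi * eta * D)
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
T = 298.15              # temperature, K (25 degC)
eta = 0.89e-3           # viscosity of water at 25 degC, Pa*s (assumed solvent value)

def hydrodynamic_diameter(D_m2_per_s: float) -> float:
    """Return the hydrodynamic diameter in meters for a diffusion coefficient D."""
    return k_B * T / (3 * math.pi * eta * D_m2_per_s)

# Example: D ~ 4.4e-12 m^2/s corresponds to a vesicle of roughly 110 nm,
# consistent with extrusion through a 100 nm pore membrane.
D = 4.4e-12
print(f"D_h = {hydrodynamic_diameter(D) * 1e9:.1f} nm")
```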
The ζ-potential of the liposome formulations was determined by DELS measurements. Low voltages were applied to avoid the risk of Joule heating effects. Analysis of the Doppler shift to determine the electrophoretic mobility was carried out using phase analysis light scattering (PALS) [28], a method that is particularly useful at high ionic strengths, where mobilities are usually low. The mobility µ of the liposomes was converted into a ζ-potential using the Smoluchowski relation ζ = µη/ε, where ε and η are the permittivity and the viscosity of the solution, respectively. DLS and DELS measurements were performed after diluting the liposomal suspensions to 1 mM in total lipids in PBS or diluted PBS (15 mM), respectively. Data were collected soon after preparation, after dialysis (for RSV-loaded liposomes), and over the following 14 days (samples stored in the dark at room temperature), in order to evaluate the stability over time and to highlight any aggregation phenomena.

The data reported for the hydrodynamic diameter D_h, PDI, and ζ-potential correspond to the average of at least three independent experiments.

Determination of RSV Entrapment Efficiency in Liposomes

The content of RSV loaded in liposomes was evaluated by UPLC measurements. Chromatographic analyses were carried out on an Acquity™ UPLC H-Class Bio System (Waters, Milford, MA, USA) equipped with a quaternary pump, a sample manager, an autosampler, a column temperature controller, and a PDA detector.

Before the UPLC measurements, samples were suitably diluted with methanol to disrupt the liposomes and completely solubilize the formulation components. Trans-stilbene (TSB) was added (20 µM) to each sample as an internal standard, and all samples were then filtered on PTFE membranes (4 mm × 0.2 µm; Sartorius) before injection.

Method validation was carried out by assessing linearity, sensitivity, and precision. An RSV stock standard solution (305 µM) was prepared in MeOH. Calibration standard solutions in the concentration range between 0.024 and 305 µM (n = 7) were prepared by diluting the stock standard solution of RSV with a methanol-water (80:20, v/v) mixture. The RSV calibration curve was constructed by triplicate injections of the calibration standard solutions. Each concentration level was spiked with 20 µM TSB. The calibration curve was linear over the concentration range studied, with a correlation coefficient of 0.9995.

The Limit of Detection (LOD) and Limit of Quantitation (LOQ) for RSV were determined using signal-to-noise ratios of 3 and 10, respectively, and were found to be 0.008 µM and 0.024 µM.

The precision of the method was evaluated in terms of repeatability and reproducibility. Intra- and inter-day precision, expressed as the relative standard deviation (RSD) of migration time and peak area, was assessed by performing six consecutive injections of the same solution on the same day (RSD < 1%) and over three days (RSD < 2%).

The entrapment efficiency (EE%) of RSV in liposomes was calculated using the following Equation (1):

EE% = ([RSV]_pd / [RSV]_0) × 100 (1)

where [RSV]_pd indicates the RSV concentration after dialysis and [RSV]_0 is the concentration soon after extrusion.
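To illustrate how Equation (1) follows from the two chromatographic measurements, here is a minimal sketch; the peak areas and calibration slope are hypothetical illustration values, with RSV quantified from its area ratio to the trans-stilbene internal standard:

```python
# Minimal sketch: entrapment efficiency from UPLC peak areas.
# Peak areas and calibration parameters are hypothetical illustration values.

def rsv_concentration(area_rsv: float, area_tsb: float,
                      slope: float, intercept: float = 0.0) -> float:
    """RSV concentration (uM) from the RSV/internal-standard area ratio,
    using a linear calibration: ratio = slope * conc + intercept."""
    ratio = area_rsv / area_tsb
    return (ratio - intercept) / slope

slope = 0.052                      # hypothetical calibration slope (ratio per uM)
conc_after_extrusion = rsv_concentration(820.0, 410.0, slope)   # [RSV]_0
conc_after_dialysis  = rsv_concentration(660.0, 410.0, slope)   # [RSV]_pd

ee_percent = 100.0 * conc_after_dialysis / conc_after_extrusion
print(f"EE% = {ee_percent:.1f}%")   # here: ~80.5%
```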
In Vitro Release Study of RSV from Liposomes

The release of RSV from the DOPC/Chol and DOPC/Chol/GLT1 liposomes was evaluated by dialysis against PBS (phosphate buffer volume 50 times larger than the sample volume), keeping the systems under constant magnetic stirring. After liposome purification by dialysis, samples were collected every hour over a period of 24 h and analyzed by UPLC to study the release profile of the previously encapsulated RSV. Each liposomal aliquot was diluted with MeOH (1:1 v/v) to break the lipid aggregates and enhance the release of RSV, and the resulting solution was filtered on PTFE membranes (4 mm × 0.2 µm; Sartorius). In order to determine the percentage of RSV released over the time period, a chromatographic analysis was carried out as described above for the EE% determination. During the whole experiment, samples were protected from light to avoid trans-resveratrol isomerization.

Bacterial Strains

The antimicrobial activity of RSV, both free and loaded in liposomes, was examined against two American Type Culture Collection (ATCC) strains of Staphylococcus aureus: ATCC 25923 (wild type strain) and ATCC 33591 (methicillin-resistant Staphylococcus aureus, MRSA). Both strains were retrieved from titrated frozen stocks, put in culture in fresh Mueller-Hinton (MH) broth, incubated at 37 °C for 18 h, and then sub-cultured on fresh MH agar plates to obtain fresh, single colony-forming units (CFUs).

Evaluation of the Liposome-Bacteria Interaction

The binding affinity of the neutral and cationic galactosylated liposomes to bacterial cells was investigated by DELS measurements, determining the ζ-potential variation of the analyzed suspension. S. aureus and MRSA were grown overnight in MH broth; they were then centrifuged to obtain a bacterial pellet, which was washed three times with diluted PBS (15 mM) to remove all traces of culture broth. Afterward, bacterial suspensions in PBS (15 mM) were diluted to reach the desired final density, assessed by optical density (OD) measurements. Measurements were carried out at 25 °C in 15 mM PBS at different liposome concentrations (0, 0.1, 0.3, 0.5, 0.7, and 1 mM in total lipids) in the presence or absence of 10⁵ CFU/mL of bacteria.

In Vitro Biofilm Formation and Crystal Violet Assay

S. aureus was grown in MH medium supplemented with 0.25% D-glucose at 1 × 10⁷ CFU/mL by serial dilutions, and submerged biofilm was established in flat-bottom 96-well microtiter plates (Thermo Scientific, NUNCLONE-Delta surface, Milan, Italy) at 37 °C for 96 h. Biofilm formation was measured by the Crystal Violet (CV) assay [13]. In brief, wells were washed with DPBS to remove non-attached bacteria and stained with 100 µL of 0.1% CV solution. After 45 min at room temperature, the plates were emptied and extensively washed with distilled water to remove the excess CV. For biofilm quantification, 50 µL of 95% ethanol was added to the wells to solubilize all biofilm-associated dye, and the absorbance at 530 nm was determined with a microplate reader (VICTOR-NIVO™, Perkin Elmer, Milan, Italy).

Demolition Assay on S. aureus Biofilm

The biofilm demolition activity of RSV, both free and loaded in neutral and galactosylated liposomes, was evaluated on the S. aureus wild type strain. RSV-loaded liposomes and free RSV (solubilized in DMSO) were sterilized through a 0.22 µm PES filter and diluted with DPBS to concentrations ranging from 25 µg/mL to 200 µg/mL.
S. aureus biofilm was left to grow for 96 h, as described previously. The plates were then incubated at 37 °C overnight with different concentrations of free or liposome-embedded RSV. On the fifth day, wells were emptied, washed twice with DPBS, and then 100 µL of 0.1% CV solution was added. After 45 min at room temperature, the plates were emptied and washed with distilled water to remove excess CV. For biofilm quantification, 50 µL of 95% ethanol was added to the wells to solubilize all biofilm-associated dye, and the absorbance at 530 nm was determined with a microplate reader (VICTOR-NIVO™, Perkin Elmer).

Anti-Adherence Assay

MH agar plates were prepared by spreading 500 µL of the S. aureus or MRSA overnight (ON) cultures, brought to a density between 1 × 10⁸ and 2 × 10⁸ CFU/mL by serial dilutions in MH broth, and dried under the sterile flow bench for 15 min. Then, 10 µL of a 1.5 µg/mL RSV solution, solubilized in DMSO or loaded in DOPC/Chol/GLT1 liposomes, was dropped onto the inoculated agar surface and dried as described above. In addition, 10 µL of DMSO and of empty liposomes was dropped onto control plates. Finally, the plates were incubated for 16 h at 37 °C and the anti-adherence activity was evaluated by the naked eye. For plates showing a transparent halo, the diameter of the transparent area was measured to estimate the inhibition of bacterial adherence associated with the investigated sample.

Determination of Minimum Inhibitory Concentration (MIC)

The MIC of the antimicrobials was determined using the standard broth microdilution method with slight modifications [29]. Briefly, to assess the MIC of free RSV (solubilized in DMSO) and of RSV loaded in DOPC/Chol/GLT1 liposomes, on Day 1 a few (5-6) CFUs from fresh S. aureus and MRSA plates were grown in 10 mL of MH medium ON at 37 °C. On Day 2, the ON culture was brought to a density of 0.8 × 10⁵-1.2 × 10⁶ bacteria/mL by serial dilutions in MH broth. The final density was estimated by measuring the OD at 600 nm. These diluted cultures were placed in a 96-well plate (flat bottom, one plate for each strain studied), where RSV, free or liposome-loaded, was added in triplicate at different concentrations, ranging from 50 µg/mL to 200 µg/mL, and grown ON at 37 °C. The starting bacterial density of the inoculum was verified by plating the final diluted cultures on fresh MH agar and determining the CFUs.

Cell Wall Damage Assay by Propidium Iodide Uptake

S. aureus and MRSA strains were cultured for 16 h in chamber slides (µ-Slide VI 0.4 ibiTreat, 80606, IBIDI GmbH, Gräfelfing, Germany) with free RSV at the respective MIC concentrations identified above (200 µg/mL for S. aureus and 100 µg/mL for MRSA). The following day, a propidium iodide solution (1 mg/mL) was diluted 1:1000 in each chamber slide well to reach a final concentration of 1 µg/mL. This dye cannot pass through intact cell membranes but can freely enter cells with compromised membranes. After 5 min, the samples were analyzed with an Olympus FV1200 confocal laser scanning microscope with a 20× air objective, an optical pinhole at 1 AU, and a multiline argon laser at 488 nm, a HeNe ion laser at 543 nm, and a blue diode laser at 405 nm as excitation sources. Propidium iodide was acquired using excitation at 515 nm and emission at a maximum of 615 nm. Confocal images were processed with ImageJ (National Institutes of Health, NIH, https://imagej.nih.gov/ij; https://imagej.net/software/fiji/, accessed on 29 November 2023).
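The MIC readout from such a plate can be expressed programmatically as the lowest tested concentration whose blank-corrected OD600 stays below a growth threshold. The sketch below is a minimal illustration; the OD readings and the threshold are hypothetical, not measured values:

```python
# Minimal sketch: read a MIC from broth-microdilution OD600 values.
# OD readings and the growth threshold are hypothetical illustration values.

def mic(od_by_conc: dict[float, float], od_blank: float,
        threshold: float = 0.05) -> float | None:
    """Return the lowest concentration (ug/mL) with no detectable growth,
    or None if growth occurs at every concentration tested."""
    inhibited = [c for c, od in od_by_conc.items()
                 if od - od_blank < threshold]
    return min(inhibited) if inhibited else None

# Hypothetical triplicate-averaged readings for free RSV vs. MRSA
plate = {50.0: 0.62, 100.0: 0.04, 150.0: 0.03, 200.0: 0.03}
print(mic(plate, od_blank=0.02))   # -> 100.0, i.e. MIC = 100 ug/mL
```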
Statistical Analysis

All statistical analyses were performed with GraphPad Prism version 5.0 and Stata software. Data were analyzed by one-way ANOVA with post hoc Bonferroni comparison tests (p < 0.05).

Liposome Preparation

Liposomes as delivery systems for RSV were formulated, as previously described by some of us, with a natural unsaturated phospholipid (DOPC) and cholesterol (Chol), in the presence or in the absence of the cationic galactosylated amphiphile GLT1 (Figure 1). Moreover, empty liposomes, both neutral and galactosylated, were prepared as reference samples.

DOPC was chosen because, on the basis of our experience, a phosphocholine with unsaturated chains makes it easy to obtain monodisperse RSV-loaded liposomes functionalized with amphiphiles [13,30]. As already explained by Aiello et al. [13], cholesterol in the lipid mixture enhances the stability of the lipid bilayer through the bilayer-tightening effect, inducing dense packing and increasing the orientational order of the lipid chains. In general, this leads to a more compact bilayer, with reduced permeability to water-soluble molecules, and increases the retention of entrapped drugs [31]. In this case, in particular, the presence of cholesterol allows a larger amount of GLT1 to be added to the formulation; in fact, in the absence of cholesterol, large quantities of GLT1 lead to micellar aggregates rather than liposomes [32-34]. We underline that the amount of cholesterol present in the liposomes (20%) is well below the solubility of cholesterol in DOPC, that is, the threshold at which it is able to form microcrystals within the lipid bilayer [35]. The cationic galactosylated amphiphile GLT1 was designed to enhance the interaction of the liposomes with S. aureus cells in two ways: on one hand, the cationic charge of the quaternary ammonium group allows an electrostatic interaction with the negatively charged bacterial membrane; on the other, the galactosyl moiety promotes specific liposome binding to carbohydrate-specific adhesins (i.e., lectins) overexpressed on both bacterial cells and biofilm [36].

For all liposomes, the hydrodynamic diameter (D_h), polydispersity index (PDI), ζ-potential, and RSV Entrapment Efficiency (EE%) were investigated. The results (Table 1) are consistent with those reported by Aiello et al. [13], demonstrating the reproducibility and reliability of the experimental procedures.

Stability Studies

To investigate the stability over time of the neutral and galactosylated liposomes, both empty and RSV-loaded, samples were stored at room temperature and protected from light sources to avoid the isomerization of trans-resveratrol to the less active cis isomer. The physical stability of the liposomes was investigated by following the particles' hydrodynamic diameters, PDIs, and ζ-potential values at different times, up to 14 days of storage. All collected data are reported in Figure 2.
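A simple way to screen such a time series for instability is to flag any measurement drifting beyond a tolerance band around the day-0 value. The sketch below is illustrative only; the values and tolerances are hypothetical, with the ζ-potential series shaped to mimic the LGR behavior described in the next paragraph:

```python
# Minimal sketch: flag colloidal instability in a DLS/DELS time series.
# The measurement values and the tolerance bands are hypothetical.

day0 = {"Dh_nm": 120.0, "PDI": 0.08, "zeta_mV": 30.0}
tolerance = {"Dh_nm": 0.15, "PDI": 0.25, "zeta_mV": 0.25}  # relative bands

series = {
    7:  {"Dh_nm": 123.0, "PDI": 0.09, "zeta_mV": 28.5},
    14: {"Dh_nm": 125.0, "PDI": 0.10, "zeta_mV": 21.0},
}

for day, values in series.items():
    for key, v in values.items():
        drift = abs(v - day0[key]) / abs(day0[key])
        if drift > tolerance[key]:
            print(f"day {day}: {key} drifted {drift:.0%} from day 0")
# -> only "day 14: zeta_mV drifted 30% from day 0" is flagged here,
#    mimicking a stable size/PDI with a drifting zeta-potential.
```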
The formulations were stable over two weeks of storage, without any significant change in size and PDI, except for LGR liposomes, for which a decrease in ζ-potential was observed. This effect may be due to the migration of RSV towards the surface, that is, towards a more hydrated environment: in these conditions, RSV can partially undergo ionization, with the negatively charged hydroxyl group oriented towards the lipid-water interface, thus making the potential less positive [30].

In Vitro Release Studies

With the goal of evaluating the release profiles of RSV from the neutral and galactosylated liposomes, an in vitro release study was performed by the dialysis method. Samples were examined by UPLC analysis to determine the percentage leakage of RSV over time, monitored for a period of 24 h. Figure 3 shows the amount of RSV released from the DOPC/Chol and DOPC/Chol/GLT1 liposomes. The leakage of RSV from DOPC/Chol liposomes was higher than that recorded for the galactosylated liposomes. This result could be related to a possible interaction between RSV and the galactosylated amphiphile [37], resulting in a reduced drug release rate compared with DOPC/Chol liposomes. However, both release curves showed a quite similar trend, characterized by a quick release during the first 7 h followed by a gradual, slow release up to 24 h. Final RSV leakages of ~60% and ~40% from the DOPC/Chol and DOPC/Chol/GLT1 liposomes were observed, respectively.
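Release curves with a fast initial phase followed by a plateau are often summarized by a first-order model, R(t) = Rmax(1 − e^(−kt)). The sketch below fits such a model to hypothetical cumulative-release points shaped to resemble the ~60% plateau of the DOPC/Chol formulation; it assumes scipy is available and is not a fit to the actual data:

```python
# Minimal sketch: fit a first-order release model R(t) = Rmax * (1 - exp(-k t))
# to cumulative-release data. The data points are hypothetical, shaped to
# resemble the ~60% plateau seen for the DOPC/Chol liposomes.
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, r_max, k):
    return r_max * (1.0 - np.exp(-k * t))

t_h = np.array([1, 2, 4, 7, 12, 24], dtype=float)          # hours
release_pct = np.array([18, 30, 44, 53, 58, 60], dtype=float)

(r_max, k), _ = curve_fit(first_order, t_h, release_pct, p0=(60.0, 0.3))
print(f"Rmax = {r_max:.1f}%  k = {k:.2f} 1/h  t1/2 = {np.log(2)/k:.1f} h")
```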
Liposomes-Bacteria Interaction

The electrostatic interaction of the neutral and cationic galactosylated liposomes with bacteria was investigated by ζ-potential measurements, to highlight each formulation's ability to interact with bacterial cells. For this purpose, ζ-potential analysis is considered a simple and powerful tool, thanks to the different ζ-potential values of bacteria and liposomes, which are determined by both the surface charge and the ions associated with the surface [38]. Indeed, bacteria always display a negative ζ-potential value, while liposomes show variable values according to the lipid components included in the formulation. Neutral liposomes composed of DOPC and Chol are characterized by a slightly negative ζ-potential, while cationic galactosylated liposomes composed of DOPC, Chol, and GLT1 show positive values. The interaction between bacteria and liposomes results in ζ-potential values different from those of bacteria and liposomes before mixing.

All the experiments were carried out keeping the bacterial density at 10⁵ CFU/mL and varying the liposomal concentration from 0.1 mM to 1 mM in total lipids. In this concentration range, the number of liposomes present in the solution is between 10¹¹ and 10¹² liposomes/mL [39], always larger than the number of bacteria. Thus, in the absence of specific interactions, the ζ-potential value of the mixture will reflect the ζ-potential of the liposomes. The results of the ζ-potential measurements are reported in Figure 4; the ζ-potential values of liposomes alone, as a function of concentration, are also reported as a reference. As expected, both S. aureus and MRSA showed a negative zeta potential, around −4 mV and −7 mV, respectively.

After the addition of a low concentration of DOPC/Chol liposomes, the ζ-potential of the mixed liposome-bacteria suspensions is slightly decreased, probably due to the mixing of liposomes with the less negative bacteria, both for MRSA and for S. aureus. In fact, the behavior of the mixed bacteria-liposome suspensions is very similar to that of liposomes alone, suggesting that the main contribution to the measured ζ-potential stems from the neutral liposomes, which do not interact with the bacteria. At 1 mM, no difference in ζ-potential can be observed because of the large excess of liposomes in the suspension.
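The quoted order of magnitude of 10¹¹-10¹² liposomes/mL can be reproduced from geometry alone, by dividing the number of lipid molecules in solution by the number needed to tile both leaflets of a single vesicle. In the sketch below, the area per lipid and the bilayer thickness are assumed typical values, not measured ones:

```python
# Minimal sketch: estimate liposome number density from total lipid
# concentration and vesicle geometry. Area-per-lipid and bilayer thickness
# are assumed typical values, not measured ones.
import math

N_A = 6.022e23
lipid_conc_M = 1e-4        # 0.1 mM total lipid (low end of the range used)
d_outer_nm = 100.0         # vesicle diameter (extruded through 100 nm pores)
area_per_lipid_nm2 = 0.7   # assumed mean head-group area
bilayer_nm = 4.0           # assumed bilayer thickness

r_out = d_outer_nm / 2
r_in = r_out - bilayer_nm
# Lipids needed to cover the outer and inner leaflet surfaces of one vesicle:
lipids_per_vesicle = 4 * math.pi * (r_out**2 + r_in**2) / area_per_lipid_nm2

lipids_per_mL = lipid_conc_M * N_A / 1000          # molecules per mL
print(f"~{lipids_per_mL / lipids_per_vesicle:.1e} liposomes/mL")
# -> ~7e+11 liposomes/mL at 0.1 mM; scales linearly to ~7e+12 at 1 mM,
#    consistent in order of magnitude with the range quoted above.
```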
On the contrary, for the DOPC/Chol/GLT1 formulation, while in the absence of bacteria the ζ-potential of the reference liposomal suspension increases with concentration, the different trend measured in the mixed liposome-bacteria suspension suggests an interaction between the bacteria and the cationic galactosylated liposomes. The electrostatic interaction between the galactosylated liposomes and the negatively charged bacteria is evident in the progressive reduction of the measured ζ-potential, which reaches a maximum close to 0.3 mM in total lipids and then decreases again, for both bacterial strains. Above 0.3 mM, the liposome-bacteria interaction probably saturates and the liposomes are mostly associated, so that further liposome addition does not promote any variation in the measured value. The slight decrease observed above 0.3 mM could be attributed to a screening effect.

Antimicrobial Properties of RSV

Anti-Biofilm RSV Activity on S. aureus Wild Type
To assess whether RSV, free or delivered by liposomes, has a different efficacy in reducing biofilm in the two different S. aureus strains, wild type and methicillin-resistant, the biofilm reduction induced by RSV was evaluated on the S. aureus wild type strain, following the same protocol described for MRSA biofilm by Aiello et al. [13] (Figure 5a). Firstly, the anti-biofilm activity of RSV was evaluated at different concentrations, ranging from 25 µg/mL to 200 µg/mL, determining the biofilm reduction by the Crystal Violet assay.

Figure 6 shows the CV absorbances related to S. aureus wild type biofilm demolition in the presence of RSV (free or liposome-embedded) at the concentration that gave the highest biofilm reduction (50 µg/mL). A higher absorbance indicates a larger amount of biofilm and, thus, a lower demolition capacity of the system investigated. The CV absorbance obtained for untreated mature biofilm was used as a reference. The experimental data demonstrate that empty liposomes (L and LG formulations) produce only a slight, statistically non-significant reduction in biofilm. In contrast, free RSV induces a biofilm reduction of about 38% compared with untreated biofilm (p < 0.05), confirming its ability to inhibit biofilm formation. The ability of free RSV to impair mature S. aureus wild type biofilm increases when it is encapsulated in galactosylated LG liposomes, LG-RSV being more effective than the L-RSV formulation. In fact, LG-RSV liposomes reduce the biofilm of S. aureus wild type to about half the value of untreated bacteria (a 54% reduction relative to untreated cells, p < 0.01), while L-RSV liposomes have an even slightly lower biofilm-reducing capacity (34%, p > 0.05) than free RSV. These results are consistent with those obtained by S. Aiello et al. [13] for MRSA; in that case as well, the galactosylated liposomal formulation LG-RSV displayed a good demolition capacity against mature MRSA biofilm, compared with the other liposomal formulations and free RSV. For this reason, the galactosylated formulation LG-RSV was selected for more in-depth investigations of its antibacterial properties against the two S. aureus strains.

Anti-Adherence Activity of RSV on S. aureus and MRSA

In order to assess whether the observed anti-biofilm effect could be ascribed to the anti-adherence or to the bacterial-growth-inhibiting properties of RSV, we performed a Zone Of Inhibition (ZOI) agar-based assay [40] (Figure 5b). This assay allowed us to measure the diameter of the adherence-inhibition halo induced by LG-RSV liposomes after overnight bacterial incubation on MH agar plates (see Materials and Methods). Figure 7 shows that LG-RSV liposomes (10 µL per plate of 1.5 µg/mL RSV loaded in LG liposomes) induced a clear anti-adherence effect on both bacterial strains, which was slightly stronger on the S. aureus wild type (mean inhibition halo diameter of 21 mm for S. aureus vs.
15.5 mm for MRSA, over two experiments); by contrast, free RSV induced only a faint inhibition halo. It is also worth noting that empty LG liposomes did not induce any inhibition of adhesion to the agar medium.

In the literature, it is frequently found that a compound displaying anti-adherence activity is also able to effectively reduce bacterial biofilm. Such a result has been reported by many authors, for many bacterial species, using natural products [41-43]. We can therefore surmise that a biofilm initially formed in the presence of RSV encapsulated in LG liposomes might be less stable and more prone to demolition. Our finding that treatment with RSV loaded in LG liposomes is also able to reduce an already mature S. aureus biofilm, most probably by affecting bacterial adherence to the substrate, likely reinforces this working hypothesis.

MIC Determination of S. aureus and MRSA

In light of the results obtained on the biofilm, we aimed to verify whether the encapsulation of RSV could improve not only the anti-adherence capacity but also its inhibiting activity on bacterial growth. In these experiments, unlike the previous ones, none of the LG-RSV concentrations assayed induced any inhibiting effect on the growth of either bacterial strain, while free RSV displayed bacteriostatic activity on both (Figure 8 shows a representative one of the three experiments performed). This result could be due to the slow release of RSV from the LG liposomes in the bacterial culture, which fails to reach an effective concentration, in contrast with free RSV, which is administered all at once in the well. It is also worth noting that the minimal concentration of free RSV able to inhibit bacterial growth was lower for MRSA (MIC 100 µg/mL) than for the S. aureus wild type (MIC 200 µg/mL, Table 2).

Table 2. Determination of the Minimum Inhibitory Concentration (MIC) of RSV, both free and loaded in liposomes (LG-RSV), on S. aureus wild type and MRSA bacteria. The table summarizes the results obtained after 24 h of culture in 96-well plates; aliquots of the transparent wells were put on MH agar plates and, after another 24 h, bacterial growth was examined.

Bacterial Strain | MIC of LG-RSV (µg/mL) | MIC of RSV (µg/mL)
S. aureus (ATCC 25923) | no inhibition up to 200 | 200
MRSA (ATCC 33591) | no inhibition up to 200 | 100

This different susceptibility to free RSV was confirmed also by the CFU counts of the samples found to be inhibited by 200 µg/mL RSV. After another 24 h of growth on MH agar, the CFU counts of these transparent wells showed roughly 10-fold fewer MRSA than S. aureus colonies (Figure 8).
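To illustrate how the demolition percentages and significance levels reported above can be obtained from raw plate readings, here is a minimal sketch with hypothetical Crystal Violet absorbance triplicates, using the one-way ANOVA named in the Statistical Analysis section (scipy assumed available):

```python
# Minimal sketch: percent biofilm reduction from Crystal Violet absorbances,
# plus a one-way ANOVA across treatments. Absorbance values are hypothetical,
# shaped to reproduce the ~38% (free RSV) and ~54% (LG-RSV) reductions.
from statistics import mean
from scipy.stats import f_oneway

a530 = {
    "untreated": [1.02, 0.98, 1.00],
    "RSV":       [0.63, 0.60, 0.64],   # ~38% reduction
    "LG-RSV":    [0.45, 0.47, 0.46],   # ~54% reduction
}

baseline = mean(a530["untreated"])
for name, values in a530.items():
    if name != "untreated":
        reduction = 100 * (1 - mean(values) / baseline)
        print(f"{name}: {reduction:.0f}% biofilm reduction")

f_stat, p = f_oneway(*a530.values())
print(f"one-way ANOVA: F = {f_stat:.1f}, p = {p:.2g}")
```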
aureus wild type and MRSA bacteria but does not inhibit their planktonic growth.In order to clarify this finding it will be necessary to deepen the analysis of the cell wall phenotype and the ultrastructure differences existing between planktonic and biofilm staphylococci and focus on the RSV interaction with these two diverse cell wall structures [46][47][48].The search for new antimicrobials often focuses on the possible damage induced by these compounds on the bacterial cell wall, as documented for many new synthetic and natural products [49,50].Therefore, we asked whether free RSV might also be able to induce damage to the bacterial cell wall, which would explain its anti-adherence and antibiofilm effects.To this aim, after overnight incubation with free RSV, S. aureus wild type and MRSA were stained with the non-viable dye propidium iodide (PI), which enters only into cells with damaged walls. According to the results of MIC determination, we treated the MRSA strain with a free RSV solution at half the concentration used for S. aureus wild type (100 vs. 200 µg/mL) and analyzed the slide with a confocal microscope. Figure 9A,B show that RSV-treated bacteria were all positive for PI staining, on the contrary, the untreated controls were positive approximatively for 10-20% of the total bacteria in the wells.This staining possibly says that RSV induces in the bacterial cell wall breaks or pores, through which the fluorochrome may enter.This different susceptibility to free RSV was confirmed also by the respective CFU counts of the samples that we found inhibited by 200 µg/mL RSV.After another 24 h of growth on MH agar, the CFU count of these transparent wells showed roughly a ratio of 10-fold less MRSA vs. S. aureus colonies (Figure 8). Actually, in our hands, in a very long series of overnight cultures in MH broth, the 600 nm OD of MRSA was constantly lower, by almost 30%, than that of the S. aureus wild type.This finding is only apparently surprising because we must carefully consider the overall bacterial fitness of the two S. aureus strains.While it is certainly advantageous for MRSA to be in vivo resistant to methicillin, in order to survive in hospitalized patients' infections, it is also true that, in normal growth conditions, its fitness may be lower than the S. aureus wild type strain, as observed in our experiments.The selective advantage of bacterial growth under specific stress conditions (like the use of antibiotics, nutrient starvation, the host's immune response, etc.) could be paid with the toll of a lower fitness in normal conditions of growth.Such a phenomenon has been defined as "fitness cost" and it was described several times for antibiotic resistance in bacteria [44,45]. However, the question remains as to why RSV loaded in LG liposomes inhibits biofilm better than free RSV while it is not able to inhibit the planktonic bacteria like, instead, free RSV does.Biofilms are known to cause chronic infections that are difficult to treat.Most antibiotics are developed and tested against bacteria in the planktonic state, on which they are effective, while they are ineffective against bacterial biofilms.This highlights the importance of investigating the anti-biofilm activity of candidate antibacterial agents, rather than extrapolating from the results of planktonic assays.In our hands RSV loaded in LG liposomes behaves exactly the opposite; it reduces the biofilm of S. 
However, the question remains as to why RSV loaded in LG liposomes inhibits the biofilm better than free RSV, while it is unable to inhibit planktonic bacteria as free RSV does. Biofilms are known to cause chronic infections that are difficult to treat. Most antibiotics are developed and tested against bacteria in the planktonic state, on which they are effective, while they are ineffective against bacterial biofilms. This highlights the importance of investigating the anti-biofilm activity of candidate antibacterial agents, rather than extrapolating from the results of planktonic assays. In our hands, RSV loaded in LG liposomes behaves in exactly the opposite way: it reduces the biofilm of S. aureus wild type and MRSA but does not inhibit their planktonic growth. In order to clarify this finding, it will be necessary to deepen the analysis of the cell wall phenotype and the ultrastructural differences between planktonic and biofilm staphylococci, and to focus on the interaction of RSV with these two different cell wall structures [46–48].

Cell Wall Damage Induced by RSV in S. aureus and MRSA

The search for new antimicrobials often focuses on the possible damage induced by these compounds on the bacterial cell wall, as documented for many new synthetic and natural products [49,50]. We therefore asked whether free RSV might also induce damage to the bacterial cell wall, which would explain its anti-adherence and anti-biofilm effects. To this aim, after overnight incubation with free RSV, S. aureus wild type and MRSA were stained with the non-viability dye propidium iodide (PI), which enters only cells with damaged walls.

According to the results of the MIC determination, we treated the MRSA strain with a free RSV solution at half the concentration used for S. aureus wild type (100 vs. 200 µg/mL) and analyzed the slides with a confocal microscope. Figure 9A,B show that the RSV-treated bacteria were all positive for PI staining; by contrast, only approximately 10–20% of the untreated control bacteria in the wells were positive. This staining suggests that RSV induces breaks or pores in the bacterial cell wall, through which the fluorochrome may enter.

In order to assess whether the positive staining was associated with bacterial cell death, after the microscope analysis we plated aliquots of the bacteria on fresh MH agar plates. The CFU determination (Figure 10) correctly reflects the bacterial numbers observed in the corresponding wells of the slide (Figure 9A,B, upper panels, phase contrast) and shows a roughly three-fold (p = 0.02) reduction in live bacteria for S. aureus wild type and a five-fold (p = 0.012) reduction for MRSA, as compared to the untreated ones. This is a confirmation of the growth-inhibiting properties of RSV and of the capacity of this stilbenoid polyphenol to affect bacterial cell wall integrity.
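A minimal sketch of how such fold reductions and p-values can be obtained from triplicate CFU counts follows. The counts are hypothetical placeholders, and Student's t-test on log-transformed CFUs is one reasonable choice, not necessarily the exact procedure used here.

```python
# Minimal sketch: fold reduction and significance from triplicate CFU counts.
# Counts are hypothetical placeholders, not data from this study.
import numpy as np
from scipy import stats

untreated = np.array([3.1e6, 2.8e6, 3.4e6])    # hypothetical CFU/mL
rsv_treated = np.array([1.0e6, 1.1e6, 0.9e6])  # hypothetical CFU/mL

fold_reduction = untreated.mean() / rsv_treated.mean()
# t-test on log10 CFUs, since plate counts are roughly log-normally distributed
t_stat, p_value = stats.ttest_ind(np.log10(untreated), np.log10(rsv_treated))

print(f"fold reduction: {fold_reduction:.1f}x, p = {p_value:.3f}")
```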
Regarding this result, we are reassured by the extensive literature documenting the ability of propidium iodide to specifically identify bacterial cells whose membranes have been damaged [51]. Many articles indeed focus precisely on the membrane of S. aureus [52].

However, it is worth noting that these cell wall breaks do not prevent RSV-treated bacteria from growing when transferred to fresh medium, so the damage may be a reversible, or transient, phenomenon. The ability of RSV to alter the permeability of the bacterial membrane has also been observed for other natural compounds; in fact, Cutro et al. recently reported an increase in the permeability of the S. aureus membrane after treatment with an Essential Oil (EO) [50]. The authors show that the EO produces changes in lipid membrane packing, increases fluidity, and increases the access of water to the interior of the membrane, also affecting planktonic bacteria. As is well known, EOs contain hundreds of compounds, whereas we are describing the effect of a single component. Nevertheless, further investigation of the S. aureus membrane and cell wall after treatment with RSV is required.

Conclusions

The results presented in this paper are useful building blocks for elucidating the antibacterial activity of RSV, both free and delivered in cationic galactosylated liposomes, on two strains of S. aureus, wild type (ATCC 25923) and methicillin-resistant (ATCC 33591). We confirmed that free RSV has weak anti-adhesive activity and good bacteriostatic activity. These results are important for clarifying, as recommended by a recent review [5], the conflicting data reported in the literature about RSV activity. The bacteriostatic and anti-adhesive activities of RSV are presumably related to its ability to form pores in the bacterial cell wall. However, the presence of such pores does not impair the ability of the bacteria to replicate in the absence of RSV.

We extended the physico-chemical characterization of the galactosylated liposomes, both empty and RSV-loaded, with new investigations; in particular, we studied the colloidal stability of the formulations and the release of RSV over time. Studying the liposome–bacteria interaction, we pointed out the importance of the galactosylated moiety for the interaction with S. aureus and MRSA.

With regard to the biological activity, we demonstrated for the first time that the inclusion of RSV in liposomes functionalized with GLT1 greatly amplifies the anti-adhesive (S. aureus, MRSA) and anti-biofilm (S. aureus) properties while completely knocking down the bacteriostatic ones. We can reasonably assume that, to exert its antibacterial activity, …
Figure 2. Colloidal properties over time of the liposomes (1 mM in total lipids) in PBS. D_h and PDI are the hydrodynamic diameter and the polydispersity index, respectively, calculated by the cumulants method. Solid bars correspond to RSV-loaded liposomes and striped bars to the empty ones. The error bars associated with D_h, PDI, and ζ-pot are the standard deviations of three repeated measurements on three different samples.

Figure 4. The ζ-potential values of a 10^5 CFU/mL dispersion of bacteria in PBS 15 mM as a function of added liposome concentration. (a) Blue circle: L (DOPC/Chol) liposomes alone; white triangle: MRSA in the presence of L liposomes; black triangle: SA in the presence of L liposomes. (b) Red circle: LG (DOPC/Chol/GLT1) liposomes alone; white triangle: MRSA in the presence of LG liposomes; black triangle: SA in the presence of LG liposomes.

Figure 5. Pictorial representation of RSV-loaded galactosylated liposome activity: (a) anti-biofilm activity on the mature biofilm of S. aureus; (b) anti-adherence activity on S. aureus and MRSA.

Figure 6. Anti-biofilm activity of RSV, free and liposome-loaded, on the S. aureus wild type mature biofilm. A statistically significant difference was found only in the comparison between untreated bacteria and LG-RSV-treated ones (** indicates p < 0.01) and between untreated and free RSV-treated ones (* indicates p < 0.05). + symbols indicate the mean value of each series.

Figure 7. Anti-adherence properties of RSV (free and loaded in liposomes) on S. aureus wild type and MRSA bacteria. Empty LG liposomes and DMSO did not induce any visible inhibition halos.

Figure 9. Propidium iodide staining of S. aureus wild type (A) and MRSA (B) bacteria after 16 h of incubation with free RSV (200 µg/mL for S. aureus wild type and 100 µg/mL for MRSA) to assess the possible cell wall damage induced by RSV. The phase contrast picture (at 40× magnification) is reported in each of the two upper panels, while the PI staining is shown in the lower ones.

Figure 10. Determination of S. aureus wild type and MRSA CFUs after 16 h of RSV treatment.

Table 1. Physicochemical features of empty and RSV-loaded liposomes (20 mM total lipids) in PBS (pH 7.4). DLS and DELS measurements were performed upon dilution of the samples with PBS 150 mM and 15 mM, respectively, down to 1 mM in total lipids.
Endoplasmic Reticulum Stress Triggers Autophagy

Eukaryotic cells have evolved strategies to respond to stress conditions. For example, autophagy in yeast is primarily a response to the stress of nutrient limitation. Autophagy is a catabolic process for the degradation and recycling of cytosolic, long-lived, or aggregated proteins and excess or defective organelles. In this study, we demonstrate a new pathway for the induction of autophagy. In the endoplasmic reticulum (ER), accumulation of misfolded proteins causes stress and activates the unfolded protein response to induce the expression of chaperones and proteins involved in the recovery process. ER stress stimulated the assembly of the pre-autophagosomal structure. In addition, autophagosome formation and transport to the vacuole were stimulated in an Atg protein-dependent manner. Finally, Atg1 kinase activity reflects both the nutritional status and autophagic state of the cell; starvation-induced autophagy results in increased Atg1 kinase activity. We found that Atg1 had high kinase activity during ER stress-induced autophagy. Together, these results indicate that ER stress can induce an autophagic response.

Autophagy is a cellular degradation process for long-lived proteins and unnecessary or damaged organelles (1,2). During autophagy, double-membrane vesicles termed autophagosomes are formed, which sequester cytosolic proteins and/or organelles as cargoes and then fuse with the vacuole or lysosome. After fusion, the inner membrane vesicles (subsequently termed autophagic bodies) are released into the vacuole/lysosome lumen, and the contents are degraded by resident hydrolases (2,3). This process is ubiquitous in eukaryotes from yeast to mammals and is essential for normal cellular development and differentiation (1,4). Autophagy occurs at a basal level and can be significantly induced, depending on the cell type, when necessary. In yeast, for example, autophagy is induced in response to nutrient starvation; following breakdown of the cargo, the resulting macromolecules are presumably released from the vacuole lumen for reuse in the cytosol and supply the source for continued biosynthesis. Similarly, in higher eukaryotes, autophagy plays an important role in survival during starvation. In addition, mounting evidence shows that autophagy is associated with various pathophysiological conditions.
For example, autophagy may remove aggregate-prone proteins, such as mutant huntingtin and α-synuclein, which cause neurodegenerative disorders (5–7). Autophagy also functions in tumor suppression, possibly by removing damaged organelles to reduce the production of reactive oxygen species. In addition, autophagy is involved in the host immune response to invasion by certain bacterial and viral pathogens (8,9).

Genetic analyses reveal that degradative autophagy shares mechanistic components with the biosynthetic cytoplasm to vacuole targeting (Cvt) pathway in yeast (10,11). The Cvt pathway is highly selective and specifically transports at least two hydrolases, Ape1 (aminopeptidase I) and Ams1 (α-mannosidase), to the vacuole after sequestration within double-membrane Cvt vesicles (12–14). The protein components that function in these autophagy-related pathways are named Atg (15). Most of the Atg proteins localize to a perivacuolar site called the pre-autophagosomal structure (PAS), where the autophagosome and Cvt vesicle are thought to form (16,17).

In eukaryotic cells, most proteins are synthesized either on soluble ribosomes or on ribosomes attached to the ER. Misfolded cytosolic and nuclear proteins are typically tagged with ubiquitin and degraded via the proteasome (18,19). The accumulation of misfolded proteins in the ER induces the unfolded protein response (UPR), which results in the expression of chaperones and other proteins that act as folding catalysts (20). Several studies have reported a linkage between autophagy and ER function. For example, the early secretory pathway is required for autophagy, possibly supplying membrane for autophagosome formation (21–23). Moreover, it has been shown that fragmented ER membrane structures are transported to the vacuole within autophagosomes under starvation conditions; however, no relationship has been described between autophagy and ER stress (24). In this study, we show that ER stress in yeast cells induces an autophagic response.

EXPERIMENTAL PROCEDURES

Strains, Media, and Reagents-The S. cerevisiae strains used in this study are AHY001 (25), SEY6210 (26), WHY1 (14), and YTS178 (27). Strains JLY27 (SEY6210 atg12Δ::Kan), UNY102 (SEY6210 TAP-Atg1), and UNY104 (UNY102 atg13Δ::LEU2) were generated through standard molecular genetics, using a PCR-based procedure. Yeast strains were grown or incubated in media as described previously (28). For induction of ER stress, cells were incubated in SMD medium (synthetic medium containing 0.67% yeast nitrogen base with auxotrophic amino acids and 2% glucose) containing 3 mM dithiothreitol (DTT) or 2 µg/ml tunicamycin (TM) as described previously (29,30). All chemical reagents were from Sigma unless otherwise indicated.

Immunoblotting-Yeast cells were grown in SMD medium at 30°C to A600 = 0.5, and either TM or DTT was added to each culture to induce ER stress. For starvation conditions, cells were cultured in SD-N medium (starvation medium containing 0.17% yeast nitrogen base without amino acids and 2% glucose). At the indicated times, cells were collected, and proteins were precipitated by the addition of trichloroacetic acid.
Protein extracts were subjected to SDS-PAGE, followed by immunoblotting with anti-Atg8 (31), anti-Pgk1 (a generous gift from Dr. Jeremy Thorner, University of California, Berkeley), anti-Kar2 (a generous gift from Dr. Jeffrey L. Brodsky, University of Pittsburgh, PA), anti-Ape1 (13) or anti-Atg1 antiserum (33), or anti-GFP antibodies (Covance Research Products, Berkeley, CA).

Fluorescence Microscopy-Yeast cells expressing fluorescent protein-fused chimeras were grown for 4 h in SMD in the presence or absence of DTT or TM, or in SD-N. Fluorescence microscopy was carried out as described previously (28).

Atg1 Kinase Assay-An in vitro phosphorylation assay using TAP-tagged Atg1 was performed as described previously (34).

Protein Incorporation Assay-Yeast cells were incubated with or without either TM, DTT, or CCCP (100 µM) at 30°C for 4 h. Cells were then incubated at 30°C in SMD medium with 1.0 µCi of [35S]methionine. At each time point, cells were collected, and proteins were precipitated with trichloroacetic acid, resuspended in 0.1 N NaOH, and incubated at 40°C for 10 min. The radioactivity was measured in a liquid scintillation counter (Beckman Coulter LS6500, Fullerton, CA).

RESULTS

The PAS is the putative site where Atg components localize to compose the vesicle-forming machinery for autophagy and the Cvt pathway (16,17). Atg8 is a structural component of these vesicles and also a marker protein for the PAS. In order to test the effect of ER stress on autophagy, we first examined PAS formation by fluorescence microscopy by monitoring GFP-Atg8. Cells were treated with two distinct ER stressors, DTT, an inhibitor of disulfide bond formation, and TM, an inhibitor of N-glycosylation, at concentrations that induce ER stress. In the absence of DTT and TM, wild-type cells expressing GFP-Atg8 showed a punctate dot at the PAS under both growing and starvation conditions, because the PAS is formed for both the Cvt pathway and autophagy (Fig. 1A). In addition, we detected increased staining of the vacuole lumen in starvation conditions, reflecting the up-regulation of Atg8 and increased autophagy. In the presence of DTT and TM in growing conditions, there was an increase in the number of cells displaying punctate structures; 50% of the cells treated with DTT and 46% of the cells treated with TM showed punctate dots versus 19% of the cells with no treatment (Fig. 1A). In contrast to wild-type cells, in the absence of the Cvt pathway-specific component Atg11, GFP-Atg8 is not located at the PAS but is diffusely localized throughout the cytosol in vegetative conditions, whereas PAS localization is restored, followed by transport to the vacuole, under starvation conditions, reflecting an induction of autophagy (28) (Fig. 1A). We took advantage of the Cvt pathway-specific defect of the atg11Δ mutant to monitor the effect of DTT and TM. Treatment with DTT or TM in rich medium elicited localization of GFP-Atg8 at the PAS; essentially the same number of cells displayed punctate dots as seen with the wild-type strain (45% with DTT treatment and 46% with TM), suggesting that ER stress resulted in autophagic induction. Localization of GFP-Atg8 at the PAS was confirmed by the observation that TM- and DTT-induced GFP-Atg8 colocalized with RFP-Ape1, which is a PAS marker (data not shown).
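The puncta percentages above (for example, 50% with DTT versus 19% untreated) invite a simple test of proportions. The sketch below runs a chi-square test on hypothetical cell counts, since the paper does not report how many cells were scored per condition.

```python
# Minimal sketch: testing whether the fraction of puncta-positive cells differs
# between conditions. Cell counts are hypothetical, not data from this study.
from scipy.stats import chi2_contingency

n_scored = 200                       # assumed number of cells scored per condition
dtt_positive = int(0.50 * n_scored)  # 50% puncta-positive with DTT
untreated_positive = int(0.19 * n_scored)  # 19% puncta-positive untreated

table = [
    [dtt_positive, n_scored - dtt_positive],              # DTT: puncta+, puncta-
    [untreated_positive, n_scored - untreated_positive],  # untreated
]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2g}")
```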
To check whether the PAS localization of GFP-Atg8 was dependent on the autophagic process under ER stress conditions, we examined GFP-Atg8 localization in atg12Δ cells. GFP-Atg8 did not show PAS localization in either growing or starvation conditions in the atg12Δ strain, as reported previously (17,35). Similarly, in atg12Δ cells, treatment with TM or DTT was unable to cause localization of GFP-Atg8 to the PAS (Fig. 1A). These results suggested that PAS formation induced under ER stress was dependent on the normal autophagic machinery.

FIGURE 1. PAS formation was stimulated by ER stress. A, GFP-Atg8 localization under ER stress conditions. Wild-type (SEY6210), atg11Δ (AHY001), and atg12Δ (JLY27) cells expressing CEN plasmid-borne GFP-Atg8 from the endogenous promoter were grown in SMD to A600 = 0.5. Cells were subsequently kept in SMD or shifted to SMD with 3 mM DTT or 2 µg/ml TM or to SD-N. After 4 h, cells were observed by fluorescence microscopy. Arrowheads mark the PAS. B, Atg8 lipidation was enhanced under ER stress conditions. Wild-type and atg12Δ cells were incubated for 4 h in SMD with or without 3 mM DTT or 2 µg/ml TM or in SD-N as described in A. Atg8 and Atg8-PE were separated by 12% SDS-PAGE in the presence of 6 M urea, followed by immunoblotting with anti-Atg8 antiserum or anti-Pgk1 as a loading control. WT, wild type.

We extended our analysis by examining Atg8 conjugated to phosphatidylethanolamine (Atg8-PE) under ER stress conditions. When autophagy is induced in response to nutrient starvation, higher levels of Atg8/Atg8-PE are observed relative to growing conditions (31,36). In nutrient-rich medium, the addition of either DTT or TM resulted in increased amounts of Atg8-PE as well as nonlipidated Atg8, at levels higher than those in starvation conditions (Fig. 1B). In contrast to the result in wild-type cells, Atg8-PE was not detected in atg12Δ cells either in starvation conditions or in nutrient-rich conditions following treatment with DTT or TM, although the expression level of Atg8 was increased significantly. This result was consistent with the microscopy data and suggested that ER stress induced autophagy in an Atg protein-dependent manner.

Next, we confirmed that treatment with DTT and TM induced the UPR by monitoring the level of Kar2, an ER chaperone that is induced by the UPR (37). As expected, Kar2 was induced following treatment with either DTT or TM (Fig. 2A). In contrast, Kar2 was not substantially induced in starvation conditions, which induce autophagy in the absence of these ER stressors. These results indicate that the UPR was induced under our experimental conditions of ER stress.

We then extended our analysis by examining the transport of autophagosomes into the vacuole under conditions of ER stress. As a marker protein, we monitored GFP-Atg8-PE (referred to hereafter as GFP-Atg8), which remains associated with the inner autophagosome membrane and is transported into the vacuole concomitantly with the contents of the autophagic body (36,38). After lysis of the autophagic body by vacuolar hydrolases, the GFP moiety is proteolytically cleaved from GFP-Atg8 and is relatively stable in the vacuolar lumen, whereas Atg8 is degraded. Accordingly, the delivery of autophagosomes to the vacuole can be followed by detecting free GFP through immunoblotting. In wild-type cells, free GFP was quickly generated by starvation-induced autophagy (SD-N; Fig. 2B). This processing was blocked in an autophagy-defective atg1Δ strain, verifying its dependence on the autophagic machinery. A greatly reduced level of processing was seen under growing conditions (SMD).
Cells that were treated with DTT or TM for 6 h showed the appearance of free GFP even in nutrient-rich conditions; however, the generation of free GFP was considerably lower and delayed relative to starvation conditions. atg1Δ cells did not show processed GFP following treatment with DTT or TM (Fig. 2B), although analysis of Kar2 indicated that the UPR was induced in the atg1Δ strain at a level similar to that seen in the wild-type strain (data not shown). These results indicated that processing of GFP-Atg8 occurred in an autophagy-dependent manner under conditions of ER stress.

To further analyze the autophagic process under ER stress conditions, we monitored the processing of precursor Ape1 (prApe1). Ape1 is a vacuolar resident hydrolase and is synthesized as a precursor form in the cytosol. Precursor Ape1 is enwrapped within autophagosomes or Cvt vesicles, depending on the nutrient conditions, and is transported to the vacuole as a specific cargo via the Cvt pathway and autophagy. After transport to the vacuole, prApe1 is activated by removal of its N-terminal propeptide. In order to eliminate the constitutive maturation of prApe1 that occurs via the Cvt pathway, we used the vac8Δ mutant. Without Vac8, prApe1 maturation is blocked in growing conditions, whereas maturation is normal in starvation conditions (39). When vac8Δ cells were treated with DTT or TM, prApe1 maturation was observed, although it was again less efficient than that observed in starvation conditions (Fig. 2C). In contrast, neither starvation nor ER stress caused prApe1 processing in atg1Δ cells. Again, we observed induction of the UPR in both vac8Δ and atg1Δ cells based on analysis of Kar2 under these conditions (data not shown). This result was consistent with that observed with GFP-Atg8 processing. We confirmed that atg1Δ cells grew as well as the wild-type cells after the drug treatment, indicating that the defect of atg1Δ cells in GFP-Atg8 processing and prApe1 maturation was not due to loss of viability under ER stress conditions (data not shown). Taken together, we conclude that autophagy was induced in response to ER stress.

FIGURE 2. Autophagy was induced under ER stress. A, UPR induction. Wild-type (SEY6210) cells were incubated in SMD with or without 3 mM DTT or 2 µg/ml TM or in SD-N as described in the legend to Fig. 1A. At the indicated times, proteins were precipitated with trichloroacetic acid and resolved by SDS-PAGE, followed by immunoblotting using anti-Kar2 serum. B, GFP-Atg8 processing under ER stress. Wild-type (SEY6210) and atg1Δ (WHY1) cells expressing CEN plasmid-borne GFP-Atg8 from the endogenous ATG8 promoter were grown and processed as above, followed by immunoblotting with anti-GFP antibodies. C, precursor Ape1 maturation occurred under ER stress. The vac8Δ (YTS178) and atg1Δ (WHY1) cells were grown and processed as above, followed by immunoblotting with anti-Ape1 antiserum. WT, wild type.

To gain a further mechanistic understanding of the induction process, we examined Atg1 kinase activity. In nutrient-rich conditions or in the absence of Atg13, Atg1 kinase has lower activity in vitro (40). Starvation or treatment with the autophagy inducer rapamycin results in increased Atg1 kinase activity. When purified from wild-type cells treated with rapamycin, TAP-tagged Atg1 was able to phosphorylate the myelin basic protein substrate to a much greater extent than TAP-Atg1 isolated in the absence of rapamycin (Fig. 3A).
In contrast, the phosphorylation of myelin basic protein was greatly reduced when it was incubated with TAP-Atg1 from atg13Δ cells, even when the cells had been treated with rapamycin, as reported previously (40). When cells were treated with DTT or TM, TAP-Atg1 displayed a high level of kinase activity based on phosphorylation of myelin basic protein, similar to the result seen with rapamycin treatment. The increased kinase activity of Atg1 further suggested that autophagy was induced during ER stress.

DTT and TM interfere with protein biosynthesis in the ER. Accordingly, these treatments might induce a starvation response by blocking the production of plasma membrane nutrient transporters. To eliminate this possibility, we investigated the uptake and incorporation of amino acids in cells treated with ER stressors. We incubated cells with or without TM or DTT for 4 h and examined the cellular uptake and incorporation of radioactive methionine into nascent proteins (Fig. 3B). Wild-type cells subjected to mock treatment with buffer showed a linear increase in acid-precipitable radioisotope over time. Cells treated with DTT showed a level of incorporation similar to those without treatment. TM-treated cells displayed a reduced level of incorporation, but it was significantly higher than that seen with cells treated with the proton ionophore CCCP. These results suggest that cells were able to import extracellular amino acids and that they were not experiencing starvation conditions due to a block in nutrient uptake.
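The incorporation assay above amounts to comparing the slopes of acid-precipitable counts over time. The sketch below fits such time courses by least squares and compares the rates; the time points and counts are hypothetical placeholders, not the data behind Fig. 3B.

```python
# Minimal sketch: comparing [35S]methionine incorporation rates by linear fit.
# Time points and counts are hypothetical placeholders, not data from this study.
import numpy as np

time_min = np.array([0, 15, 30, 45, 60])
cpm_mock = np.array([120, 950, 1900, 2800, 3750])  # hypothetical mock-treated
cpm_tm   = np.array([110, 700, 1350, 2000, 2650])  # hypothetical TM-treated

rate_mock = np.polyfit(time_min, cpm_mock, 1)[0]  # slope, cpm per minute
rate_tm   = np.polyfit(time_min, cpm_tm, 1)[0]

print(f"TM incorporation is {rate_tm / rate_mock:.0%} of mock")  # ~70%
```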
DISCUSSION

Eukaryotic cells are able to respond to various types of stress caused by changes in the extracellular environment. Autophagy is one type of stress response. This process is normally maintained at low levels in nutrient-rich conditions, whereas it is highly activated in response to nutrient starvation (2,4). In this study, we describe a new induction pathway of autophagy; DTT and TM, which are commonly used to cause ER stress (29), allowed cells to induce autophagy in vegetatively growing conditions. Treatment with either drug results in accumulation of misfolded/unfolded proteins in the ER lumen. Subsequently, the UPR is activated to stimulate the expression of proteins that can relieve the stress condition (20,41). Although the early secretory pathway is known to be required for starvation-induced autophagy (21–23), no relationship had been noted between autophagy and ER stress. Previous microarray results have characterized up-regulation of the transcription of hundreds of genes by ER stress. Among them, genes related to autophagy, including ATG8, ATG14, and APE1, are also transcriptionally induced (41). In addition, it was very recently shown that in mammalian cells, polyglutamine- and drug-induced ER stress facilitated the conversion of LC3, a mammalian homolog of Atg8, from LC3-I to -II, which is an important step for autophagosome formation (42); however, it was still unclear whether functional autophagy actually occurred under these conditions of ER stress.

Here we show that ER stress induced autophagy. We took advantage of the defect in PAS assembly in atg11Δ cells and monitored induction of autophagy by following GFP-Atg8 localization in response to ER stress. Atg8 is conserved from yeast to mammalian cells and is commonly used as a marker for the PAS and the autophagosome (17). Unlike the situation in normal growing conditions, ER stress facilitated PAS formation in atg11Δ cells in an Atg protein-dependent manner (Fig. 1A). Next, we biochemically monitored the delivery of autophagosomes to the vacuole using two established marker proteins, GFP-Atg8 and Ape1. The delivery of autophagosomes, measured by the formation of free GFP or maturation of precursor Ape1, was observed in response to ER stress, although the level of delivery was less efficient than that seen under starvation conditions (Fig. 2). Finally, Atg1 purified from cells under ER stress conditions showed essentially the same increase in kinase activity as that from cells treated with the autophagy inducer rapamycin (Fig. 3A). It remains to be determined whether ER stress affects Tor kinase or whether it acts indirectly on a downstream component, or even on a different kinase, such as protein kinase A, that might also be involved in the Atg1 signaling pathway (43).

FIGURE 3. A, Atg1 kinase activity was increased during ER stress. Wild-type cells expressing TAP-Atg1 (UNY102) were grown in YPD with or without either 0.2 µg/ml rapamycin (Rap), 3 mM DTT, or 2 µg/ml TM for 2 h. As a control, atg13Δ cells expressing TAP-Atg1 (UNY104) were also grown and treated with rapamycin. TAP-Atg1 was detected by immunoblotting with anti-Atg1 antibody. TAP-Atg1 was immunoprecipitated with IgG-Sepharose; a substrate protein, myelin basic protein, was mixed with the resulting immunocomplexes, and then Atg1 kinase activity was assayed as described under "Experimental Procedures." B, cells treated with ER stressors were able to import and incorporate radioactive amino acids. Wild-type cells were incubated in SMD with no drug (closed circles), 3 mM DTT (open squares), 2 µg/ml TM (closed squares), or 100 µM CCCP (open circles) for 4 h. After incubation, radiolabeled methionine was added to the cell suspensions, and its uptake was measured at the indicated times as described under "Experimental Procedures." Error bars indicate the S.D. of three independent experiments. WT, wild type.

To eliminate the possibility that DTT or TM induced autophagy indirectly by generating a starvation response, we monitored the amino acid uptake of cells treated with these ER stressors. DTT-treated cells displayed essentially the same level of uptake and incorporation of radioactive methionine as nontreated cells, suggesting that these cells did not experience starvation conditions due to inhibition of the biosynthesis of plasma membrane permeases. Similarly, there was apparently no significant interference with the cell's translational machinery. The uptake and incorporation of methionine in TM-treated cells was less efficient, being about 70% of that seen with untreated or DTT-treated cells (Fig. 3B); however, this level was well above that seen with CCCP treatment.

Ire1 is an ER membrane protein that senses the accumulation of unfolded proteins in the ER lumen. Active Ire1 splices HAC1 mRNA, and the spliced mRNA, which encodes a transcription factor, is translated into a functional protein. The active Hac1 protein induces the expression of genes encoding proteins involved in the UPR (44). We observed that under ER stress conditions, depletion of either Ire1 or Hac1 blocked autophagy based on GFP-Atg8 processing and prApe1 maturation; however, both ire1Δ and hac1Δ cells also showed low viability under ER stress conditions. Thus, it was not possible to determine whether the Ire1-Hac1 signaling pathway is involved in the induction of autophagy under ER stress conditions.

The ER-associated degradation (ERAD) pathway is the primary degradation mechanism for handling misfolded proteins that cause ER stress (19).
During ERAD, unfolded proteins that accumulate in the ER lumen are exported (dislocated) into the cytosol through the translocon channel and then degraded by the proteasome. We speculate that autophagy functions as a backup to ERAD if degradative substrates overwhelm the ERAD capacity (30,32). It is known that there is a regulatory link between ERAD and the UPR (30). One of the next questions to address is how autophagy is associated with these two pathways under starvation and other stress conditions. In summary, our current report shows that autophagy is induced by ER stress. Previous studies have shown that stress caused by the accumulation of cytosolic protein aggregates can mediate autophagy in mammalian cells (6,7). To our knowledge, this is the first report of ER stress-induced autophagy. Further study will be needed to gain additional details about the regulatory association of this type of autophagy with the UPR and ERAD.
Proteomic analysis of drug-susceptible and multidrug-resistant nonreplicating Beijing strains of Mycobacterium tuberculosis cultured in vitro

The existence of latent tuberculosis infection (LTBI) is one of the main obstacles hindering eradication of tuberculosis (TB). To better understand molecular mechanisms and explore biomarkers for the pathogen during LTBI, we cultured strains of Mycobacterium tuberculosis (Mtb) under stress conditions, mimicking those in the host granuloma intracellular environment, to induce entry into the non-replicating persistence stage. The stresses included hypoxia, low pH (5.0), iron deprivation (100 μM of 2,2′-dipyridyl) and nutrient starvation (10% M7H9 medium). Three Mtb strains were studied: two clinical isolates (drug-susceptible Beijing (BJ) and multidrug-resistant Beijing (MDR-BJ) strains) and the reference laboratory strain, H37Rv. We investigated the proteomic profiles of these strains cultured under stress conditions and then validated the findings by transcriptional analysis. NarJ (respiratory nitrate reductase delta chain) was significantly up-regulated at both the protein and the mRNA level in all three Mtb strains. The narJ gene is a member of the narGHJI operon encoding all nitrate reductase subunits, which play a role in nitrate metabolism during the adaptation of Mtb to stressful intracellular environments and the subsequent establishment of latent TB. The identification of up-regulated mRNAs and proteins of Mtb under stress conditions could assist the development of biomarkers, drug targets and vaccine antigens.

Introduction

Tuberculosis (TB), caused by Mycobacterium tuberculosis (Mtb), is a major global health problem and one of the leading causes of death worldwide [1–3]. Currently, the major obstacle to the control and eradication of TB is the ability of Mtb to persist as a life-long infection in humans in a dormant or non-replicating persistence (NRP) state [3,4]. This condition is asymptomatic and is known as latent TB infection (LTBI) [4]. In a small proportion of LTBI cases (5-10%), the pathogen might be reactivated, leading to active TB [1]. Humans are the only reservoir of this pathogen, which needs to be considered for TB control [5]. Of particular concern are strains belonging to the East Asian or Beijing (BJ) lineages of Mtb, which have caused numerous outbreaks worldwide [6]. The hyper-virulent BJ lineage has been emerging throughout the world, in association with disease outbreaks and antibiotic resistance [7,8]. Multidrug-resistant BJ strains (MDR-BJ) show resistance to several current anti-tuberculosis drugs.

Metabolic processes play important roles in the pathogenesis of Mtb [9]. However, the metabolic processes that occur on entering dormancy, leading to survival and drug resistance in the host, are poorly understood [9]. Multiple stress conditions within the host, such as oxygen depletion and immune responses, induce mycobacteria to enter a dormancy stage during which they are phenotypically drug resistant. Study of the gene regulation and protein expression of Mtb may clarify the processes by which Mtb achieves dormancy. This adaptation may produce more virulent phenotypes of Mtb that can survive within host granulomas [4,10]. Mycobacterium tuberculosis H37Rv is the most studied strain of TB in research laboratories [11]. However, a laboratory strain might not exhibit the same virulence properties as clinical strains [4]. To better understand the NRP stage of Mtb, in vitro models have been designed to mimic intracellular stress conditions.
In this study, we aimed to mimic multiple stresses in vitro by combining conditions of hypoxia, acidic pH, iron deprivation and nutrient starvation. These conditions induce Mtb to enter the NRP stage. We investigated the protein expression patterns (using proteomic approaches and quantitative mRNA analysis) of the Mtb reference strain H37Rv and two clinical strains, BJ and MDR-BJ, cultured in this way.

Bacterial strains, media and growth conditions

The Mycobacterium tuberculosis H37Rv laboratory strain (NCBI:txid1773) and two clinical isolates, drug-susceptible Beijing (BJ) (NCBI:txid634955) and multidrug-resistant Beijing (MDR-BJ), were used. The strains used in this study cannot be linked to patient information, a condition approved by the Khon Kaen University Ethics Committee (No. HE621448). The three strains of Mtb were cultured in 20 ml of Middlebrook 7H9 (M7H9) medium (Sigma) supplemented with 0.2% glycerol and 10% BBL™ Middlebrook OADC Enrichment (BD, US). Bacterial cells were grown for 7 days with shaking at 37 °C. The number of culturable cells was estimated by the plate count technique. To mimic the multiple stress conditions, the media were prepared as follows: M7H9 medium with supplements was diluted to 10% with sterile water, the iron chelator (2,2′-dipyridyl) was added to 100 μM, methylene blue was added to 1.5 μg/ml, and 36% HCl was added to adjust the pH to 5.0. Log-phase Mtb cells (10^5 CFU/ml) were then inoculated into 20 ml of multiple-stress medium in parafilm-sealed test tubes, which were incubated at 37 °C without shaking. Decolorization of the methylene blue was used to indicate oxygen depletion (3-4 weeks), which causes Mtb cells to enter the non-replicating/dormant stage after 4 weeks. For controls, log-phase Mtb cells (10^5 CFU/ml) were inoculated into 20 ml of M7H9 medium in unsealed tubes and incubated at 37 °C with shaking for 4 weeks.
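To summarize the culture conditions in one place, the sketch below encodes the control and multiple-stress media as a small configuration structure; the field names are our own shorthand, and the values simply restate the conditions described above.

```python
# Minimal sketch: the culture conditions above as a configuration dict.
# Field names are invented shorthand; values restate the text above.
CONDITIONS = {
    "control": {
        "medium": "M7H9 + 0.2% glycerol + 10% OADC",
        "inoculum_cfu_per_ml": 1e5,
        "sealed": False,            # aerated, with shaking
        "temp_c": 37,
        "duration_weeks": 4,
    },
    "multiple_stress": {
        "medium": "10% M7H9 (nutrient starvation)",
        "ph": 5.0,                        # adjusted with 36% HCl
        "iron_chelator_uM": 100,          # 2,2'-dipyridyl
        "methylene_blue_ug_per_ml": 1.5,  # hypoxia indicator
        "inoculum_cfu_per_ml": 1e5,
        "sealed": True,                   # parafilm-sealed, no shaking -> hypoxia
        "temp_c": 37,
        "duration_weeks": 4,
    },
}
```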
Protein preparation, in-gel digestion and LC-MS/MS

Bacterial cell pellets were harvested by centrifugation after 4 weeks of incubation. Protein was then isolated from the cells using lysis buffer (0.5 M Na2HPO4, 5 M NaCl, 1 M imidazole, 100 mg/ml lysozyme, 1X protease inhibitor cocktail (Amresco, USA), 1 M dithiothreitol) with 0.1 mm zirconia/silica beads (BioSpec Products, Inc.). Protein concentrations were determined by the Bradford protein assay, using Bradford reagent (Bio-Rad Laboratories, Inc.). To separate the proteins, 5 μg of total protein from each sample was electrophoresed through 12.5% SDS-PAGE. Gels were then cut, and in-gel digestion was performed. To investigate the protein expression patterns of Mtb, liquid chromatography tandem mass spectrometry (LC-MS/MS) analysis and associated bioinformatics analysis were performed as previously described [12]. The detailed methods for this part are given in the supplementary materials and methods.

RNA extraction and cDNA synthesis

Bacterial cells were harvested by centrifugation after 4 weeks of incubation. RNA was isolated using Trizol reagent (Invitrogen, USA) with silica beads, according to the manufacturer's instructions. Five hundred nanograms of RNA was treated with DNase I (Invitrogen, USA) to remove genomic DNA, according to the manufacturer's instructions. cDNA was then synthesized using SuperScript III Reverse Transcriptase (Invitrogen, USA) according to the manufacturer's instructions.

Quantitative real-time PCR (qRT-PCR)

Mycobacterium tuberculosis genes that were differentially expressed between the active stage and the NRP stage, including narJ, tcrY, uvrC, Rv1356c and Rv3134c, were selected for further study. The primer sequences used in this study are listed in Table 1. The expression of the Mtb genes was examined by qRT-PCR. The PCR reaction was performed on a real-time PCR instrument (Applied Biosystems QuantStudio 6 Flex Real-Time PCR System) using SsoFast™ EvaGreen® Supermix (Bio-Rad Laboratories, Inc., USA). The relative expression of the genes was calculated by the 2^-ΔΔCt method. The 16S rRNA gene was used as an internal control.

Table 1. Primer pairs used to amplify cDNA (gene, primer direction and position, and sequence).
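A minimal sketch of the 2^-ΔΔCt calculation follows, using the 16S rRNA gene as the internal control as above; the Ct values are hypothetical placeholders, not measurements from this study.

```python
# Minimal sketch: relative expression by the 2^-ddCt (Livak) method.
# Ct values are hypothetical placeholders, not data from this study.

def fold_change(ct_target_test, ct_ref_test, ct_target_ctrl, ct_ref_ctrl):
    """2^-ddCt with the 16S rRNA gene as the reference (internal control)."""
    d_ct_test = ct_target_test - ct_ref_test   # normalize the test sample
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl   # normalize the control sample
    dd_ct = d_ct_test - d_ct_ctrl
    return 2 ** (-dd_ct)

# Hypothetical Cts: a target gene in NRP-stage vs. active-stage cultures
print(fold_change(ct_target_test=22.0, ct_ref_test=14.0,
                  ct_target_ctrl=26.0, ct_ref_ctrl=14.0))  # 16.0-fold up
```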
Construction of protein-protein interaction networks

STRING (Search Tool for the Retrieval of Interacting Genes/Proteins) software was used to construct protein interaction networks. Functional partners of proteins were predicted by STRING version 11.0 (https://string-db.org/) with a high-confidence interaction score (0.700).

Statistical analysis

GraphPad Prism 5.0 was used for all data analysis. All experiments were done in triplicate, and values are expressed as mean ± SD. Student's t-test was used to analyze the difference between two groups. Statistically significant differences between groups are indicated by *p < 0.05; **p < 0.01; ***p < 0.001.

Proteins detected only during the NRP stage or the active stage

To identify proteins found only in Mtb cultures under multiple stress conditions (i.e., during the NRP stage), the proteomic profiles of the NRP and active stages were compared. During the NRP stage of the H37Rv, BJ and MDR-BJ strains, 1,946, 1,849 and 1,947 proteins were detected, respectively. Of these, 220, 52 and 197 proteins were found only in the NRP stage of the H37Rv, BJ and MDR-BJ strains, respectively (Fig. 1A-C). The details and complete lists of these proteins and their relative quantities are given in Table S1. The same analyses identified 1,750, 1,891 and 1,799 proteins expressed by the H37Rv, BJ and MDR-BJ strains, respectively, during growth in culture. There were 24, 94 and 49 proteins unique to the active stage of the H37Rv, BJ and MDR-BJ strains, respectively (Fig. 1A-C).

Common unique proteins in the NRP stage

Ten proteins were found only in the NRP stage in all three Mtb strains (Fig. 2). Further analysis revealed only one protein, NarJ, that is characterized in the reference strain, whereas the other nine proteins are classified as uncharacterized proteins. Fifty-seven, 33 and 35 proteins were unique to the H37Rv, BJ and MDR-BJ strains, respectively, during the NRP stage, but further analysis found only 1, 1 and 2 proteins, respectively, that are well characterized in the database of the M. tuberculosis H37Rv reference strain and were significantly up-regulated. These proteins were Rv3134c (in the H37Rv strain), TcrY (BJ strain), and Rv1356c and UvrC (both in the MDR-BJ strain). The details and complete lists of proteins in the three Mtb strains, and their relative quantities, are given in Table 2 and Table S2.

Validation of proteomic analysis by quantitative real-time PCR

The shared and strain-specific proteins found by LC-MS/MS analysis to be over-expressed during the NRP stage in the three different Mtb strains were then further validated by measuring mRNA expression levels using qRT-PCR. The relative mRNA expression level of narJ was up-regulated in all strains. The fold-changes of the expression level in the H37Rv, BJ and MDR-BJ strains were 2.397 (p > 0.05), 51.536 (p < 0.01) and 16.661 (p < 0.01), respectively. These results correlated with the relative intensities (log2-fold) of the proteins analyzed by LC-MS/MS. The intensities (log2-fold) of NarJ in the H37Rv, BJ and MDR-BJ strains were 17.953 (p < 0.001), 18.730 (p < 0.001) and 14.614 (p < 0.001), respectively (Fig. 3). In addition, the relative mRNA expression levels of Rv3134c, tcrY, Rv1356c and uvrC were significantly up-regulated, with fold-changes of 3.344, 2.057, 11.735 and 10.941, respectively (Fig. 4). The qRT-PCR results for all proteins confirmed the results of the LC-MS/MS analysis.

Discussion

There have been few comparisons of the proteomic profiles of BJ and MDR-BJ strains in the same experiment. These strains are very similar, differing only by several mutations associated with drug resistance. However, our proteomic profiles revealed a large number of strain-specific proteins: 38 (33 + 5) in BJ and 183 (35 + 148) in MDR-BJ (Fig. 2). This suggests that the multidrug-resistant phenotype might be associated with a large number of proteins.

The NarJ protein was found only in the NRP stage of BJ and MDR-BJ. This protein is a subunit of the nitrate reductase enzyme, with which bacteria use nitrate as a final electron acceptor instead of oxygen [14]. Nitrate reductase activity occurs at a low level during aerobic growth of Mtb and significantly increases during the NRP stage under hypoxia [14]. The nitrate reductase activity of Mtb is also correlated with virulence and is associated with growth under anaerobic conditions [15]. NarJ is a promising biomarker for the dormancy stage, especially in the BJ strain. STRING analysis of narJ, the respiratory nitrate reductase delta chain, revealed the protein association networks involving this gene, including proteins encoded in the narGHJI gene cluster. The narG/Rv1161, narH/Rv1162 and narI/Rv1164 genes encode the respiratory nitrate reductase alpha, beta and gamma chains, respectively. These proteins interact with each other at the same time and place. The other proteins shown in Fig. S1 with high and significant association scores with NarJ included narX/Rv1736c, nitrate reductase; narK2/Rv1737c, nitrate/nitrite transporter; narU/Rv0267, integral membrane nitrite extrusion protein (nitrite facilitator); nirD/Rv0253, nitrite reductase [NAD(P)H] small subunit; typA/Rv1165, GTP-binding translation elongation factor TypA (tyrosine phosphorylated protein A) (GTP-binding protein); and moeA1/Rv0994, molybdopterin biosynthesis protein. Network nodes represent proteins; spliced isoforms or post-translational modifications are collapsed, i.e., each node represents all the proteins produced by a single protein-coding gene locus. Thus, Mtb nitrate respiration involves the genes encoded by the narGHJI operon and is also associated with narK1, narK2, narK3, narL, narX and narU [15]. The narGHJI operon encodes nitrate reductase [14]. To survive under stress, narK2/Rv1737c is up-regulated during anaerobic conditions [15], which allows the transport of nitrate into, and nitrite out of, the cell [16]. This transport may help generate ATP, which is necessary for Mtb survival in the absence of oxygen as a terminal electron acceptor [16]. Respiratory reduction of nitrate through the narGHJI operon could thus provide energy for survival of Mtb in the latent stage [16]. Nitrate respiration occurs within phagosomes in macrophages because of the anaerobic environment there [14].
Regarding the composition of the nitrate reductase operon family in mycobacteria: NarG and NarH bind to the phagosomal plasma membrane via the interaction between a hydrophobic patch of NarH and NarI, which is bound to the cell membrane [15]. NarJ is a specific ligand that recognizes and binds to NarG, and forms a complex with NarGH, successively facilitating the incorporation of the [4Fe-4S] cluster and the molybdenum cofactor. NarJ dissociates from the complex before it interacts with the membrane-bound NarI. Nitrate passes through the cell wall into the cytoplasm via the transmembrane protein NarK2; it is reduced via NarG, and nitrite is released outside the macrophage via an unknown exporter [15]. Therefore, nitrate can be used as an alternative nitrogen source when the availability of other nitrogen sources is limited [16]. Further studies are needed concerning the availability and utilization of other nitrogen sources during the latent infection stage. Such studies would provide deeper insight into the possible link between dormancy and nitrogen assimilation and assist in identifying future drug targets.

Rv3134c, found only in the H37Rv strain during the NRP stage, is annotated as a universal stress protein that could play an important role in adaptation to hypoxia, during which it is up-regulated by at least 10-fold [17]. Moreover, it participates in the phosphorelay of the two-component regulatory system DevRS [17]. A member of the dormancy regulon Rv3132c/3133c/3134c, it is induced in response to hypoxia; in a strain lacking the Rv3134c gene, most genes of the dormancy regulon are not induced and the hypoxic regulation of hspX is eliminated. These results suggest a possible role for Rv3132c/3133c/3134c in the latent stage of Mtb [17,18]. The regulon might give insights into the dormant or NRP stage of Mtb infection [17,18]. Rv3134c was significantly correlated with the expected six proteins, including devR/Rv3133c, devS/Rv3132c, dosT/Rv2027c, Rv0079, hspX/Rv2031c and Rv1738. The network of direct interconnections between Rv3134c and other proteins (STRING version 11.0) with a high confidence score (0.700) is shown in Fig. S2.

TcrY, a protein seen only in the BJ strain during the NRP stage, is annotated as a two-component sensor kinase. It is associated with a two-component regulatory signal-transduction system, tcrYX, which acts through phosphorylation [19,20]. Two-component signal transduction systems process the signals arising from the stressful environments encountered by the bacterium [20]. M. tuberculosis H37Rv cells lacking the tcrYX regulon show increased virulence, with significantly shorter survival times in SCID mice [19,20]. The network of direct high-confidence (0.700) interconnections between TcrY and other proteins, as analyzed by STRING version 11.0, is shown in Fig. S3.

The MDR-BJ strain-specific proteins during the NRP stage were Rv1356c and UvrC, annotated as a hypothetical protein and excinuclease ABC (subunit C, nuclease), respectively. Rv1356c is significantly up-regulated after 24 h of nutrient starvation in the H37Rv strain [21]. UvrC is a DNA-repair enzyme that catalyzes the excision of UV-damaged nucleotide segments, producing oligomers containing the modified base(s) [22]. In fact, Mtb is exposed to a variety of environmental and endogenous physical and chemical stresses that could produce genotoxic damage, but it possesses an efficient system to counteract the harmful effects of DNA-damaging assaults [22].
STRING analysis (version 11.0, with high confidence scores (0.700)) also revealed the interconnections of Rv1356c and UvrC with other proteins (Fig. S4). In the right-hand group, we found that Rv1356c was associated with Rv1353c, Rv1354c and moeY/Rv1355c, together with cya/Rv1625c, a central protein interconnecting the two groups with a high confidence score. In the left-hand group, there are three connectors: rpoB/Rv0667, DNA-directed RNA polymerase (beta chain) RpoB (transcriptase beta chain) (RNA polymerase beta subunit); pykA/Rv1617, probable pyruvate kinase PykA; and ndkA/Rv2445c, probable nucleoside diphosphate kinase NdkA (NDK) (NDP kinase) (nucleoside-2-P kinase). However, we want to focus on rpoB, a protein involved in resistance to first-line drugs (rifampicin) [23]. RpoB connects with gyrA/Rv0006, DNA gyrase (subunit A) GyrA (DNA topoisomerase (ATP-hydrolysing)) (DNA topoisomerase II) (type II DNA topoisomerase), and gyrB/Rv0005, DNA gyrase (subunit B) GyrB (DNA topoisomerase (ATP-hydrolysing)) (DNA topoisomerase II) (type II DNA topoisomerase), both associated with resistance to second-line drugs (fluoroquinolones) [23]. GyrAB in turn links to UvrA and UvrB, the Uvr ABC system proteins that, together with UvrC, are also known as excinuclease ABC.

Strains of the M. tuberculosis Beijing lineage are globally distributed and are often associated with severely pathogenic and virulent drug-resistant TB. The ability of Mtb to enter the NRP stage renders it even more resistant to drugs and allows it to persist in the host without causing symptoms. To deal with the problem of LTBI, an understanding of the molecular mechanisms used by the pathogen is necessary. Strain-specific proteins identified in this study, particularly NarJ, could be used in the search for new therapeutic targets to prevent and control TB, especially LTBI.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Relationship between Demographic Characteristics, Depression and Insomnia with Restless Legs Syndrome: a Study of Adults Aged 17-70 Years in Yazd 2019 Received: 26 Dec 2019 Accepted: 24 Feb 2020

Introduction: Restless legs syndrome (RLS) is a neurological-motor disorder in which most patients tend to shake their legs during sleep and describe the sensation as unpleasant. The aims of this study were to determine the prevalence of RLS and its relationship with demographic characteristics, depression, and insomnia, and to compare these variables between the groups with and without RLS. Methods: This was a case-control analytic study. The sample consisted of 429 adults aged 17-70 years who had been referred to the psychiatric and neurological clinics of Yazd (center of Iran) in 2019. Participants were selected by the cluster sampling method. Research tools included a demographic questionnaire, the Beck Depression Inventory (BDI-II), the Insomnia Severity Index (ISI), and the International Restless Legs Syndrome Questionnaire (IRLSQ). The data were analyzed with SPSS-21 using the chi-square test, Pearson correlation coefficient, independent t-test, and linear regression. The significance level was set at 0.05. Results: The mean and standard deviation of the age of participants was 34.43 ± 10.82 years. The mean age of the group with RLS was 36.07 ± 10.95 years, while that of the group without RLS was 33.92 ± 10.75 years. The prevalence of RLS in adults was 23.5% (n= 101); it was 32% (n= 66) in women and 28.7% (n= 35) in men. The t-test showed that patients with RLS had higher degrees of depression and insomnia than those without RLS (p<0.05). Multiple linear regression also showed that insomnia (β= 0.36), age (β= 0.13), and depression (β= 0.15) had a significant effect on the RLS score. Conclusion: The prevalence of RLS among adults in Yazd is high. Severe insomnia, depressed mood, and aging are important factors in predicting this disease. Accordingly, early detection, prevention, and treatment of this disorder in adults are necessary.

Introduction Restless Legs Syndrome (RLS) is a motor-neurological disorder; patients with the syndrome have a strong urge to move their legs during sleep, describing it as an unpleasant feeling that is worsened by inactivity and often causes insomnia (1,2). This urge to move the legs is associated with unpleasant sensations that patients liken to the feeling of worms moving on the skin, insects in the bones, water on the legs, or an electrical current in the legs (1). This syndrome is one of the most common sleep disorders and, according to the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), is now recognized as a disorder in its own right (3), yet it often goes undiagnosed or is misdiagnosed (4). According to reports, about 80 percent of patients with this disorder visit doctors because of their symptoms, but only 6 percent of them are diagnosed with RLS. After diagnosis, only 13% of patients are treated with appropriate medications. The disease has four diagnostic criteria developed by the International Restless Legs Syndrome Study Group (IRLSSG): 1) an urge to move the legs to reduce the unpleasant sensation in the legs; 2) onset of symptoms with sitting and inactivity; 3) relief of symptoms by moving the legs; and 4) onset or exacerbation of symptoms at night (4,5).
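The four IRLSSG criteria are conjunctive: a positive screen requires all of them. As a small illustration (a sketch only; the criterion wording is paraphrased from the list above), this can be expressed as:

```python
IRLSSG_CRITERIA = (
    "urge to move the legs to relieve an unpleasant sensation",
    "symptoms begin or worsen with sitting and inactivity",
    "symptoms are relieved by moving the legs",
    "symptoms begin or worsen at night",
)

def meets_irlssg_criteria(answers: dict) -> bool:
    """All four criteria must be endorsed for a positive screen."""
    return all(answers.get(criterion, False) for criterion in IRLSSG_CRITERIA)

# A respondent endorsing only three of the four criteria screens negative.
responses = {c: True for c in IRLSSG_CRITERIA[:3]}
print(meets_irlssg_criteria(responses))  # False
```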
Patients often walk at night to relieve the unpleasant symptoms in their legs and fall asleep only in the early morning, thereby experiencing sleep deprivation and daytime drowsiness that may interfere with their daily functioning (1,2). RLS has many consequences that reduce the quality of sleep and life, increase the risk of cardiovascular disease and even death, and it is closely associated with depressive disorder (6). The results of some studies have shown a significant relationship between RLS and depression. This syndrome appears to prevent patients from enjoying their lives and has negative effects on their social activities, family life, and occupation. For example, these patients are reluctant to engage in activities that require prolonged sitting because prolonged periods of rest exacerbate the unpleasant symptoms in their legs (2). Although RLS is not as life-threatening as heart disease or diabetes, it causes chronic insomnia and daytime drowsiness and impairs patients' sleep quality (3). Mucsi et al. (7) also stated in their study that RLS reduces patients' quality of life because of sleep disturbance. However, in some studies, such as that by Al-Arabi et al. (8), no relationship was found between RLS and depression. Therefore, considering the contradictory findings about the relationship between depression, sleep disorders, and RLS, and the lack of comparable domestic studies on Iranian society, research in this field seems necessary. Reports indicate that the disorder is present in 2 to 15 percent of the general population and that the risk for women is about 11 to 27 percent higher than for men (9). In a meta-analysis performed in Iran, the overall prevalence of RLS in adults was 30% (95% CI: 23-37) (10). Hosseini et al. (11) reported a 27.9% prevalence of this syndrome in cardiovascular patients, and Amiri et al. (12) reported a 67.2% prevalence in hemodialysis patients. Given the above, few studies in Iran have examined factors such as demographic status, depression, and insomnia in RLS patients, and these need to be investigated. Therefore, the aims of this study were to determine the prevalence of RLS and its relationship with demographic characteristics, depression, and insomnia among adults aged 17-70 years living in Yazd, in order to obtain a report on the overall prevalence of this disorder in Iranian society and its related factors.

Study Type and Sampling Method This case-control analytic study was conducted from May 10 to September 12, 2019. The study population included all patients with RLS who had been referred to the psychiatric and neurological clinics of Yazd in 2019. According to the 2016 census statistics, the city had a population of about 1,200,000 (28). Using Cochran's formula, a sample size of 384 was needed for the study, but since some questionnaires might not be fully completed, a sample size of 450 was considered. Cluster sampling was used: among all districts of the city, districts 1 and 2 of Yazd were randomly selected; then two neighborhoods were randomly selected from each district, and adults from these neighborhoods were examined. The sample size was calculated using Cochran's formula, n₀ = Z²pq/d², with a finite-population correction n = n₀/(1 + (n₀ − 1)/N), where N = 1,200,000, Z = 1.96, p = q = 0.5, and d = 0.05, which yields n₀ ≈ 384.
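As a quick arithmetic check on the reported sample size, the sketch below evaluates Cochran's formula with the stated inputs; the finite-population correction is shown explicitly, although it is negligible at N = 1,200,000.

```python
import math

N = 1_200_000   # population of Yazd (2016 census)
Z = 1.96        # critical value for 95% confidence
p = q = 0.5     # maximum-variance assumption
d = 0.05        # margin of error

n0 = (Z**2 * p * q) / d**2        # infinite-population estimate: 384.16
n = n0 / (1 + (n0 - 1) / N)       # finite-population correction

print(round(n0, 2), math.ceil(n))  # 384.16 385; the paper reports 384
```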
Selection Criteria After obtaining permission from the Ethics Committee of Yazd University of Medical Sciences (Code: IR.SSU.REC.1398.051), the researcher visited the selected neighborhoods and explained the aims of the research to everyone; if an individual wished to participate in the study, the interview was conducted. Inclusion criteria for RLS subjects were: 1) a mild to high syndrome severity score (scores between 5 and 12) on the IRLSQ; 2) a diagnosis of RLS made by a psychiatrist or neurologist; 3) age 18 to 75 years; and 4) no history of cardiovascular, renal, or diabetic disease (self-reported). Exclusion criteria were: 1) a low syndrome severity score (scores between 0 and 4) on the IRLSQ; 2) a motor or perceptual disability that could interfere with completing the questionnaire; and 3) lack of consent to participate in the research. Healthy subjects were also selected based on criteria such as appropriate age (18 to 75 years), completion of informed consent, and psychiatric examination confirming the absence of RLS or mental disorder.

Instruments The following tools were used to collect data. a) Demographic Information: This was a researcher-made questionnaire for collecting demographic data such as age, sex, educational status, economic status, marital status, income, weight, and height. b) International Restless Legs Syndrome Questionnaire (IRLSQ): This questionnaire was developed by the International Study Group for RLS; it examines RLS and its severity and has 4 questions. The questionnaire is scored on a four-point Likert scale: always = 3, most often = 2, rarely = 1, and never = 0. The total score is the sum of all item scores and ranges from 0 to 12. The higher the score, the greater the degree and severity of RLS, and vice versa. A score below 4 indicates the absence of RLS; a score between 4 and 8 indicates mild severity, and a score between 8 and 12 indicates severe RLS. This questionnaire is a standard tool, and its validity and reliability have been established in previous studies (13). For example, Hemmati and Alidousti (14) obtained a reliability of 0.95 using Cronbach's alpha. In this study, Cronbach's alpha coefficient for this questionnaire was 0.89. c) Beck Depression Inventory (BDI-II): For the past 35 years, the Beck Depression Inventory has been the most widely used diagnostic tool for depression in patients who have received a clinical depression diagnosis. BDI-II is a newer version of the original Beck Depression Inventory, designed to measure depression in adults and adolescents above 13 years of age over the preceding two weeks. Because the original BDI covered only 6 of the 9 depression criteria, it was revised in 1996 to align more closely with the DSM-IV. The questionnaire has 21 items, each scored from 0 to 3, with total scores ranging from 0 to 63. Respondents fall into four groups: 0 to 13 (minimal), 14 to 19 (mild), 20 to 28 (moderate), and 29 to 63 (severe depression). This standard questionnaire is validated worldwide; its Cronbach's alpha coefficient is 0.87 and its test-retest reliability is 0.74 (15). In this study, Cronbach's alpha coefficient for this questionnaire was 0.93. d) Insomnia Severity Index (ISI): The Insomnia Severity Index (ISI) was designed by Morin in 1993.
It is a brief self-assessment tool that measures the patient's perception of insomnia in nighttime sleep. The scale consists of seven items that assess difficulty in initiating and maintaining sleep (nighttime and early-morning wakefulness), satisfaction with the current sleep pattern, interference with daily functioning, the severity of impairment attributed to sleep problems, and the degree of distress or worry caused by the sleep problem. Participants rate each ISI item on a 5-point scale (0 = never to 4 = very high). Scores range from 0 to 28, with higher scores indicating a greater perception of insomnia. The scale has excellent internal consistency (Cronbach's alpha = 0.90), and the intraclass correlation coefficient showed excellent agreement in the English version (ICC= 0.762, CI= 0.481-0.890) (16). The alpha coefficient of this questionnaire was 0.92 in the present study.

Data analysis SPSS-21 was used to analyze the data. At the data-entry stage, 21 of the 450 questionnaires collected were excluded from the study because of incompleteness. The remaining 429 questionnaires were examined using descriptive statistics such as frequency (N), percentage (%), mean (M), and standard deviation (SD). Finally, the data were analyzed using inferential statistics: the chi-square test, Pearson correlation coefficient, independent t-test, and linear regression. Table 1 shows the demographic characteristics of the participants with and without RLS. The mean and standard deviation of age in the group with RLS was 36.07 ± 10.95, and in the group without RLS 33.92 ± 10.75. The prevalence of RLS in adults was 23.5% (n= 101); it was 32% (n= 66) in women and 28.7% (n= 35) in men. Adult characteristics with and without RLS The chi-square test showed no significant differences in gender, economic status, marital status, income, or smoking between participants with and without RLS (p >0.05). *Chi-square test Correlation matrix of main research variables Before determining the prediction of the dependent variable, it is necessary to test the linear relationships between the independent and dependent variables; Pearson's correlation coefficient was used for this purpose. As seen in Table 2, in most cases (except socioeconomic status) there was a significant positive correlation between the independent variables of the study and RLS (p <0.05). Thus, the greater the depression or the severity of insomnia in adults, the greater the RLS (p <0.01). Likewise, the older the participants or the higher their body mass index (BMI), the greater the severity of RLS (p <0.05). The status of the main research variables The Kolmogorov-Smirnov and Shapiro-Wilk tests showed that the research variables had a normal distribution (p >0.05), so the data were analyzed with parametric tests. The results in Table 3, obtained from the t-test, showed a significant difference in depression and insomnia severity between the two groups with and without RLS: patients with RLS had more severe depression and insomnia than those without RLS (p <0.05), but there were no significant differences in age, BMI, or SES between the two groups (Figure 1). Findings of the Regression Analysis for RLS The contents of Table 4 were obtained by stepwise regression analysis, in which RLS was the dependent variable and age, BMI, socioeconomic status, depression, and insomnia were the predictor variables.
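The diagnostics reported below (Durbin-Watson, tolerance, and VIF) can be reproduced with standard tools. The sketch assumes a hypothetical data file rls_survey.csv with one row per participant and columns named after the study variables; it fits a single OLS model rather than the full stepwise procedure, which is enough to show where the diagnostics come from.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("rls_survey.csv")  # hypothetical file and column names
predictors = ["insomnia", "age", "depression", "bmi", "ses"]

X = sm.add_constant(df[predictors])
model = sm.OLS(df["rls_score"], X).fit()
print(model.rsquared)  # the paper reports R^2 = 0.16 for its final model

# Durbin-Watson on the residuals: values between ~1.5 and 2.5 are read
# as showing no problematic autocorrelation.
print("Durbin-Watson:", durbin_watson(model.resid))

# VIF per predictor (the constant is skipped): values near 1 indicate
# low collinearity among the independent variables.
for i, name in enumerate(X.columns):
    if name != "const":
        print(name, "VIF:", round(variance_inflation_factor(X.values, i), 2))
```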
Initially, the error-independence check using the Durbin-Watson test gave DW = 1.86, which lies between 1.5 and 2.5, indicating a lack of autocorrelation in the errors; the assumption of independent observations is therefore accepted, and the fitted model can be considered suitable. The tolerance values and variance inflation factors (VIF) were close to 1 and above 0.7, and the maximum VIF in the regression was 1.4, well below 2. These two indices indicate that collinearity between the independent variables is low and that the standard errors of the regression coefficients are not inflated. The results in Table 4 show that the three steps of insomnia, age, and depression (F= 28.9; R² = 0.16; P= 0.01) predict 16% of the variance in RLS. In addition, the stepwise regression results show that the insomnia variable, with β= 0.36 (95% CI: 0.14-0.23), has the greatest capability for predicting RLS; insomnia alone accounts for 13% of the variance in RLS. Depression with β= 0.15 (95% CI: 0.011-0.06) and age with β= 0.13 (95% CI: 0.01-0.05) also had significant predictive power for RLS, in that order. The other variables are not listed in Table 4 because they played no significant role in predicting RLS.

Discussion The present study investigated the prevalence of RLS and its associated psychological factors in Yazd. Initial results indicated that the prevalence of RLS was 23.5% in adults (32% in women and 28.7% in men). In a study of the Iranian adult population, Fereshtehnejad et al. (17) reported an average prevalence of RLS of 6% and stated that 6 out of every 1000 Iranians had RLS. In comparison, the prevalence of the syndrome in North America and Europe is reported to be between 5.5% and 11.6%, and in Asian populations between 1% and 7.5% (18). Overall, the results indicate that US and Asian studies report a higher prevalence of RLS than European studies. Although this syndrome is found in all races, many believe it to be more prevalent in white populations (12). Other results indicated no significant differences in gender, economic status, marital status, income, smoking, age, body mass index, or socioeconomic status between participants with and without RLS. However, there was a significant difference in depression and insomnia severity between the two groups, meaning that patients with RLS had more depression and insomnia than the group without RLS. Yilmaz et al. (19) showed that RLS patients had greater anxiety and depression scores compared with a control group. According to Theorell-Haglöw et al. (20), depression and insomnia are more common among women in the general population, which would be expected to produce more women than men with these symptoms. Consistent with the reported results, no significant difference was reported between the prevalence of RLS and gender (21), but in two large worldwide studies, the prevalence of RLS was reported to be higher in women than in men (22,23). In their study, Farajzadeh et al. (9) reported that depression rates in RLS patients were higher than in those without the syndrome. Habibzadeh et al. (24) reported that sleep quality in RLS patients was significantly more disturbed and worse than in those without the syndrome.
Insomnia, which accompanies RLS, is characterized by dissatisfaction with sleep quantity or quality and is accompanied by symptoms such as difficulty initiating or maintaining sleep (frequent waking or difficulty returning to sleep) or early-morning waking with inability to return to sleep (25). Other results showed a significant positive correlation between age, body mass index, depression, insomnia, and RLS. The older the participants or the higher their body mass index (BMI), the greater the risk of RLS (95% probability); likewise, the higher the degree of depression or the severity of insomnia in adults, the greater the prevalence of RLS (99% probability). Further results showed that insomnia (β= 0.36, 95% CI: 0.14-0.23), depression (β= 0.15, 95% CI: 0.011-0.06), and age (β= 0.13, 95% CI: 0.01-0.05) were able to predict RLS, in that order. These results are consistent with previous studies. For example, Saraji et al. (26) found that body mass index in patients with RLS was significantly higher than in patients without this syndrome. In European and American societies, the prevalence of RLS in the general population increases with aging, whereas in Asian countries no change in prevalence with aging has been observed (22,27). RLS symptoms are regulated by a circadian rhythm and worsen at night, and thus have a profound effect on sleep onset and the return to sleep and on the increased risk of developing depression and anxiety (20). The most important strengths of this study were its novel results, unbiased and complete reporting, and an appropriate sample size. Its limitations were its cross-sectional design, the insufficiency of information in some prior studies, and the absence of objective measurements in data collection.

Conclusion The aim of this study was to determine the prevalence of RLS and its associated factors in Yazd. The results showed that the prevalence of RLS in Yazd was 23.5%, meaning that about 23 out of every 100 people suffer from RLS, and the disorder is more prevalent in women. In line with similar studies, severe insomnia, depressed mood, and old age are important factors in predicting the disease. This calls for attention to early detection of the disorder, timely treatment, and appropriate preventive measures in Iranian adults, especially women.

Acknowledgments This article is part of a doctoral research project approved by Urmia University and funded by the Deputy of Research of the University of Urmia. It also has ethics approval (Code: IR.SSU.REC.1398.051) from Shahid Sadoughi University of Medical Sciences. The authors are therefore grateful to the aforementioned universities and the research participants.
A practical approach on the classifications of myeloid neoplasms and acute leukemia: WHO and ICC In 2022, two new classifications of myeloid neoplasms and acute leukemias were published: the 5th edition WHO Classification (WHO-HAEM5) and the International Consensus Classification (ICC). As with prior classifications, the WHO-HAEM5 and ICC made updates to the prior classification (revised 4th edition WHO Classification, WHO-HAEM4R) based on a consensus of groups of experts, who examined new evidence. Both WHO-HAEM5 and ICC introduced several new disease entities that are based predominantly on genetic features, superseding prior morphologic definitions. While it is encouraging that two groups independently came to similar conclusions in updating the classification of myeloid neoplasms and acute leukemias, there are several divergences in how WHO-HAEM5 and ICC define specific entities as well as differences in nomenclature of certain diseases. In this review, we highlight the similarities and differences between the WHO-HAEM5 and ICC handling of myeloid neoplasms and acute leukemias and present a practical approach to diagnosing and classifying these diseases in this current era of two divergent classification guidelines.

Introduction The 3rd edition WHO Classification of hematopoietic neoplasms (WHO-HAEM3), published in 2001, was the first comprehensive classification system of myeloid neoplasms and acute leukemias. The WHO-HAEM3 included aspects of the French-American-British classification of MDS and AML [1], but also applied principles developed in the Revised European-American Lymphoma (REAL) classification. In 2022, two new classifications of myeloid neoplasms and acute leukemias were published: the 5th edition WHO Classification (WHO-HAEM5) and the International Consensus Classification (ICC) [6,7]. The reasons behind the publication of two separate classifications are reviewed elsewhere [8,9]. As with prior classifications, the WHO-HAEM5 and ICC made updates to the prior classification (WHO-HAEM4R) based on a consensus of groups of experts, who examined new evidence. In particular, a large body of evidence has recently accumulated on the genetic pathogenesis of myeloid neoplasms and their relationship to myeloid precursor lesions. Genetic testing has also revealed new distinct subgroups that are more biologically accurate than prior morphologic markers of disease. Accordingly, both WHO-HAEM5 and ICC introduced new disease entities that are based predominantly on genetic features, superseding prior morphologic definitions. While it is encouraging that two groups independently came to similar conclusions in updating myeloid neoplasm entities, there are several divergences in how WHO-HAEM5 and ICC define specific entities. There are also several differences in nomenclature between the two classifications, which likely reflect differences in how the two groups sought to apply descriptive names to the same entity as well as the influence of the nomenclature of other disease groups. For example, while the ICC retained the term "myelodysplastic syndrome", the WHO-HAEM5 changed the name to "myelodysplastic neoplasm" in consonance with the related entities myeloproliferative neoplasms (MPN) and myelodysplastic/myeloproliferative neoplasms (MDS/MPN). Conversely, the ICC felt that retaining the historic and traditional "syndrome" nomenclature superseded the rationale to apply a more scientifically accurate terminology of "neoplasm". In order to avoid confusion with the commonly abbreviated MPN and MDS/MPN entities, the WHO-HAEM5 retained
the "MDS" abbreviation for "myelodysplastic neoplasms". In this review, we highlight the similarities and differences between the WHO-HAEM5 and ICC handling of myeloid neoplasms and acute leukemias and present a practical approach to diagnosing and classifying these diseases in this current era of two divergent classification guidelines.The main categories of myeloid neoplasms and their precursor lesions, which are the same in both classifications (with minor nomenclature differences), are listed in Table 1. Myeloid neoplasm precursor lesions Clonal hematopoiesis (CH) is a myeloid neoplasm precursor lesion characterized by overrepresentation of blood cells derived from a single clone, identified by its somatic mutations, cytogenetic aberrations, and/or copy number abnormalities detected on genetic testing [10,11].Clonal hematopoiesis of indeterminate potential (CHIP) refers to CH specifically harboring either a somatic mutation in a myeloid neoplasm driver gene with a variant allele frequency (VAF) of at least 2% or a non-MDS-defining clonal cytogenetic aberration, in a patient lacking a hematologic neoplasm or unexplained cytopenia [12] (Table 2).Clonal cytopenia of undetermined significance (CCUS) is defined as CHIP detected in the presence of one or more persistent unexplained cytopenias, while diagnostic criteria for any defined myeloid neoplasm are not met.Both WHO-HAEM5 and ICC for the first time included CHIP and CCUS as myeloid precursor lesions.The ICC also recognized VEXAS syndrome and paroxysmal nocturnal hemoglobinuria (PNH), both caused by somatic mutations, as clonal myeloid proliferations associated with cytopenia that are not equivalent to MDS unless diagnostic morphologic criteria for MDS are met.Some individuals with myeloid neoplasm precursor lesions progress to MDS or other myeloid neoplasms (Fig. 1).However, further study is warranted to better define the determinants of their progression risk [13,14].Moreover, refinement in the distinction between higher-risk CCUS and lower-risk MDS is warranted: these are biologically and prognostically similar and are currently separated arbitrarily by the absence versus presence of significant morphologic dysplasia, the identification of which can be subjective [15,16]. MPN Myeloproliferative neoplasms (MPN) include chronic myeloid leukemia (CML), the JAK2/MPL/CALR-associated MPN (essential thrombocythemia, primary myelofibrosis, and polycythemia vera), chronic neutrophilic leukemia (CNL), chronic eosinophilic leukemia, and MPN-NOS/unclassifiable. 
WHO-HAEM5 includes juvenile myelomonocytic leukemia (JMML) within the category of MPN, while the ICC includes JMML in a separate group of pediatric myeloid neoplasms (discussed later). Like the WHO-HAEM4R, the ICC recognizes an accelerated phase of CML (CML-AP), but this has been simplified from the WHO-HAEM4R CML-AP definition to now only include cases with 10-19% blasts, ≥20% blood basophils, and/or the presence of certain specific clonal cytogenetic aberrations in addition to the defining BCR::ABL1 rearrangement. In contrast, the WHO-HAEM5 does not recognize CML-AP, but instead defines high-risk morphologic and genetic features within chronic phase CML. In both classifications, blast phase CML is still defined by ≥20% blasts. There are essentially no differences in the diagnostic criteria for the JAK2/MPL/CALR-associated MPN and chronic eosinophilic leukemia between the two classifications, and both retain a category for MPN that cannot be otherwise classified, but with slightly different names: MPN-NOS in WHO-HAEM5 and MPN-unclassifiable in ICC. CNL is strongly associated with a somatic CSF3R mutation, and in recognition of this strong genotype-phenotype association, the ICC allows a diagnosis of CNL in the presence of a CSF3R mutation with a WBC ≥13 × 10⁹/L provided other criteria are met, while the WHO-HAEM5 continues to require a WBC ≥25 × 10⁹/L for all cases, as in WHO-HAEM4R. This difference is expected to affect very few cases given the rarity of CNL and its strong association with a markedly elevated WBC [17,18]; it may allow an earlier diagnosis for the prevalent CSF3R-mutated cases when following the ICC criteria.

MDS In addition to a different name for the overall disease group, WHO-HAEM5 and ICC have several differences in the criteria that define the borders of MDS as well as the division of MDS into distinct subtypes. Borders of MDS with myeloid neoplasm precursor lesions In the WHO-HAEM5, morphologic dysplasia affecting at least 10% of cells in at least one hematopoietic lineage is required to establish a diagnosis of MDS in all instances; in the ICC, similar to WHO-HAEM4R, there are several genetic aberrations that are considered to define MDS in a patient with unexplained cytopenia, even in the absence of ≥10% dysplasia. These aberrations are now limited to the presence of complex karyotype (at least 3 independent acquired cytogenetic abnormalities, excluding -Y), -7/del(7q), del(5q), and SF3B1 or bi-allelic TP53 mutations. The latter two mutations must be seen at a minimum VAF of at least 10%, since small CH clones would be unlikely to cause a clinically significant cytopenia. Importantly, the above genetic abnormalities are almost ubiquitously associated with significant morphologic dysplasia, and thus it is expected that this difference will result in few discrepancies. In practice, the absence of dysplasia in the setting of these MDS-associated abnormalities is more likely to reflect a suboptimal sample rather than truly absent morphologic dysplasia [19].
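The ICC rule just described reduces to a short checklist. The sketch below is a deliberate simplification for illustration: it encodes only the genetic aberrations named above and ignores the clinical prerequisites (persistent unexplained cytopenia, exclusion of other causes) that a real diagnosis requires.

```python
MDS_DEFINING_CYTOGENETICS = {"complex karyotype", "-7/del(7q)", "del(5q)"}

def icc_genetics_define_mds(cytogenetics, sf3b1_vaf=0.0, biallelic_tp53_vaf=0.0):
    """ICC: these lesions define MDS in unexplained cytopenia even
    without >=10% morphologic dysplasia (simplified)."""
    if MDS_DEFINING_CYTOGENETICS & set(cytogenetics):
        return True
    # SF3B1 or bi-allelic TP53 mutations count only at VAF >= 10%,
    # since smaller CH clones are unlikely to explain the cytopenia.
    return sf3b1_vaf >= 0.10 or biallelic_tp53_vaf >= 0.10

print(icc_genetics_define_mds(["del(5q)"]))           # True
print(icc_genetics_define_mds([], sf3b1_vaf=0.05))    # False: clone too small
```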
Borders of MDS with AML Both WHO-HAEM5 and ICC recognize several genetic lesions as AML-defining (see AML section below). However, the ICC requires at least 10% blasts in bone marrow or blood to classify any case as AML, whereas WHO-HAEM5 allows any increase in blasts to qualify for AML in the presence of an AML-defining genetic lesion; although increased blasts is typically defined as ≥5% in bone marrow or ≥2% in blood, there is no clear evidence to support a specific blast cutoff in this context. Given some subjectivity in counting blasts, cases which yield discrepant diagnoses due to these different blast thresholds should be approached with careful clinical correlation and follow-up, with the treatment approach influenced by the clinical picture as well as the specific blast count at a given timepoint [20]. Conversely, while WHO-HAEM5 requires at least 20% blasts to define AML in the absence of an AML-defining genetic lesion, the ICC recognizes an "MDS/AML" overlap group encompassing cases with 10-19% blasts that lack AML-defining genetics, effectively replacing MDS-EB2. The rationale behind this change in the ICC is that some patients with MDS/AML may benefit from AML-type intensive therapy, and this designation may facilitate wider therapeutic options for patients with 10-19% blasts [21]. The ICC recommends to subclassify MDS/AML along the lines of other AML, into 4 subgroups defined by mutated TP53, myelodysplasia-related gene mutations, myelodysplasia-related cytogenetic abnormalities, or no specific genetic features (NOS); further research is needed to determine the clinical significance of subgrouping MDS/AML and the relationship of these subgroups to their overt AML counterparts with ≥20% blasts [22]. All recurrent AML-defining genetic aberrations are classified as overt AML and are therefore excluded from MDS/AML.

MDS classification Both WHO-HAEM5 and ICC have recognized SF3B1 mutation and bi-allelic TP53 mutation as defining new MDS subtypes, while retaining isolated del(5q) as a specific MDS subtype. However, there are several minor differences in the definitions of the new SF3B1 and TP53 entities, which are shown in Table 3.
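The divergent blast thresholds described in the preceding subsection can be summarized as a small decision rule. The sketch below is a simplification for illustration only: it ignores the BCR::ABL1 exception, the specific lesion lists, morphology, and clinical history, all of which matter in actual classification.

```python
def who_haem5(blasts_bm, blasts_pb, aml_defining_genetics):
    # WHO-HAEM5: any blast increase suffices when an AML-defining
    # lesion is present; otherwise >=20% blasts are required.
    if aml_defining_genetics and (blasts_bm >= 5 or blasts_pb >= 2):
        return "AML"
    return "AML" if max(blasts_bm, blasts_pb) >= 20 else "MDS"

def icc(blasts_bm, blasts_pb, aml_defining_genetics):
    blasts = max(blasts_bm, blasts_pb)
    # ICC: AML-defining genetics still needs >=10% blasts; 10-19%
    # blasts without such genetics fall into the MDS/AML group.
    if aml_defining_genetics and blasts >= 10:
        return "AML"
    if blasts >= 20:
        return "AML"
    return "MDS/AML" if blasts >= 10 else "MDS"

# A case with an AML-defining lesion and 8% marrow blasts diverges:
print(who_haem5(8, 3, True))  # AML
print(icc(8, 3, True))        # MDS
```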
Cases with excess (≥5% in bone marrow and/or ≥2% in blood) blasts are categorized using different terminology from the prior WHO-HAEM4R: MDS with excess blasts and MDS/AML in ICC, and MDS with increased blasts-1 and MDS with increased blasts-2 in WHO-HAEM5, correspond respectively to the prior MDS with excess blasts-1 and MDS with excess blasts-2. However, there are some minor differences in these correspondences, as shown in Table 3. Given that fibrosis has been shown to confer adverse prognosis in MDS [23], the WHO-HAEM5 (but not the ICC) introduced a new subgroup of MDS with increased blasts: "MDS with increased blasts and fibrosis". For cases that lack excess blasts or Auer rods and do not qualify for any of the three genetically-defined groups [SF3B1, bi-allelic TP53, or del(5q)], the ICC subdivides cases by the presence of dysplasia involving one (single lineage dysplasia, SLD) or more (multilineage dysplasia, MLD) hematopoietic lineages, while the WHO-HAEM5 introduced a new entity of hypoplastic MDS (MDS-h), defined by age-adjusted hypocellularity (cellularity < 20% for patients ≥70 years and < 30% for patients < 70 years). Although genetically heterogeneous, MDS-h cases may have a more favorable prognosis and respond more effectively to immunosuppressive therapy compared to other MDS lacking increased blasts [24]. The WHO-HAEM5 has also retained ring sideroblasts in the absence of SF3B1 mutation as a morphologically-defined entity, although recent studies have shown similar prognosis to cases of MDS with low blasts that lack ring sideroblasts [25]. WHO-HAEM5 removed the requirement for the SLD vs. MLD distinction due to poor reproducibility of this subjective determination [16], while the ICC retained it due to prognostic relevance in multiple studies [26,27].

Myeloid neoplasms in Children In both WHO-HAEM5 and ICC, the above MDS classifications apply to adult patients (age ≥18 years), and both classify pediatric MDS separately. Although both classifications employ different names for specific entities, these entities are mostly analogous to one another and have similar diagnostic criteria (Table 4). In brief, Table 4 compares the pediatric entities and their differences: Refractory cytopenia of childhood: WHO-HAEM5 allows ≥10% dysplasia in any lineage, while ICC requires ≥10% dysplasia specifically in megakaryocytes (or lesser degrees of dysplasia in 2 or 3 lineages). Childhood MDS with low blasts / MDS-NOS: WHO-HAEM5 requires cytopenia and ≥10% dysplasia, while ICC allows absence of cytopenia or dysplasia if an MDS-defining cytogenetic abnormality is present. Childhood MDS with increased blasts / MDS with excess blasts: no differences. Juvenile myelomonocytic leukemia (JMML): WHO-HAEM5 allows cases lacking RAS-pathway mutations in the presence of increased HbF, leukoerythroblastosis, thrombocytopenia with hypercellular marrow, or hypersensitivity of myeloid progenitors to GM-CSF, while ICC excludes such cases and instead classifies them as JMML-like neoplasms. Of note, the ICC MDS/AML entity does not apply to pediatric MDS: pediatric MDS patients with increased blasts are managed differently from adult MDS patients, and may not warrant intensive therapy prior to stem cell transplant despite elevated blast counts approaching AML. Regarding juvenile myelomonocytic leukemia (JMML), both classifications removed this entity from the prior MDS/MPN group. The ICC now considers JMML in a
group of pediatric myeloid neoplasms including pediatric MDS, while the WHO-HAEM5 has placed JMML in the MPN group. Both WHO-HAEM5 and ICC have similar definitions for JMML, except the ICC considers the presence of RAS-pathway mutations an absolute requirement for the diagnosis; related cases that lack a RAS-pathway mutation are considered within a separate entity of JMML-like neoplasms.

MDS/MPN Chronic myelomonocytic leukemia (CMML) Major changes were introduced to the CMML diagnostic criteria in both WHO-HAEM5 and ICC, mainly lowering the threshold of absolute monocytosis to 0.5 × 10⁹/L in PB, while still requiring that monocytes comprise at least 10% of WBCs. This was based on recent evidence showing that patients with relative monocytosis (≥10% of WBCs) but absolute monocytosis in the 0.5-<1 × 10⁹/L range (so-called 'oligomonocytic CMML') displayed similar features to 'traditional' CMML with monocytes ≥1 × 10⁹/L [28,29]. Additionally, the subgroup of CMML-0 (< 2% blasts in blood and < 5% blasts in bone marrow) introduced in the WHO-HAEM4R, previously thought to have relatively indolent behavior [30], has been eliminated due to its limited prognostic impact and poor reproducibility based on additional, more comprehensive data [31]. Both WHO-HAEM5 and ICC require evidence of clonality for the diagnosis of oligomonocytic CMML, and both continue to subdivide all CMML into myelodysplastic and myeloproliferative subtypes based on a WBC threshold of 13 × 10⁹/L. However, there are several differences between WHO-HAEM5 and ICC CMML criteria (Table 5): 1. The ICC emphasizes the presence of at least one cytopenia as a prerequisite for diagnosing CMML, while noting that a small proportion of cases may show only borderline or no cytopenia, usually in early-phase disease. A recent study suggests that clonal monocytosis, CMML, and MDS exist on a spectrum, and the complex diagnostic criteria put forth by both WHO-HAEM5 and ICC may arbitrarily separate biologically related entities [32]. Thus, further research is needed to optimize the classification of clonal proliferations associated with cytopenia and variable monocytosis, and these criteria may evolve in future myeloid neoplasm classifications.

MDS/MPN with iso17q is a new provisional entity in ICC In the ICC, MDS/MPN with i(17q) is added as a new provisional subentity under the diagnostic umbrella of MDS/MPN-NOS. This category includes cases meeting criteria for MDS/MPN-NOS (i.e., failing to fulfill criteria for MDS or other MDS/MPN entities), but with an i(17q) cytogenetic abnormality with up to one additional cytogenetic abnormality (non-complex karyotype) other than del(7q)/−7. These cases show a high frequency of mutations in the SRSF2, SETBP1, ASXL1, and NRAS genes [35]. SRSF2 is often co-mutated with SETBP1 (but not with TET2), and co-existent triple mutations in SRSF2, SETBP1, and ASXL1 are seen in approximately 30% of cases. Despite loss of one TP53 locus on 17p due to the i(17q), TP53 mutations are absent in this entity.
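The shared numeric screening thresholds for CMML described above lend themselves to the same treatment. This is a sketch of the common numeric rules only; an actual diagnosis additionally requires morphology, exclusion of other neoplasms, and, per the ICC, at least one cytopenia.

```python
def screen_cmml(monocytes_abs, monocyte_pct_wbc, wbc, clonality_shown):
    # Lowered threshold: absolute monocytes >= 0.5 x 10^9/L, and
    # monocytes must still make up >= 10% of the WBC differential.
    if monocytes_abs < 0.5 or monocyte_pct_wbc < 10:
        return "CMML criteria not met"
    # 'Oligomonocytic' CMML (0.5 to <1.0 x 10^9/L) requires
    # demonstrated clonality in both classifications.
    if monocytes_abs < 1.0 and not clonality_shown:
        return "clonality required for oligomonocytic CMML"
    # Both classifications split CMML at a WBC of 13 x 10^9/L.
    subtype = "myeloproliferative" if wbc >= 13 else "myelodysplastic"
    return f"consistent with CMML, {subtype} subtype"

print(screen_cmml(monocytes_abs=0.7, monocyte_pct_wbc=15, wbc=9,
                  clonality_shown=True))
# consistent with CMML, myelodysplastic subtype
```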
Other changes Although the criteria remain nearly identical, WHO-HAEM5 renamed "atypical chronic myeloid leukemia" to "MDS/MPN with neutrophilia" with the intention of avoiding potential confusion with CML. The WHO-HAEM4R entity "MDS/MPN with ring sideroblasts and thrombocytosis" (MDS/MPN-RS-T) has been largely redefined based on the highly prevalent SF3B1 mutation in these cases, and is renamed "MDS/MPN with SF3B1 mutation and thrombocytosis" in both WHO-HAEM5 and ICC. However, "MDS/MPN with ring sideroblasts and thrombocytosis" has been retained as a repository for cases with wild-type SF3B1 and ≥15% ring sideroblasts in both ICC and WHO-HAEM5, as the clinical behavior and biologic features of these infrequent cases are uncertain.

AML There are major updates to the classification of AML in both WHO-HAEM5 and ICC. Diagnostic algorithm Both the WHO-HAEM5 and ICC classifications emphasize the importance of genetic findings and their influence on disease biology. The category of AML with recurrent genetic abnormalities is expanded by including more recurrent cytogenetic rearrangements that lead to novel fusion genes and/or increased oncogene expression driving leukemogenesis (Table 6). The terminology of AML with myelodysplasia-related changes (AML-MRC) is replaced by AML, myelodysplasia-related (AML-MR) in WHO-HAEM5, representing a single entity defined by the presence of at least one of the following: history of MDS or MDS/MPN, MR cytogenetic abnormalities, and/or MR gene mutations (Table 7). This AML-MR group corresponds to 3 separate AML entities in the ICC: those defined by MR gene mutations (with or without MR cytogenetic abnormalities), MR cytogenetic abnormalities (without MR gene mutations), or mutated TP53 (mono- or bi-allelic, and with VAF ≥10%; the vast majority of TP53-mutated AML cases have a complex karyotype that qualifies for AML-MR per WHO-HAEM5). Additionally, there are some differences in the composition of MR gene mutations and MR cytogenetic abnormalities between WHO-HAEM5 and ICC (Table 7). The ICC removed history of MDS or MDS/MPN as a classifier for AML, and applies this history as a disease qualifier to the genetically-defined AML subtype; since most cases of AML progressed from MDS or MDS/MPN will have MR mutations and/or cytogenetic abnormalities, or fall into the TP53-mutated AML category in the ICC, these cases will still largely be in concordance with the AML-MR WHO-HAEM5 category. Due to its poor interobserver reproducibility and often difficult applicability [36], morphologic dysplasia was removed as a diagnostic criterion for AML-MR in both WHO-HAEM5 and ICC. AML cases that fail to place in any of the aforementioned genetic categories are classified as "AML defined by differentiation" in the WHO-HAEM5, further refined by their specific immunophenotypic profile (myeloid, monocytic, megakaryocytic, or erythroid), and as "AML-NOS" in the ICC. One subcategory of WHO-HAEM5 AML defined by differentiation, acute erythroid leukemia (AEL, previously termed 'pure erythroid leukemia' in WHO-HAEM4R), nearly ubiquitously harbors bi-allelic TP53 mutations and a complex karyotype and thus corresponds to AML with mutated TP53 in the ICC. Since AEL supersedes AML-MR in WHO-HAEM5, these rare cases are divergently classified in WHO-HAEM5 and ICC.
Both WHO-HAEM5 and ICC now apply therapy-relatedness as a qualifier to the genetic/differentiation AML subtype, except the WHO-HAEM5 has changed the "therapy-related" terminology to "post-cytotoxic treatment", since a prior history of cytotoxic therapy does not necessarily imply causation. Both WHO-HAEM5 and ICC also consider germline predisposition as a disease qualifier to the relevant AML subtype, e.g., AML with MR gene mutation, in the setting of germline RUNX1 mutation. A detailed comparison of the WHO-HAEM5 and ICC AML diagnostic algorithms is shown in Fig. 2.

Blast cutoff The blast cutoff for AML diagnosis has been continually evolving. In the original FAB Classification, patients with myelodysplastic syndromes and 20-29% blasts were classified as refractory anemia with excess blasts in transformation (RAEB-T). In 2001, WHO-HAEM3 adopted a blast cutoff of 20% for AML diagnosis, thus eliminating RAEB-T and encompassing those cases within AML. This cutoff has since remained largely unchanged, with the exception of AML with PML::RARA and AML with the core-binding factor gene translocations inv(16)/t(16;16) or t(8;21), in which the presence of such rearrangements is considered pathognomonic for AML regardless of the blast percentage. As discussed above, both the WHO-HAEM5 and ICC have softened the blast requirement for most genetic subtypes of AML (Table 6), with the exception of BCR::ABL1 fusion: cases with BCR::ABL1 and 10-19% blasts are still considered within the category of CML (accelerated phase in the ICC).

Other changes AML with CEBPA mutations Both WHO-HAEM5 and ICC further refined the diagnostic criteria for AML with CEBPA mutations based on recent studies showing that the favorable prognostic impact is determined by the presence of an in-frame bZIP mutation in the gene, not merely the presence of two (bi-allelic) mutations [37,38]. The ICC requires the presence of at least one in-frame bZIP mutation for diagnosing this entity, while in WHO-HAEM5, AML with CEBPA mutation is defined more broadly by either any single bZIP mutation or any biallelic mutations. Additionally, while the ICC allows a diagnosis of AML with CEBPA mutation with ≥10% blasts (similar to other genetically-defined AML, discussed above), the WHO-HAEM5 requires ≥20% blasts, since the rare cases of bZIP CEBPA-mutated disease presenting with < 20% blasts have not been well studied.
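The contrast between the two CEBPA rules can be stated compactly. The sketch below is a simplified paraphrase of the criteria above; it does not capture VAF, co-mutations, or any of the other requirements of either classification.

```python
def cebpa_aml_eligible(blasts_pct, any_bzip_mutation, inframe_bzip, biallelic):
    # ICC: at least one in-frame bZIP mutation, with >=10% blasts.
    icc = inframe_bzip and blasts_pct >= 10
    # WHO-HAEM5: any single bZIP mutation or any biallelic CEBPA
    # mutations, but only at >=20% blasts.
    who = (any_bzip_mutation or biallelic) and blasts_pct >= 20
    return {"ICC": icc, "WHO-HAEM5": who}

# An in-frame bZIP case with 15% blasts qualifies under the ICC only:
print(cebpa_aml_eligible(15, any_bzip_mutation=True, inframe_bzip=True,
                         biallelic=False))
# {'ICC': True, 'WHO-HAEM5': False}
```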
Myeloid/lymphoid neoplasms with tyrosine kinase gene fusions The category name is changed from the prior "myeloid and lymphoid neoplasms with eosinophilia (M/LN-eo) and gene rearrangement" to "myeloid/lymphoid neoplasms with eosinophilia and tyrosine kinase gene fusions" (M/LN-eo-TK) by both WHO-HAEM5 and ICC (Table 8). M/LN-eo-TK often manifests as a chronic myeloid neoplasm but can present as AML, B-ALL, T-ALL, or even MPAL. Inclusion of this group of diseases in the differential diagnosis of chronic myeloid neoplasms and acute leukemias, and detection of the defining TK fusions, are key for an accurate and timely diagnosis, since many of these entities are effectively treated by targeted therapies. In addition to the previously included PDGFRA, PDGFRB, FGFR1, and JAK2 fusions, FLT3 fusions and ETV6::ABL1 are now added to this category in both WHO-HAEM5 and ICC [39][40][41]. The most common partner gene of FLT3 fusions is ETV6, located at 12p13 [42]. PDGFRA, PDGFRB, and ETV6::ABL1 cases are sensitive to ABL1 inhibitors. WHO-HAEM5 also created a subgroup named M/LN-eo with other defined tyrosine kinase fusions to encompass other rare tyrosine kinase fusions, i.e., ETV6::FGFR2, ETV6::LYN, ETV6::NTRK3, RANBP2::ALK, BCR::RET, and FGFR1OP::RET.

Systemic mastocytosis WHO-HAEM5 and ICC both made only minimal refinements to the definition of systemic mastocytosis (SM). While the WHO-HAEM5 allows any hematologic neoplasm (including lymphoma and plasma cell myeloma) within the entity of "SM with an associated hematologic neoplasm" (SM-AHN), the ICC specifically restricts this category to myeloid neoplasms and renames the entity "SM with associated myeloid neoplasm" (SM-AMN); this was based on the demonstrated shared genetic origin between co-occurring myeloid, but not lymphoid, neoplasms and the mast cell clone [43]. Another difference is that the ICC requires immature mast cell cytomorphology for mast cell leukemia (MCL), while the WHO-HAEM5 MCL category encompasses rare cases displaying well-differentiated morphology, terming them "chronic MCL", as retained from the prior WHO-HAEM4R [44].

Hematologic/myeloid neoplasms with germline predisposition Compared with WHO-HAEM4R, there are subtle changes in WHO-HAEM5 and ICC and minor differences in nomenclature for the category of germline predisposition disorders, which was first introduced in the WHO-HAEM4R classification (Table 9). Several additional genes are incorporated into this group (Table 10): germline TP53 mutations, RASopathies, germline SAMD9/SAMD9L mutations, and germline BLM mutations. In the ICC, the title is changed from "myeloid neoplasms" to "hematologic neoplasms" with germline predisposition, as increasing data have demonstrated that many of these germline-mutated genes predispose not only to myeloid malignancy but also to lymphoid malignancies [45]. In addition to the genes mentioned above, the ICC added a new subgroup: acute lymphoblastic leukemia with germline predisposition, encompassing patients with germline PAX5 and IKZF1 mutations.
The former includes cases with BCR::ABL1 and KMT2A rearrangements (both also previously recognized by WHO-HAEM4R) and two new entities: MPAL with ZNF384 rearrangement and ALAL/MPAL with BCL11B rearrangement/activation. ZNF384-rearranged MPAL comprises nearly half of MPAL with a B/myeloid immunophenotype, and approximately 20% of all MPAL cases [48], and is particularly common in children. Partners include TCF3, EP300, TAF15, and CREBBP [48]. ZNF384-rearranged B/myeloid MPAL is transcriptionally similar to its B-ALL counterpart, suggesting a biological continuum in this disease. BCL11B-rearranged ALAL comprises one third of MPAL with a T/myeloid immunophenotype, and 10-15% of all MPAL; rare cases present as acute undifferentiated leukemia. FISH studies show translocations involving BCL11B.

Boundary between AML-MR and ALAL/MPAL According to WHO-HAEM4R, a diagnosis of AML-MRC or therapy-related AML overrode a diagnosis of ALAL/MPAL, even when a mixed immunophenotype was present [47]. However, changes in the diagnostic criteria for AML by WHO-HAEM5 and ICC create new dilemmas [6,7]. Specifically, the criteria for AML-MR have been modified in both WHO-HAEM5 and ICC to include MR gene mutations, regardless of a history of antecedent hematologic malignancy or myelodysplasia-related cytogenetic abnormalities, which would potentially shift more cases previously classified as MPAL to AML-MR. Therefore, it is uncertain how these changes will shift the boundary between AML-MR/t-AML and MPAL, which requires clarification in future studies [50]. The ICC stipulates that the divergent aberrant lineage must comprise a minimum of 5% of the population to establish a diagnosis of MPAL, while the WHO-HAEM5 classification does not stipulate a specific minimal threshold.

Blastic plasmacytoid dendritic cell neoplasm (BPDCN) In WHO-HAEM5, two entities composed of plasmacytoid dendritic cells are recognized: mature plasmacytoid dendritic cell proliferation (MPDCP) and blastic plasmacytoid dendritic cell neoplasm. MPDCP are clonal proliferations of plasmacytoid dendritic cells (PDCs) that occur in association with myeloid neoplasms, most often CMML, and involve the skin, bone marrow, or lymph nodes with mature, bland cytologic features [53][54][55]. MPDCP has also been recently described in AML, particularly with RUNX1 mutations [55,56]. In this setting, the morphology of PDCs ranges from mature to immature and at the extreme may be indistinguishable from BPDCN involving the marrow. The ICC does not formally recognize MPDCP as a distinct myeloid neoplasm, given its typical association with other myeloid neoplasms. BPDCN is retained in both ICC and WHO-HAEM5, with a definition essentially identical to that of BPDCN in WHO-HAEM4R.

B lymphoblastic leukemia/lymphoma (B-ALL/LBL) Although most B-ALL/LBL subtypes from the WHO-HAEM4R are retained, both WHO-HAEM5 and ICC include new entities subsequently identified by gene expression profiling and clustering algorithms (Table 11). These new entities are characterized by distinct clinical behavior/features and are driven by gene rearrangements, point mutations, or gene expression signatures.
Changes to previously recognized entities The previously recognized B-ALL/LBL entities defined by aneuploidy or gene rearrangements in the WHO-HAEM4R are retained in the new classifications, though the WHO-HAEM5 uses a shorter nomenclature that does not list cytogenetic changes. The ICC divides hypodiploid B-ALL/LBL into two subtypes: a low hypodiploid one (32-39 chromosomes), more common in adults, and a near-haploid one (24-31 chromosomes), more common in children and associated with poor prognosis and, frequently, with Li-Fraumeni syndrome (germline TP53 mutation). The ICC also recognizes two subtypes of B-ALL with BCR::ABL1, with possibly different prognoses: one with lymphoid-only involvement, and the other with multilineage involvement. The latter entity is not easily distinguishable from CML in lymphoid blast phase and requires demonstration of the BCR::ABL1 rearrangement in myeloid cells in addition to the lymphoid blasts. The entity of B-ALL with BCR::ABL1-like features/BCR::ABL1-like is no longer considered a provisional subtype in the new classifications. The ICC further subtypes it into three subgroups, based on the driver genetic alteration and available targeted therapies: "ABL1-class rearranged", "JAK-STAT activated", and "not otherwise specified".

T lymphoblastic leukemia/lymphoma (T-ALL/LBL) The WHO-HAEM5 classification of T-ALL/LBL is unchanged, with the only distinct variant entity, early T cell precursor (ETP) ALL, identified by immunophenotype. BCL11B-activated T-ALL/LBL is a new genetic subtype recognized by the ICC, which encompasses ~30% of ETP ALL and is driven mostly by BCL11B rearrangements (Table 12). The WHO-HAEM5 acknowledges the existence of four distinct genetic subgroups of T-ALL/LBL, based on aberrant expression of TAL or LMO, TLX1, TLX3, or HOXA genes, and also acknowledges the more recent proposal of four additional, less common subgroups, likewise based on aberrant activation of different families of transcription factors [57]. While the WHO-HAEM5 does not recognize these as distinct entities, the ICC lists these eight T-ALL/LBL subgroups as provisional entities, acknowledging that limited information is currently available for the four less common subtypes.
Handling two classifications in diagnosis, therapeutic approach, clinical trials, and research publications Between 2001 and 2022, the advancement of myeloid neoplasm and acute leukemia classification was sequential, with updates made periodically (in 2008 and 2017) to reflect advancing knowledge. Although some AML clinical trials have even until now retained the antiquated FAB classification for case annotation, in general pathologists, clinicians, researchers, pharmaceutical companies, and regulatory authorities such as the FDA have accepted the WHO Blue Books as the single classification to be used as their 'lingua franca' for the purposes of diagnosing and studying disease and labelling specific drugs. Since 2022, this landscape has changed, with the release of two mostly concordant, but often divergent, classification systems. This has created a complex situation on several fronts: (1) Different nomenclature has caused confusion among patients and physicians. (2) Differing diagnostic criteria have resulted in some patients receiving different diagnoses, each of which may have unique standards of care. (3) It is unclear how to apply existing drug labelling, which has been largely based on the WHO-HAEM4R, to the new classification systems, or how to label new drug indications in the setting of two classifications with some divergent disease definitions. (4) There is uncertainty as to how researchers and pharmaceutical companies should write inclusion criteria for clinical trials, how to enroll patients in existing trials based on WHO-HAEM4R criteria (many of which have significantly changed in WHO-HAEM5, ICC, or both), and how to stratify patients when studying particular myeloid neoplasms. Practically speaking, diagnosticians, clinicians, and researchers must become familiar with both classifications (Table 13). Despite a myriad of publications that have lamented this chaotic situation [58][59][60], it is important to understand that any classification process cannot be regarded as an absolute truth, but rather represents the efforts of a group of experts to balance scientific evidence with practical considerations of applying diagnostic criteria in the real world. Classifications can harbor errors that warrant correction: for example, the purportedly lower-risk, ultra-low-blast subgroup of CMML, "CMML-0", that was introduced in WHO-HAEM4R was subsequently eliminated in both WHO-HAEM5 and ICC after further evidence showed that CMML-0 in fact has no significant prognostic relevance, as discussed above. These errors underscore the importance of scientific enquiry in both validating and challenging existing classification systems. Although we are now focused on comparing and contrasting the current WHO-HAEM5 and ICC systems, we must look toward the future, at the next classification that will inevitably follow in the next few years. The presence of two 'competing' classifications in fact provides an opportunity to engage in scientific testing of both systems, particularly where there are differences. Many such studies testing the differences between WHO-HAEM5 and ICC are already underway or published, and will validate or refute each classification's criteria in categorizing myeloid diseases [32,44,59,62,63]. This body of accumulating evidence has the potential to inform a subsequent single classification that will be more accurate, reproducible, and clinically relevant than either the current WHO-HAEM5 or ICC, and most importantly, could serve as a single unified classification accepted by all.
Table 1 Summary of myeloid neoplasm entities. Table 2 Definitions of CH, CHIP and CCUS. Table 3 Comparison of WHO-HAEM5 and ICC classification of adult MDS. Table 7 MR genes and MR cytogenetic abnormalities. Table 8 Myeloid/lymphoid neoplasms with eosinophilia and TK fusion. Table 9 Hematologic/myeloid neoplasms with germline predisposition.

Table 13 Recommendations on how stakeholders should handle two different classifications of myeloid neoplasms. Pathologists diagnosing myeloid neoplasms: provide both WHO-HAEM5 and ICC diagnoses in pathology reports whenever there are differences, to allow facile translation of diagnoses if patients are seen at other institutions or enter trials or research studies. Researchers reporting studies on myeloid neoplasms: classify cases according to both WHO-HAEM5 and ICC (or, if using one classification, include the other system in supplementary material), to allow testing of each classification's criteria for robustness and prognostic relevance and to facilitate comparison and meta-analyses of different studies. Pharmaceutical companies developing drugs to treat myeloid neoplasms: consider the criteria of both classifications when defining the target patient population for a new drug in development, to ensure wider applicability of potential new drugs. Sponsors and researchers writing clinical trials to study myeloid neoplasms: write trial inclusion criteria according to both classifications, with careful consideration of the targeted disease, to promote broader patient enrollment and capture signals that may be better revealed by one classification's disease definition. Regulatory agencies evaluating new or previously approved drugs that treat myeloid neoplasms: explicitly include both WHO-HAEM5 and ICC diagnoses in drug labels, to ensure equitable access of patients to new and established drugs, irrespective of which classification their physician or health care system may use. Clinicians treating patients with myeloid neoplasms: thoughtfully explain different disease names to affected patients and emphasize that disease classification, like the selection of therapy, has controversies; consider therapeutic options based on both diagnoses when they differ, to alleviate patient confusion about their diagnosis and facilitate maximal therapeutic options for patients.
2024-07-31T06:17:40.057Z
2024-07-29T00:00:00.000
{ "year": 2024, "sha1": "a75393ea187524f06d8f747cd24a01e9267bf0a6", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "fa0b0bbcca469436c2fc75e00d4b761d91eb981f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
5723521
pes2o/s2orc
v3-fos-license
PDCD5 transfection increases cisplatin sensitivity and decreases invasion in hepatic cancer cells

Low expression levels of the programmed cell death 5 (PDCD5) gene have been reported in numerous human cancers; however, PDCD5 expression has not been investigated in hepatic cancer. The present study aims to investigate the biological behavior of PDCD5 overexpression in hepatocellular carcinoma (HCC) cells. The PDCD5 gene was stably transfected into the HepG2 HCC cell line (HepG2-PDCD5), and the expression levels of PDCD5 were examined by quantitative polymerase chain reaction and western blotting. An MTT assay was used to assess cellular proliferation, and propidium iodide (PI) staining was used to evaluate the cell cycle by flow cytometry. The cells were incubated with 2 ng/ml transforming growth factor (TGF)-β for 7 days in order to induce invasion and epithelial-mesenchymal transition (EMT). Apoptosis was measured by Annexin V-fluorescein isothiocyanate and PI double labeling. A Boyden chamber invasion assay was carried out to detect tumor invasion. Western blotting was performed to detect the protein expression levels of PDCD5, insulin-like growth factor (IGF)-1 and the EMT marker, Snail. The results showed that the HepG2-PDCD5 cells exhibited slower proliferation rates and a higher proportion of cells in G2/M phase compared with the HepG2 and HepG2-Neo controls (P<0.05). The PDCD5-transfected cells showed higher sensitivity to cisplatin treatment than the HepG2-Neo cells, with a higher p53 protein expression level. PDCD5 overexpression can attenuate the tumor invasion, EMT and IGF-1 protein levels induced by TGF-β treatment. In conclusion, stable transfection of the PDCD5 gene can inhibit growth and induce cell cycle arrest in HepG2 cells; it also notably improves the apoptosis-inducing effects of cisplatin, and reverses the invasion and EMT induced by TGF-β. The use of PDCD5 is a novel strategy for improving the chemotherapeutic effects on HCC.

Introduction

A number of studies have demonstrated that apoptosis is closely associated with the initiation, progression and recurrence of cancer (1-3). Therefore, it is important to manipulate apoptosis-regulating factors in the effective treatment of cancer patients. Human programmed cell death 5 (PDCD5), formerly designated TF-1 cell apoptosis-related gene 19, was cloned from TF-1 cells during the apoptotic process induced by cytokine withdrawal. PDCD5 plays a significant role in cellular apoptosis, and its overexpression in TF-1, MGC-803 and HeLa cells facilitates apoptosis triggered by growth factor or serum withdrawal (4). PDCD5 is widely expressed in various tissues, and its mRNA expression level is significantly higher in adult tissues than in fetal tissues. In cells undergoing apoptosis, the level of PDCD5 protein is significantly increased, and the protein localizes to the nucleus preceding the externalization of phosphatidylserine and the fragmentation of chromosomal DNA (5).

Cisplatin is a common drug for cancer chemotherapy; however, with repeated exposure, tumor cells often become resistant to its effects. Hepatocellular carcinoma (HCC) is known to be resistant to various chemotherapeutics, including cisplatin. Studies have shown that this resistance is likely to be attributable to the cisplatin-induced upregulation of hTERT and the PI3K-dependent survivin pathway (14,15). Therefore, we hypothesized that the exogenous overexpression of PDCD5 may enhance apoptosis and reverse cisplatin resistance in HCC.
In the present study, the biological behavior of HCC cells was analyzed in vitro following stable transfection of the PDCD5 gene, and the effects on cisplatin-induced apoptosis and transforming growth factor (TGF)-β-induced invasion were investigated.

Materials and methods

Cell culture. The human HCC cell line, HepG2, was purchased from the Institute of Biochemistry and Cell Biology, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences (Shanghai, China). The cells were incubated in complete Dulbecco's modified Eagle's medium (DMEM; Invitrogen, Carlsbad, CA, USA) supplemented with 10% heat-inactivated fetal bovine serum (FBS; Sijichun Bioengineering Materials Inc., Hangzhou, Zhejiang, China), 100 U/ml penicillin and 100 µg/ml streptomycin, in a humidified incubator at 37˚C with 5% CO2.

Construction and transfection of PDCD5 plasmid. A PDCD5 full-length cDNA sequence was obtained from GenBank (http://www.ncbi.nlm.nih.gov/genbank/; accession number, NM_004708.3). Total RNA was extracted from the human HCC HepG2 cells and reverse transcribed using oligo(dT) primers to generate the template for reverse transcription polymerase chain reaction (RT-PCR). The primer sequences were as follows: Sense, 5'-CGC GGA TCC CCG AGG GGC TGC GAG AGT GA-3' and antisense, 5'-CGC GAA TTC CCT AGA CTT GTT CCG TTA AG-3'. PCR conditions of 40 cycles of 94˚C for 30 sec, 60˚C for 45 sec and 72˚C for 30 sec, followed by a final elongation step at 72˚C for 10 min, were used. The PCR products of full-length PDCD5 cDNA were then ligated into the BamHI/EcoRI sites of the eukaryotic expression vector, pcDNA3.1/Neo(+) (Invitrogen), using T4 DNA ligase at 16˚C overnight, followed by transformation of competent Escherichia coli DH5α. DNA sequencing was used to identify a recombinant plasmid clone with the correct sequence, and this bacterial clone was amplified and the plasmid purified for eukaryotic transfection. The HepG2 cells were transfected with the pcDNA3.1-PDCD5 plasmid or the pcDNA3.1-Neo plasmid (empty vector) [pcDNA3.1(+)] using Lipofectamine™ 2000 (Invitrogen) according to the manufacturer's instructions. RT-PCR and RT-quantitative PCR (RT-qPCR) were performed to detect PDCD5 mRNA expression 48 h after transfection. Successfully transfected HepG2 cells were then grown in complete medium for further G418 screening (400 µg/ml; Sigma-Aldrich, St. Louis, MO, USA). After four weeks, colonies were isolated and expanded into cell clones. The subclone cells expressing only the Neo gene or the Neo and PDCD5 genes were termed HepG2-Neo and HepG2-PDCD5, respectively.

RT-PCR analysis. The levels of PDCD5 mRNA were first examined by RT-PCR, with β-actin used as an internal reference. Total RNA (5 µg) was isolated from the HepG2 cells 48 h after transfection, and RT was performed to synthesize cDNA with Easyscript First-Strand cDNA Synthesis SuperMix (TransGen Biotech, Beijing, China) primed with oligo(dT18). The forward and reverse primers were synthesized by Sangon Biotech (Shanghai) Co., Ltd., and the sequences and expected sizes of the PCR products were as follows: PDCD5 forward, 5'-ACA GAT GGC AAG ATA TGG ACA-3' and reverse, 5'-TCC TAG ACT TGT TCC GTT AAG-3' (210 bp); and β-actin forward, 5'-CGG GAA ATC GTG CGT GAC ATT-3' and reverse, 5'-CTA GAA GCA TTT GCG GTG GAC-3' (510 bp). The thermal procedure for the PDCD5 and β-actin PCR was 94˚C for 4 min for 1 cycle; then 94˚C for 45 sec, 52˚C for 45 sec and 72˚C for 1 min for 30 cycles; and 72˚C for 7 min for 1 cycle.
PCR products were subjected to electrophoresis on 1.5% agarose gels containing ethidium bromide and then visualized under ultraviolet light.

qPCR analysis. To quantify the results of the RT-PCR, PDCD5 mRNA expression levels were further analyzed by RT-qPCR, which was performed on an RT-Cycler™ Real Time PCR Detection System (CapitalBio, Ltd., Beijing, China) with SYBR Green (Molecular Probes, Invitrogen). The following primers were used: sense, 5'-ACA GAT GGC AAG ATA TGG ACA-3' and antisense, 5'-TCC TAG ACT TGT TCC GTT AAG-3' (199 bp) for PDCD5; and sense, 5'-TTA GTT GCG TTA CAC CCT TTC-3' and antisense, 5'-ACC TTC ACC GTT CCA GTT T-3' (150 bp) for β-actin. Firstly, RNA was extracted and reverse transcribed to cDNA using Easyscript First-Strand cDNA Synthesis SuperMix (Beijing TransGen Biotech Co., Ltd., Beijing, China); then 1 µl cDNA was added to a 20-µl reaction mixture containing 0.5X SYBR Green, 1X TransStart Green qPCR Supermix (TransGen Biotech) and 0.5 µmol/l primer sets. The cycling conditions were as follows: 95˚C for 5 min for 1 cycle, followed by 95˚C for 45 sec, 57˚C for 20 sec and 72˚C for 20 sec for 40 cycles. The expression levels of PDCD5 were internally normalized to β-actin. The relative expression level of PDCD5 mRNA was calculated by the 2^-ΔΔCt method. Each experiment was performed in duplicate and repeated three times.

Cell viability assay. The cell growth rate was determined by MTT assay (Sigma-Aldrich). Briefly, cells (100 µl) at the logarithmic growth phase were seeded at a density of 1x10^4/ml into 96-well culture plates. MTT solution (10 µl; 5 mg/ml) was added to each well and incubated at 37˚C for 4 h. Following centrifugation at 1,409 x g for 10 min, the supernatant was discarded and 100 µl DMSO was added. When the remaining formazan pellet was dissolved completely, the absorbance values at a 570 nm wavelength were read on an ELISA plate reader (Bio-Rad, Hercules, CA, USA). The entire procedure was repeated three times.

Flow cytometric analysis of the cell cycle. HepG2 cells at the logarithmic growth phase were seeded in 6-well plates. After reaching 50% confluence, the adherent cells were cultured in serum-free medium for 24 h and then cultured in DMEM supplemented with 10% FBS. After 48 h, the cells were digested and harvested with 250 µl trypsin. The cell pellet was obtained following centrifugation for 3 min at 4˚C and 978 x g, re-suspended in 300 µl ice-cold PBS on ice, and then fixed in 70% ethanol at 4˚C for 30 min. Finally, 1 ml propidium iodide (PI) staining solution (20 µg/ml PI, 0.1% Triton X-100, 2 mM EDTA and 8 µg/ml DNase-free RNase) was added to the samples, and the data were acquired and analyzed on a FACScan (Becton-Dickinson, San Francisco, CA, USA). Results were acquired from 10,000 cells.

Flow cytometric analysis of apoptosis. An Annexin V-fluorescein isothiocyanate (FITC) apoptosis detection kit (BD Biosciences Clontech, CA, USA) was used to identify the translocation of phosphatidylserine. The HepG2 cells were cultured with 5 µg/ml cisplatin. After 24 h, 2x10^5 cells from each well were harvested. After being washed with PBS buffer, the cell pellet was incubated with 2.5 µl Annexin V and 5 µl PI (final concentration, 10 µg/ml) in 100 µl 1X binding buffer for 15 min in the dark. Apoptosis was determined by flow cytometry and analyzed using CellQuest and Modfit software (Becton-Dickinson). At least 10,000 events were analyzed for each sample.
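The fold-change arithmetic of the 2^-ΔΔCt method used in the qPCR analysis above is simple to make explicit. The following is a minimal, hypothetical Java sketch, not code from the study; the Ct values and sample labels in the example are invented for illustration only.

```java
// Minimal sketch of relative quantification by the 2^-ddCt method.
// All Ct values below are illustrative, not data from the study.
public class DeltaDeltaCt {
    /** Fold change of a target gene in a treated vs. a control sample,
     *  each normalized to a reference gene (here, beta-actin). */
    static double foldChange(double ctTargetTreated, double ctRefTreated,
                             double ctTargetControl, double ctRefControl) {
        double dCtTreated = ctTargetTreated - ctRefTreated;  // normalize treated sample
        double dCtControl = ctTargetControl - ctRefControl;  // normalize control sample
        double ddCt = dCtTreated - dCtControl;
        return Math.pow(2.0, -ddCt); // assumes ~100% amplification efficiency
    }

    public static void main(String[] args) {
        // e.g. PDCD5 vs beta-actin, HepG2-PDCD5 vs HepG2-Neo (made-up numbers)
        double fc = foldChange(22.1, 18.0, 26.3, 18.2);
        System.out.printf("Relative PDCD5 expression: %.2f-fold%n", fc); // 16.00-fold
    }
}
```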
Cell migration assay. The Boyden chamber invasion assay was performed to evaluate the in vitro migration of the HepG2-Neo and HepG2-PDCD5 cells in a 24-well tissue culture plate with a Transwell filter membrane. The lower side of the filters was coated with type I collagen (0.5 mg/ml). The cells were seeded in the upper part of the Transwell plate at a density of 5x10^5/ml, with 100 µl of cell suspension in each well. After 24 h, the cells on the upper surface of the filter were removed, and the remaining cells were fixed with methanol and stained with hematoxylin and eosin (Sigma-Aldrich). Cell counting was performed under light microscopy (x200 magnification), and the cells that had migrated to the lower chamber were regarded as migrated cells. Each sample was assayed in triplicate, and the assay was repeated twice.

Western blot analysis. Proteins from the HepG2 cells were extracted and their concentrations were determined using a bicinchoninic acid protein concentration assay kit (Beijing Biosea Biotechnology, Co., Ltd., Beijing, China). The cell lysates (50 µg) were resolved on 15% SDS-polyacrylamide gels, electrophoretically transferred to polyvinylidene difluoride membranes and then incubated with primary monoclonal mouse antibodies against PDCD5, phospho-p53, Snail or insulin-like growth factor (IGF)-1 (Santa Cruz Biotechnology, Inc., Santa Cruz, CA, USA). The horseradish peroxidase-conjugated rabbit anti-mouse secondary antibody was used at a 1:1,000 dilution for 2 h at room temperature. Blots were visualized using the chemiluminescence method. β-actin was used as an internal control.

Statistical analysis. All quantitative data are expressed as the mean ± standard deviation. The statistical analysis was performed using commercially available SPSS 14.0 software (SPSS, Inc., Chicago, IL, USA). Student's t-test (unpaired, two-tailed) was performed to compare the means between two groups. The means of multiple groups were compared using one-way analysis of variance. P<0.05 was used to indicate a statistically significant difference.

Results

Stable overexpression of PDCD5 in the HepG2-PDCD5 cells was confirmed by RT-qPCR and western blotting (Fig. 1).

Changes in the biological behavior of tumor cells following stable transfection of PDCD5. To analyze the effect of PDCD5 overexpression on cancer cell growth, an MTT assay and flow cytometry were performed to assess cell proliferation and the cell cycle. Cell proliferation was significantly slower in the HepG2-PDCD5 cells compared with the HepG2 and HepG2-Neo cells (Fig. 2A). Next, an analysis of the cell cycle was performed on the HepG2, HepG2-Neo and HepG2-PDCD5 cells. The percentage of cells in the G2/M phase was significantly higher in the HepG2-PDCD5 cells compared with the HepG2 and HepG2-Neo cells (P<0.05) (Fig. 2B). This indicated that PDCD5 overexpression induces G2/M cell cycle arrest in HepG2 cells.

PDCD5 protein enhances the sensitivity of HepG2 cells to cisplatin in vitro. The HepG2-Neo and HepG2-PDCD5 cells were treated with varying concentrations of cisplatin, and the chemotherapeutic sensitivity was assessed by Annexin V-FITC and PI double staining. Following treatment with 2.5, 5 and 10 µg/ml cisplatin for 24 h, the numbers of apoptotic HepG2-Neo and HepG2-PDCD5 cells increased in a dose-dependent manner (Fig. 3A and B). In the cells without cisplatin treatment, no difference was found in the apoptotic rate between the HepG2-Neo and HepG2-PDCD5 cells. PDCD5 interacts with p53 proteins in a variety of cancer cells. To assess the effect of PDCD5 on p53 protein, the level of phospho-p53 protein was measured by western blot analysis.
Phospho-p53 protein expression was increased following 24 h of treatment with cisplatin in the HepG2-Neo and HepG2-PDCD5 cells. Moreover, the HepG2-PDCD5 cells showed higher phospho-p53 protein levels compared with the HepG2-Neo cells following cisplatin treatment (Fig. 3C and D). No difference was found in the p53 protein expression level between the HepG2-Neo and HepG2-PDCD5 cells without cisplatin treatment.

PDCD5 overexpression reduces invasion and epithelial-mesenchymal transition (EMT) induced by TGF-β. The HepG2-Neo and HepG2-PDCD5 cells were incubated with TGF-β (2 ng/ml) for 24 h to induce invasion and EMT. The Boyden chamber invasion assay showed that PDCD5 transfection had no effect on the invasion index in the cells without TGF-β treatment. However, in the TGF-β-treated HepG2 cells, PDCD5 transfection significantly decreased the invasion index compared with the cells transfected with the control plasmid (P<0.05) (Fig. 4A). The expression of the EMT marker, Snail, was determined by western blotting. In the cells transfected with the HepG2-Neo control plasmid, Snail protein was significantly increased by TGF-β treatment. However, PDCD5 transfection significantly decreased Snail protein expression in the HepG2 cells treated with TGF-β (P<0.05) (Fig. 4B and C). IGF-1 protein expression was also measured in the HepG2 cells by western blotting. TGF-β treatment significantly increased the IGF-1 protein expression level in the HepG2-Neo cells, and this increase was attenuated in the HepG2-PDCD5 cells (Fig. 4B and C). This indicates that PDCD5 may downregulate IGF-1 protein expression in TGF-β-treated HepG2 cells.

Discussion

In the present study, the PDCD5 gene was stably transfected into a human HCC cell line to induce its overexpression. It was demonstrated that the transfection of PDCD5 into HepG2 cells could change the biological behavior of the tumor cells, including cellular proliferation, cell cycle progression, cisplatin sensitivity, tumor invasion and EMT. The growth of the HepG2-PDCD5 cells was slower than that of the HepG2 and HepG2-Neo cells. A higher percentage of HepG2-PDCD5 cells were in the G2/M phase compared with the HepG2 and HepG2-Neo cells. The PDCD5-transfected cells showed higher sensitivity to cisplatin treatment compared with the HepG2-Neo cells, with higher p53 protein expression. PDCD5 overexpression can attenuate the tumor invasion, EMT and IGF-1 protein expression induced by TGF-β treatment.

In the present study, the HepG2-PDCD5 cells showed decreased proliferation compared with the HepG2 and HepG2-Neo cells, as demonstrated by the presence of fewer viable cells. This indicates that PDCD5 may participate in the pathogenesis of tumors and may be associated with various clinicopathological factors in HCC patients. Decreased expression of PDCD5 has been observed in various human tumors, and is correlated with high-grade astrocytic gliomas (9), a higher Gleason grade in prostate cancer (6), and an advanced International Federation of Gynecology and Obstetrics stage and poorer survival in epithelial ovarian carcinomas (8). To the best of our knowledge, no study has been reported on the correlation between PDCD5 expression and clinicopathological factors in HCC patients; this requires further investigation. The present study found that the stable transfection of PDCD5 could also induce cell cycle arrest, as demonstrated by the higher percentage of HepG2-PDCD5 cells in the G2/M phase compared with the HepG2 and HepG2-Neo cells.
This suggests that the decreased number of viable HepG2-PDCD5 cells is partly caused by the inhibition of cell cycle progression. G2/M is an important cell cycle checkpoint prior to cells entering the mitotic phase. In the present study, G2/M arrest following PDCD5 transfection likely depended on functional p53, since G2/M arrest has been reported not to occur in cells with inactive p53 (16,17). This indicates that in HepG2 cells the p53 protein is intact and cisplatin resistance could be reversed (18). Therefore, the present study evaluated whether PDCD5 overexpression enhances the sensitivity of HepG2 cells to cisplatin in vitro.

Cisplatin is a platinum-based chemotherapeutic drug used in the clinical treatment of a variety of tumors. Cisplatin functions by restraining DNA replication, leading to cell cycle arrest and apoptosis. Long-term cisplatin exposure is associated with the development of resistance (20), and co-localization of p53 and PDCD5 protein has been found in the synergistic therapeutic effect of PDCD5 with cisplatin (12). The present results were further supported by a recent study showing that the knockdown of PDCD5 by RNA interference decreased the level of p53 phosphorylation (21). This indicates that PDCD5 may function as a co-activator of p53 upon DNA damage, such as that caused by cisplatin treatment.

The present study also found that PDCD5 overexpression could reduce the invasion and EMT induced by TGF-β. Tumor invasion is a complex, multi-step process that involves alteration of cell adhesion to extracellular matrix proteins. TGF-β has been shown to promote vascular invasion in HCC (22). The present results showed that the pro-invasive effect of TGF-β was reversed by PDCD5 overexpression. There are few studies on the correlation between PDCD5 and tumor invasion; however, a correlation has been indicated in rheumatoid arthritis, an inflammatory disease. The PDCD5 levels in the plasma and synovial fluid of patients with rheumatoid arthritis were shown to be inversely associated with two inflammatory cytokines, tumor necrosis factor (TNF)-α and interleukin (IL)-17 (23,24). TNF-α-mediated nuclear factor-κB expression promotes the invasion of HCC cells, and its downregulation mediates the anti-invasive effect of a number of drugs (25). IL-17 can also promote the invasion and metastasis of HCC cells (26). EMT is a vital step in the malignant progression of epithelial cells, characterized by loss of cell adhesion and the acquisition of a malignant phenotype, including the capacity for cell motility, migration, invasion and metastasis to a new location (27). The present results showed that, following TGF-β treatment, the HepG2-PDCD5 cells exhibited decreased expression levels of Snail, an EMT marker protein, compared with the HepG2-Neo cells.

[Figure 4 legend: PDCD5 transfection inhibited the Snail and IGF-1 protein expression induced by TGF-β. Western blotting was performed to detect Snail and IGF-1 protein levels in HepG2 cells treated with or without TGF-β (2 ng/ml) for 24 h; β-actin served as a loading control. One representative blot from three independent experiments is shown. (C) Relative expression of Snail and IGF-1 protein in the four groups; the Y axis indicates the gray value of Snail or IGF-1 normalized to that of β-actin. Data are expressed as the mean ± standard deviation; two-tailed, unpaired t-test; *P<0.05 (n=3). HepG2-Neo, HepG2 cells transfected with control plasmid; HepG2-PDCD5, HepG2 cells transfected with PDCD5 plasmid; TGF-β, transforming growth factor-β; IGF-1, insulin-like growth factor 1.]
This means that PDCD5 has an inhibitory effect on EMT in HCC cells, providing a novel anti-tumor strategy for treating HCC. The present study found that TGF-β treatment significantly increased IGF-1 protein expression in the HepG2-Neo cells, and that this increase was attenuated in the HepG2-PDCD5 cells, indicating negative regulation of IGF-1 by PDCD5. IGF-1 is mainly secreted by the liver as a result of stimulation by growth hormone. IGF-1 plays roles in the promotion of cell proliferation and the inhibition of apoptosis. In human prostate cancer cells, IGF-1 upregulates ZEB1 and drives EMT (28). The present results support an effect of IGF-1 on EMT in HCC cells. Recently, a study showed negative correlations between IGF-1 and PDCD5 at the mRNA and protein levels (29). That study inferred that IGF-1 may downregulate the expression of PDCD5. However, the present results indicate that IGF-1 may be downregulated by PDCD5. This disparity requires further investigation.

In conclusion, stable transfection of the PDCD5 gene can inhibit growth and induce cell cycle arrest in the G2/M phase in HepG2 cells. PDCD5 transfection can also enhance sensitivity to cisplatin and p53 phosphorylation, and reverse the invasion and EMT induced by TGF-β. PDCD5 represents a novel drug target and therapeutic strategy for the improved chemotherapeutic treatment of HCC.
2018-04-03T03:29:28.651Z
2014-10-29T00:00:00.000
{ "year": 2014, "sha1": "c541de22f7cfaf6a5976650d1ad1da1a007b8de6", "oa_license": "CCBY", "oa_url": "https://www.spandidos-publications.com/ol/9/1/411/download", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c541de22f7cfaf6a5976650d1ad1da1a007b8de6", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
55960931
pes2o/s2orc
v3-fos-license
Review of Tax Shield Valuation and Its Application to Emerging Markets Finance

Due to the existence of tax-deductible expenses, a tax advantage, called the tax shield, arises. The aim of this chapter is to identify and define the well-known approaches associated with tax shields, mainly the interest tax shield, and to analyze the approaches used to quantify the present value of interest tax shields. Finally, we identify those approaches that can be used under the conditions of emerging markets.

Introduction

The issue of tax shields is an increasingly important object of interest for both business managers and academics. Worldwide, the volume of leveraged buyouts and management buyouts (MBOs) has increased in recent years, and in such transactions debt is an important component of value [1].

Tax-deductible expenses generate tax savings (tax shields), which significantly affect business decision-making, especially investment decisions and capital structure choices. The most important sources of tax savings are interest and depreciation; therefore, tax shields are divided into two main categories: interest and non-interest tax shields.

More than 50 years of research on tax shields has produced a number of theories for quantifying them. The main area of research is the interest tax shield, which has a direct influence on the company's decisions about capital structure and the acceptance or rejection of investment projects.

This chapter focuses on the identification and analysis of selected methods for measuring the value of the tax shield, with an emphasis on the interest tax shield. In Section 2, we define the tax shield and review the main tax shield valuation models; these models are subdivided in accordance with the assumed corporate debt policy. Section 3 is focused on tax shield models in which the book value of debt is assumed. In Section 4, we summarize the findings from the previous sections and examine which models are applicable in emerging markets; we also analyze which factors affect the value of the tax shield and how the identified gaps can be addressed. Section 5 sums up the chapter.

Main tax shield valuation theories

Within this section, we focus on defining the tax shield and on the breakdown of tax shield theories according to debt policy, which divides them into two categories: theories assuming fixed debt and theories assuming constant leverage. For investment decision-making, the present value of the tax shield is an important quantity. The criterion for choosing an appropriate method of quantification is the nature of the debt policy, which is part of corporate financial management. Debt policy is the source of the differences between the theories, as it determines which discount rate is chosen to quantify the present value of the tax shield.

Among economists, there is no consensus about which theory is correct; a source of disagreement is the discount rate used in calculating the present value of the tax shield. Copeland et al. [2] argue: The finance literature does not provide a clear answer about which discount rate for tax benefit of interest is theoretically correct.
Definition of tax shield

The tax shield is the result of the tax deductibility of business expenses. It is defined as follows: A tax shield is a reduction in taxable income for an individual or corporation achieved through claiming allowable deductions such as mortgage interest, medical expenses, charitable donations, amortization and depreciation. These deductions reduce a tax payer's taxable income for a given year or defer income taxes into future years. Tax shields lower the overall amount of taxes owed by an individual taxpayer or a business [3].

It follows from this definition that the source of the tax shield (also called the tax benefit, tax advantage, or TS) is the various types of deductible business expenses. The most significant sources include interest and other deductions; therefore, tax shields are divided into interest and non-interest tax shields. According to Brealey et al. [4], an interest tax shield is defined as: tax savings resulting from deductibility of interest payments. According to Damodaran [5], the interest tax shield is expressed in a similar vein: Interest is tax-deductible, and the resulting tax savings reduce the cost of borrowing to firms.

The first impulse for the development of different approaches to quantifying the tax shield was the theory of Modigliani and Miller [6]; the authors created the first widely accepted theory of capital structure. The model assumes a perfect capital market, a risk-free interest rate and zero taxation of corporate income. Under these assumptions, the value of the business is determined by its real assets, and capital structure is irrelevant to that value; it is therefore not important whether the company is levered or not. The main flaw of this theory, however, was the absence of taxes. This unrealistic assumption was removed in the modified model of Modigliani and Miller [7], abbreviated the MM model, with the result that the value of the company increases with the growth of the company's leverage. The newly created value results from the tax deductibility of interest and represents the value of the tax shield. The value of the levered company is given by Eq. (1) and shown in Figure 1:

V_L = V_U + t · D   (1)

The value of the tax shield is simply given as the corporate tax rate times the cost of debt times the market value of debt:

TS = t · k_d · D   (2)

If the debt is constant and perpetual, the company's tax shield depends only on the corporate tax rate and the value of debt, and the present value of the tax shield equals the value of Eq. (2) discounted as a perpetuity:

PV(TS) = (t · k_d · D) / k_d = t · D   (3)

Eq. (2) is the formula for calculating the interest tax shield based on the Modigliani and Miller theory [7], and Eq. (3) is the formula for its present value. It is based on the assumption that the main source of tax shields (hereinafter TS) is the interest accruing from the company's leverage.
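As a quick numerical illustration of Eqs. (2) and (3), with entirely hypothetical figures (a 25% tax rate, a 6% cost of debt and perpetual debt of 1,000):

$$ TS = t\,k_d\,D = 0.25 \times 0.06 \times 1000 = 15, \qquad PV(TS) = \frac{TS}{k_d} = \frac{15}{0.06} = 250 = t \cdot D. $$

The interest deduction saves 15 in taxes each year, and capitalizing this perpetual saving at the cost of debt gives exactly the MM result t·D.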
It should be noted that the tax shield is influenced by three variables: the tax rate, the cost of debt and the value of debt. Liu [8], in contrast to the previous formulae, considers the tax shield to be influenced by four variables: Tax shield is a function of four variables "net income, interest rate, debt, and tax rate." However, the value of the MM tax shields only includes two variables "debt and tax rate," is independent of interest rate, and cannot be true. Tham and Velez-Pareja define two different methods of calculating the present value of tax shields: There are two ways to define the present value of tax shield (PVTS). First, the PVTS is simply the tax shield (TS), discounted by the appropriate discount rate for the tax shield ψ. Second, the PVTS is the difference in the taxes paid by the unlevered and levered firms [9]. Fernandez, on the other hand, argues that only one definition is true: the value of tax shields is the difference between the present values of two different cash flows, each with their own risk: the present value of taxes for the unlevered company and the present value of taxes for the levered company [10].

These definitions are ambiguous and suggest that the value of the tax shield is a function of multiple quantitative and qualitative variables; one of the key variables is the company's debt policy.

Modigliani-Miller model

The first of the analyzed theories is the model of Modigliani and Miller [7] (hereinafter MM), which was outlined in the previous section. According to the assumptions of the model, the company can borrow and lend money on perfect capital markets at the risk-free rate, and the market value of debt is constant. For this reason, the tax savings (tax shield) are risk-free, and the appropriate discount rate is the risk-free rate. Eq. (4) is accordingly similar to Eq. (3):

PV(TS) = (t · r_f · D) / r_f = t · D   (4)

The previous model is based on the conditions of an efficient capital market, so its use is limited. Given that the MM model assumes zero costs of financial distress, the enterprise could theoretically be funded only by debt. If the tax rate does not change, the marginal benefit resulting from the debt is equal to the tax rate, and the value of the company changes in proportion to the value of debt.

This model has been criticized for its unrealistic and very restrictive assumptions. Nevertheless, the model is known as the basis of the theory of corporate finance, and it clearly defines the upper limit of business value.

Other tax shield theories if debt is constant

Similar to the model of Modigliani and Miller, there are other approaches that assume fixed debt. The risk of the debt determines the discount rate, and its choice varies according to the authors' opinions.

Myers [11] first suggested the adjusted present value (APV) method, which is used for the valuation of investment projects. The model is based on several assumptions: the first one is to determine the value of a company as the sum of the unlevered business value and the value of the tax shield. The impact of dividend policy is neglected. The company generates a perpetual cash flow, which is known with certainty at time t = 0. The market value of debt is known, and the debt is perfectly correlated with the value of the interest tax savings. Therefore, the debt and the tax shield are equally risky, and both components should be discounted at the same discount factor (the cost of debt). The value of the tax shield is quantified according to Eq. (5).
PV(TS) = Σ_{i=1}^{∞} (t · k_d · D_{i-1}) / (1 + k_d)^i   (5)

The Ruback model [12] is based on the assumption that the debt is risky because the debt value changes due to changes in the cost of debt. The default option is disregarded. The debt has a constant (book) value known at i = 0. The appropriate discount rate is given by Eq. (6), and the cash flow from tax benefits is quantified according to Eq. (7). If the book value of debt is fixed, the Beta of the tax shield is equal to the Beta of debt (β_D). This implies that both the debt and the tax shield share the same systematic risk, and therefore the tax shield is discounted at the cost of debt. The value of the levered company is measured by the APV method as the sum of the value of the unlevered company and the value of the tax shield, with each component discounted at the appropriate discount rate:

V_L = V_U + PV(TS)   (8)

Kaplan and Ruback [13] logically pursued the previous model. They compared the market values of MBOs (management buyouts) and leveraged recapitalizations to the discounted values of their corresponding cash flow forecasts. To estimate the present value of these cash flows, they used a discount rate based on the capital asset pricing model (CAPM). The cost of capital is measured by the weighted average cost of capital before tax according to Eq. (9), which is the CAPM model for the unlevered company:

k_u = r_f + β_u · (E[r_m] − r_f)   (9)

Business value is measured by discounting the capital cash flow at the discount rate for the unlevered company. The authors used the so-called "compressed APV" method (APV_C), which, unlike the standard adjusted present value, assumes that the tax shields and the cash flow share the same systematic risk; both are discounted at the same discount rate, k_e = k_u. Under this approach the tax shield is riskier than in the previous models, as the discount rate used indicates.

Luehrman [14] focused his work on analyzing the use of the APV method for business valuation. He criticized the use of the weighted average cost of capital (WACC) for valuing the company because the method is applied inconsistently (the cost of each type of capital is calculated on the basis of book values instead of market values, and vice versa). Another critical point is leverage, a change in which necessitates a periodic revaluation of the WACC. The author suggested using the Myers model, in which two types of cost of capital are used as discount rates: the cost of equity of a comparable company and the cost of debt. The condition for this assumption is the existence of debt with a constant value over the entire estimated period.

Tax shield valuation theories if market leverage ratio is constant

The assumption of fixed debt is simple but unrealistic, since it requires the company to know its future debt. This financial strategy is relatively binding because it does not sufficiently reflect economic conditions and the emergence of favorable market conditions (e.g., a fall in interest rates). Therefore, the company may choose a less strict financial strategy.

A more realistic debt policy is based on constant leverage (a constant debt-to-equity or debt-to-value ratio). In the case of constant debt, future interest tax shields are deterministic because their future levels are known with certainty at time i = 0. The present value of these cash flows may change only in accordance with a change in the tax rate or in the discount rate reflecting both microeconomic and macroeconomic indicators. In the case of constant leverage, future interest tax shields are stochastic, and their future values can be estimated only with some probability.
From the point of view of the discount rate used in this approach, opinions are split, as individual authors estimate the risk of the tax shield differently.

Miles-Ezzell model

Miles and Ezzell [15,16], assuming a perfect capital market, state that the discount rate for the unlevered company, the cost of debt, the tax rate and the market leverage are constant during the existence of the investment project (or the company). The company value, as well as the free cash flow, is stochastic, and the company rebalances its capital structure regularly (most frequently every year) to maintain the target leverage. Therefore, the value of debt is known only in the first period, and this cash flow is deterministic. In the other periods, the value of debt is unknown, so the key component (debt) is stochastic. The tax shield likewise has a deterministic nature in the first period and is stochastic in the other periods. The appropriate discount rate for the interest tax shield is therefore the cost of debt in the first year and the unlevered cost of capital in the following years.

The basic difference among the MM approach, the theory of Myers and the Miles-Ezzell (hereinafter ME) model is the estimated riskiness of the tax shield, which determines its present value. The MM and Myers models are characterized by the discount rate k_d; the risk of the tax savings is the same as the riskiness of the debt. The ME approach uses the cost of debt only in the first year. Tax savings in the first year are deterministic, as in the MM approach (Myers model), which corresponds to that discount factor. In the following years, the cash flow resulting from tax benefits is stochastic, and the risk of this flow corresponds to the operational risk of the company.

Harris-Pringle model

The Harris and Pringle [17] model (hereinafter the HP model) builds on the previous model, again assuming constant leverage. The company continuously rebalances its capital structure to maintain a fixed debt-to-equity ratio. Therefore, the debt has a stochastic character, because its value can be estimated only with some probability and is unknown in all periods, including the first one. If the value of debt is unknown, the tax shield is stochastic too. The appropriate discount rate is the unlevered cost of capital, which takes into account the risk of the tax benefit. The present value of the interest tax shield is therefore given by Eq. (12):

PV(TS) = Σ_{i=1}^{∞} (t · k_d · D_{i-1}) / (1 + k_u)^i   (12)

The authors clarify the benefits of the model as follows: "the MM position is considered too extreme by some because it implies that interest tax shields are no more risky than the interest payments themselves. The Miller position is too extreme for some because it implies that debt cannot benefit the firm at all. Thus, if the truth about the value of tax shields lies somewhere between the MM and Miller positions, a supporter of either Harris and Pringle or Miles and Ezzell can take comfort in the fact that both produce a result for unlevered returns between those of MM and Miller. A virtue of either Harris and Pringle compared to Miles and Ezzell is its simplicity and straightforward intuitive explanation." [17].

Other models assuming constant leverage

The Miles and Ezzell and Harris and Pringle models are the most commonly applied approaches when constant leverage is assumed. In addition to his constant-debt model, Ruback [12] also developed a model based on fixed leverage; its formula for calculating the present value of interest tax shields is consistent with the Harris and Pringle model.
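To make the effect of these discount-rate assumptions concrete, the following is a minimal, illustrative Java sketch, not taken from any of the cited papers. It computes the present value of interest tax shields for a finite, hypothetical debt schedule under the Myers convention (discount at k_d) and the Harris-Pringle convention (discount at k_u); all inputs are invented for illustration.

```java
import java.util.List;

// Illustrative comparison of PV(TS) under two discounting conventions:
// Myers/APV discounts the shield t*kd*D(i-1) at the cost of debt kd;
// Harris-Pringle discounts the same expected shield at the unlevered
// cost of capital ku. Inputs are hypothetical.
public class TaxShieldPV {
    static double pvTaxShield(List<Double> debtSchedule, double taxRate,
                              double kd, double discountRate) {
        double pv = 0.0;
        for (int i = 1; i <= debtSchedule.size(); i++) {
            // debtSchedule.get(i-1) is the debt outstanding during year i
            double shield = taxRate * kd * debtSchedule.get(i - 1);
            pv += shield / Math.pow(1.0 + discountRate, i);
        }
        return pv;
    }

    public static void main(String[] args) {
        List<Double> debt = List.of(1000.0, 800.0, 600.0, 400.0, 200.0);
        double t = 0.25, kd = 0.06, ku = 0.10;
        System.out.printf("Myers (at kd):          %.2f%n", pvTaxShield(debt, t, kd, kd));
        System.out.printf("Harris-Pringle (at ku): %.2f%n", pvTaxShield(debt, t, kd, ku));
    }
}
```

As expected, discounting at the higher rate k_u yields a lower value, reflecting the assumption that the tax shield carries the operating risk of the firm rather than the lower risk of its debt.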
On the other hand, Lewellyn and Emery [18] suggested three different methods for calculating tax shields. In their view, the Miles and Ezzell method is the most consistent and correct.

Myers, apart from the model described in Section 2.2.2, extended his model in Ref. [4] to the condition of constant leverage (a constant debt-to-equity ratio): the risk of interest tax shields is the same as the risk of the project. Therefore, we will discount the tax shields at the opportunity cost of capital (r). The appropriate discount rate is then the unlevered weighted average cost of capital.

Other authors combine both approaches (Miles and Ezzell; Harris and Pringle) as well as the Myers model for the case in which the company assumes fixed debt. Taggart [19] summarized the valuation models according to the impact of personal taxes and suggested using the ME model if the company rebalances debt annually and the HP model if the company rebalances debt continuously. Inselbag and Kaufold [20] recommend using the Myers model if the value of debt is constant and the Miles and Ezzell model in the case of fixed leverage.

Damodaran [21] did not state a formula for the value of the tax shield, but Fernandez [22] derived, from the Damodaran equation (30), the present value of the tax shield given by Eq. (13). Fernandez, in relation to the cost of capital, also mentioned the Practitioners' method, which is used by consultants and investment banks; he derived the corresponding formula for the present value of the tax shield from the formula for levered Beta.

Arzac and Glosten [23], based on the approach of Miles and Ezzell, developed a unique method that eliminates the discount rate. They used a "pricing kernel", a stochastic discount factor, and derived formulas for the company market value, the market value of equity and the market value of the tax shield using an iterative process. The authors then noted: "The value of tax shield depends upon the nature of the equity stochastic process, which, in turn, depends upon the free cash flow process." [23] If the second part of Eq. (15) is equal to zero, the model is identical to the Modigliani and Miller model.

Grinblatt and Liu [24] developed one of the most general approaches to determining the value of the tax shield. Their approach differs from all the other models, since the Black-Scholes and Merton option models are applied. The model assumes that information follows a Markov diffusion process and that the market is dynamically complete. The model can quantify any cash flow and tax shield. The approach is mathematically correct but practically difficult to apply due to its many abstract assumptions.

Liu [8] developed a model assuming that the value of the tax shield depends on four variables: net income, the interest rate, debt and the tax rate. The tax shield is divided into two parts, the earned tax shield and the unearned tax shield, depending on whether the interest rate is higher or lower than the return on investment (ROI). The author himself noted that his theory is inconsistent with other approaches.
Fernandez model

The Fernandez model for calculating the value of the tax shield differs from those in the previous cases. Fernandez argued that his approach is independent of debt policy [10]. The basic idea is that the value of the tax shield is not equal to the present value of tax shields; rather, the value of tax shields (VTS) is the difference between the present values of two cash flows, each with a different risk: the present value of taxes paid by the unlevered company and the present value of taxes paid by the levered company. Figure 2 shows the business value according to Ref. [25].

The tax paid by the unlevered company is proportional to the free cash flow; the two are equally risky, and the appropriate discount rate in the case of perpetuity is the unlevered cost of capital. The tax paid by the levered company is proportional to the equity cash flow (ECF), and the appropriate discount rate for estimating its present value is the cost of equity, since the risk of both flows is consistent in the case of perpetuity. The value of the tax shield is equal to the difference between the present values of these cash flows:

VTS = Taxes_U / k_u − Taxes_L / k_e   (16)

which, for a no-growth perpetuity, reduces to Eq. (17):

VTS = t · D   (17)

Eq. (17) is identical to the MM model [7], but Fernandez claimed it could be valid irrespective of debt policy. In the case of constant growth, Eq. (17) takes the form of Eq. (18):

VTS = (D · t · k_u) / (k_u − g)   (18)

Despite the novelty of this model, it has been criticized. It should be noted that the equity cash flow is not equal to taxable income, since new debt increases the equity cash flow without increasing taxes. The book value of debt is stochastic and positively correlated with the unlevered equity. Taxes paid by unlevered companies carry a lower risk than the ECF (hence a different discount rate). There is further criticism of the combination of two different approaches (zero growth and non-zero growth) [26]. Cooper and Nyborg argued that Fernandez developed the model by combining two different approaches (MM and ME) and that the value of tax shields is therefore in fact equal to the present value of tax shields. Based on the Fernandez approach, these authors found that the value of the tax shield is identical to that of the Harris and Pringle model in the case of perpetuity [27].

Fernandez [28] subsequently modified the original model, restating the present value of taxes paid by the levered company; Eq. (20) then expresses the difference between the present values of taxes paid by the unlevered and the levered company. This equation indicates that the value of the tax shield should depend only on the nature of the stochastic process of the net increase of debt, and should not depend on the nature of the stochastic process of the free cash flow. The issue is then to estimate the present value of ΔD, which requires estimating a discount rate that depends on the nature of the stochastic process of the net increase of debt. The process may be one of the following [29]:

• fixed debt,
• debt proportional to the equity value,
• debt increases that are as risky as the free cash flow,
• debt of one-year maturity that is perpetually rolled over.

Tax shield valuation theories with book value of debt

There are alternative models based on book values. Book values are important when deciding on debt policy. Market values better reflect current value and stock market volatility; nevertheless, the unreliability of market values was highlighted particularly during the financial crisis of 2009.
Another important consideration is the use of book values to measure the creditworthiness of businesses. Credit rating agencies (CRAs) take into account both financial and non-financial factors. Leverage and the interest coverage ratio are considered key determinants of the credit rating, and they are quantified using book values.

The last important factor is the weak development of some capital markets, for example, in emerging markets. There are relatively few listed companies in Central and Eastern Europe as well as in other emerging markets. The capital market does not provide enough of the relevant information needed to apply market-based models. Moreover, in these countries, a large number of small and medium enterprises, often family owned, meet the conditions for achieving tax savings, but the previous models are not relevant to them.

Fernandez model for book leverage ratio

Fernandez, in this model, assumed that the company sets its debt policy on the basis of a target book leverage ratio [30]. Debt is the product of the book leverage ratio and the book value of equity. The value of the unlevered company, assuming a growing perpetuity, is given by Eq. (21). The present value of the debt change ΔD_t must be known in order to estimate the value of the tax shield. If the company estimates the present value of the debt change according to Eq. (22), the value of the tax shield with a constant book leverage ratio is given by Eq. (23).

Fernandez highlighted several advantages of using constant book leverage instead of market leverage [30]:

• CRAs focus on book-value leverage ratios,
• the value of debt does not depend on the movements of the stock markets,
• it is easier to follow for non-quoted companies,
• the empirical evidence provides more support for the fixed book leverage ratio hypothesis.

Velez-Pareja model

Velez-Pareja defined the tax shield similarly to other authors: "Tax shields or tax savings TS, are a subsidy that the Government gives to those who incur in deductible expenses. All deductible expenses are a source of tax savings. This is, labour payments, depreciation, inflation adjustments to equity, rent and any expense if they are deductible." [31].

If it is assumed that the main source of tax savings is interest, the company achieves the full tax advantage if earnings before interest and taxes (EBIT) plus other income (OI) are sufficient to offset the interest paid by the company. In this case, the value of the tax shield is equal to the tax rate multiplied by financial expenses (FE). If the value of EBIT plus OI is less than the amount of financial expenses, the company does not pay corporate income tax; nevertheless, it still generates a tax shield, whose value is equal to the corporate tax rate times EBIT plus other income, according to Eq. (24):

TS = t · (EBIT + OI)   (24)

Another possible scenario occurs if the sum of EBIT and OI is negative: tax savings do not arise, because the company does not pay any tax. In sum, all possible cases are given in Eq. (25):

TS = t · FE            if EBIT + OI ≥ FE,
TS = t · (EBIT + OI)   if 0 ≤ EBIT + OI < FE,
TS = 0                 if EBIT + OI < 0.   (25)

This is significant for further research, since most of the literature dealing with tax shields is based on Eq. (2). It also means that both new businesses and start-ups can achieve partial tax savings, even when EBIT plus OI cannot cover the value of the financial expenses.
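The case analysis in Eq. (25) translates directly into code. The following is a minimal, hypothetical Java sketch (not code from [31]); the variable names and the sample figures are illustrative only.

```java
// Illustrative implementation of the piecewise tax-shield rule described
// above (Velez-Pareja): the shield is capped by what EBIT plus other
// income can absorb, and vanishes when that sum is negative.
public class PiecewiseTaxShield {
    static double taxShield(double ebit, double otherIncome,
                            double financialExpenses, double taxRate) {
        double base = ebit + otherIncome;
        if (base >= financialExpenses) {
            return taxRate * financialExpenses;  // full shield: t * FE
        } else if (base >= 0) {
            return taxRate * base;               // partial shield: t * (EBIT + OI)
        } else {
            return 0.0;                          // no taxable income, no shield
        }
    }

    public static void main(String[] args) {
        double t = 0.25, fe = 100.0;
        System.out.println(taxShield(300.0, 20.0, fe, t)); // 25.0 (full shield)
        System.out.println(taxShield(50.0, 10.0, fe, t));  // 15.0 (partial shield)
        System.out.println(taxShield(-40.0, 10.0, fe, t)); // 0.0 (no shield)
    }
}
```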
Eq. (25) indicates that the value of the tax shield should be a function of EBIT plus OI, and not a function of net income as Liu [8] argued in his theory. Eq. (26) expresses the relation between the dependent and independent variables, and Figure 3 shows the course of the tax shield as a function of the sum of earnings before interest and taxes and other income. (Figure 3. Tax shield as a function of EBIT plus other income [32].)

Marciniak model

Marciniak [33] suggested a decomposition method for business valuation. The basis of the method is to divide the value into three different effects:

• cash flow from operating and investing activities,
• the tax shield, and
• a financial effect expressed as the difference between the cost of equity and the cost of debt.

The first component, the operating and investment cash flow (the free cash flow), is discounted at the cost of equity (instead of the weighted average cost of capital). The tax shield is quantified as the sum of the taxes saved on interest (the corporate tax rate times interest). The financial effect is the product of debt and the difference between the cost of equity and the cost of debt, and it is also discounted at the cost of equity. This last component of the business value (the financial effect) is positive if the required return on equity is higher than the cost of debt, and vice versa. Eq. (27) expresses the value of the levered company as the sum of the present values of these three factors.

Unlike Myers' adjusted present value, the decomposition method discounts all cash flows at the same discount rate (the cost of equity); in this respect it resembles the Kaplan and Ruback model. One of the advantages of the model is that it is not necessary to estimate the weighted average cost of capital. Based on this method, Marciniak derived the value of the tax shield expressed in Eq. (28):

PV(TS) = Σ_{i=1}^{∞} (t · k_d · D_i) / (1 + k_e)^i   (28)

This model is similar to the Harris and Pringle and Kaplan and Ruback models in that the cost of equity is used as the discount factor, but it assumes book value instead of market value.

Emerging markets finance and tax shield valuation

The previous sections show that the significant factors affecting the interest tax shield are:

• debt,
• the cost of debt (e.g., the interest rate),
• the corporate tax rate, and
• the discount rate reflecting the riskiness of the tax shield.

Each of these factors is influenced by other microeconomic and macroeconomic factors. The value of debt determines the capital structure of the company, and one of the primary objectives is to optimize it. There are different determinants of capital structure in developed and emerging markets, and this issue is a field of research in many studies. Booth et al. [34] investigated capital structure in developing countries. They found that capital structures in developed and developing countries are affected by the same firm-specific factors (like debt ratios). Nevertheless, they found differences in factors such as GDP growth, capital market development and inflation rates.

Bas et al. [35] also investigated capital structure in emerging markets, examining the capital structure in 25 countries from different regions. It should be noted that, according to their study, listed companies prefer equity financing to long-term debt financing. They also investigated the effect of company size: large companies are more diversified and face lower default risk, which permits higher leverage. Hence, small and large companies have different debt policies. Also, large and traded companies can more easily obtain access to finance, which depends more on the economic conditions of the country.
Jong et al. [36] examined the importance of country-specific and firm-specific factors in the leverage choice of companies from 42 countries. They found that the impact of several firm-specific factors (tangibility, company size, growth and profitability) on cross-country capital structure is significant and consistent with conventional theories.

According to the studies mentioned above, the capital structure in emerging markets is determined not only by factors similar to those in developed countries but also by specific factors, including the development of the capital market, inflation and the size of businesses [37]. The weak development of the capital market, especially the bond market, means that a company cannot take advantage of the possibility of issuing bonds. Therefore, it is not possible to determine the market value of debt, and market-value-based theories of the tax shield cannot be applied. Among the models reviewed in this chapter, we can therefore suggest the use of models based on the book value of debt, because they are suitable for all businesses regardless of size and of whether the company is traded on the capital market.

In addition to the debt value used (market versus book), estimating the cost of capital (the discount factor) is also problematic. For example, the cost of equity is traditionally estimated by the CAPM model; however, if the company is not listed, the model is inappropriate or inaccurate. The weighted average cost of capital is likewise difficult to quantify. Damodaran [37] has created a database to help estimate the cost of equity and debt; in addition, a build-up model is often used.

The tax shield is also affected by the tax system, by corporate and personal tax rates, and by losses carried forward, which affect the effective tax rate and the tax burden [38-40].

Under the conditions of emerging markets, the tax shield represents a significant source of value and is therefore part of several methods of investment decision analysis. Leasing is a frequent form of financing for small and medium enterprises, and the net advantage to leasing model includes an analysis of interest and depreciation tax shields. The value of the tax shield may also be a decisive factor in selecting a portfolio of investment projects (using a modified resource-constrained project scheduling problem with discounted cash flows). In addition, other methods of investment decision-making, such as risk analysis, may be adjusted for the existence of a tax shield [41-44].

Conclusion

The chapter has dealt with the analysis and classification of selected approaches to the quantification of tax shields. The theories are based on the premise of a perfect capital market and a clearly defined corporate debt policy. However, both assumptions often fail to hold under the realistic conditions of emerging markets; many businesses in emerging markets are not listed, and debt policy is determined on the basis of the book value of debt rather than a fixed market value of debt or market leverage.

The theories reviewed in this chapter have many gaps that prevent their correct use under the conditions of emerging markets. New theories reflecting real economic conditions are gradually emerging, but this makes it difficult to determine which model is correct. In their book, Copeland et al. investigated various models of the tax shield, and their opinion on the choice of the appropriate method is: We leave it to the reader's judgment to decide which approach best fits his or her situation [2].
Figure 1. Value of levered company according to Modigliani and Miller [7].
Figure 2. The value of unlevered and levered company according to Fernandez [25].
Program Transformations for Asynchronous and Batched Query Submission

The performance of database/Web-service backed applications can be significantly improved by asynchronous submission of queries/requests well ahead of the point where the results are needed, so that the results are likely to have been fetched already by the time they are actually needed. However, manually writing applications to exploit asynchronous query submission is tedious and error-prone. In this paper we address the issue of automatically transforming a program written assuming synchronous query submission into one that exploits asynchronous query submission. Our program transformation method is based on data flow analysis and is framed as a set of transformation rules. Our rules can handle query executions within loops, unlike some of the earlier work in this area. We also present a novel approach that, at runtime, can combine multiple asynchronous requests into batches, thereby achieving the benefits of batching in addition to those of asynchronous submission. We have built a tool that implements our transformation techniques on Java programs that use JDBC calls; our tool can be extended to handle Web service calls. We have carried out a detailed experimental study on several real-life applications, which shows the effectiveness of the proposed rewrite techniques, both in terms of their applicability and the performance gains achieved.

INTRODUCTION
In many applications, calls made to execute database queries or to invoke Web services are often the main causes of latency. Asynchronous or non-blocking calls allow applications to reduce such latency by overlapping CPU operations with network or disk IO requests, and by overlapping local and remote computation. Consider the program fragment shown in Example 1. In the example, it is easy to see that by making a non-blocking call to the database we can overlap the execution of method foo() with the execution of the query, and thereby reduce latency. Many applications, however, are not designed to exploit the full potential of non-blocking calls. Manual rewriting of such applications, although possible, is time-consuming and error-prone. Further, opportunities for asynchronous query submission are often not very explicit in the code. For instance, consider the program fragment shown in Example 2. In the program, the result of the query, assigned to the variable partCount, is needed by the statement that immediately follows the statement executing the query. For the code in the given form there would be no gain in replacing the blocking query execution call by a non-blocking call, as the execution would have to block on a fetchResult call immediately after making the submitQuery call. It is, however, possible to transform the given loop, as shown in Example 3, and thereby enable asynchronous query submission. The rewritten program in Example 3 contains two loops: the first submits the queries asynchronously, and the second fetches and processes their results. The original program is likely to be slow since it makes multiple synchronous requests to the database, each of which incurs network round-trip delays, as well as delays in the database. In contrast, the rewritten program allows the network round trips to be overlapped. It also allows the database to better use its resources (multiple CPUs and disks) to process multiple asynchronously submitted queries. Asynchronous calls have long been employed to make concurrent use of different system components, like the CPU and disk.
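To make the discussion concrete, the fragment below sketches the shape of Example 1 and its non-blocking variant; the names (stmt, foo, QueryHandle) are illustrative, and the submitQuery/fetchResult semantics are the ones defined formally in Section 2.

```java
// Blocking version: the program waits for the query before running foo().
ResultSet rs1 = stmt.executeQuery("SELECT name FROM users WHERE id = 42");
foo();

// Non-blocking version: submit early, overlap foo() with query execution,
// and block only at the point where the result is actually needed.
QueryHandle h = stmt.submitQuery("SELECT name FROM users WHERE id = 42");
foo();                                  // runs while the query executes remotely
ResultSet rs2 = stmt.fetchResult(h);    // blocks only if the query is unfinished
```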
In this paper, our focus is on the automated rewriting of application programs so as to submit multiple queries asynchronously, as illustrated in Example 3. In general, automatically transforming a given loop so as to make asynchronous query submissions is a non-trivial task, and we address this problem in this paper. The most closely related prior work is that of Guravannavar and Sudarshan [1], who describe how to rewrite loops in database applications to replace multiple executions of a query in a loop by a single execution of a set-oriented (batched) form of the query. Batching can provide significant benefits because it reduces the delay due to multiple synchronous round trips to the database, and because it allows more efficient query processing techniques to be used at the database. Our program transformation techniques for asynchronous query submission are based on the techniques described in [1], but unlike [1], we show how to exploit asynchronous query submission instead of batching. Although batching reduces round-trip delays and allows efficient set-oriented execution of queries, it does not overlap client computation with that of the server, as the client completely blocks after submitting the batch. Batching also results in a delayed response time, since the initial results from a loop appear only after the complete execution of the batch. Moreover, batching may not be applicable at all when there is no efficient set-oriented interface for the request invoked, as is the case for many Web services. As compared to batching, asynchronous submission of queries allows overlap of client computation with computation at the server; it also allows initial results to be processed early, instead of waiting for an entire batch to be processed at the database, which can lead to better response times for initial results. Further, asynchronous submission is applicable to Web services that do not support set-oriented access. On the other hand, pure asynchronous submission can lead to higher network overheads and extra cost at the database, as compared to batching. We present a technique, which we call asynchronous batching, that combines the benefits of asynchronous submission and batching. The following are the key contributions of this paper:
1) We show (in Section 3) how a basic set of program transformations, such as loop fission, enables complex programs to be rewritten to make use of asynchronous query submission. Although loop fission is a well-known transformation in compiler optimizations and batching, to the best of our knowledge no prior work shows its use for asynchronous submission of database queries.
2) Section 4 describes the design of our implementation. We first describe (in Section 4.1) the design challenges of such a program transformation tool. Since programmers may need to debug a rewritten version of their program, we present several techniques to make the rewritten program more readable. We then describe (in Section 4.2) the design of a framework that supports asynchronous query submission. Our framework provides a common API that can be configured to use either asynchronous submission or batching, or a combination of both.
3) In Section 5 we present extensions of the basic techniques described above. Specifically, we present (in Section 5.1) a modification of the code generated by the loop fission transformation that optimizes for response time by allowing early generation of initial results.
We also present (in Section 5.2) asynchronous batching, a novel technique that combines the benefits of asynchronous query submission and batching by combining, at run time, multiple pending asynchronous requests into one or more batched requests.
4) These techniques have been incorporated into the DBridge holistic optimization tool [2], [3] to optimize Java programs that use JDBC. We present (in Section 6) a detailed experimental study of the proposed transformations on several real-world applications. The experimental study shows significant performance gains due to our techniques.
This article is an extended version of our earlier conference paper [4]; the key additions made in this journal version are described in Section 7. The rest of the paper is organized as follows. A brief background on asynchronous submission models is given in Section 2. Sections 3 through 6 describe our key contributions, as outlined above. Related work is described in Section 7. We discuss possible extensions of our techniques in Section 8 and conclude in Section 9.

MODELS OF ASYNCHRONOUS CALLS
Two models are prevalent for coordinating asynchronous calls: the observer model and the callback model.
The Observer Model: In this model, the calling program explicitly polls the status of the asynchronous call it has made. When the results of the call are strictly necessary for any further computation, the calling program blocks until the results are available. The observer model is suitable when the results of the calls must be processed in the order in which the calls are made. Example 1 of Section 1 shows a program making use of the observer model to coordinate asynchronous query execution. We now formally define the semantics of the methods we use.
• executeQuery: Submits a query to the database system for execution, and returns the results. The call blocks until the query execution completes.
• submitQuery: Submits a query to the database system for execution, but the call returns immediately with a handle (without waiting for the query execution to finish).
• fetchResult: Given a handle to an already issued query execution request, this method returns the results of the query. If the query execution is in progress, this call blocks until the query execution completes.
The Callback Model: In this model, the calling program registers a callback function as part of the non-blocking call. When the request completes, the callback function is invoked to process the results of the call. The callback model is suitable when the program logic to process the call results is small and the order of processing the results is unimportant.
The program transformations presented in this paper make use of the observer model for asynchronous query submission. It is possible to extend the proposed approach to make use of the callback model for programs in which the order of processing the query results is unimportant. However, the details of such extensions are not part of this paper.
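As an illustration of how these observer-model primitives could be layered over JDBC using the java.util.concurrent Executor framework, consider the following sketch. The class and its signatures are our own illustration, not the actual API of the tool described later; rows are copied out of the JDBC ResultSet so that each worker's connection can be closed immediately.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative observer-model wrapper over JDBC: submitQuery returns a
// handle (a Future); fetchResult blocks only while the query is running.
class AsyncQueries {
    private final ExecutorService pool = Executors.newFixedThreadPool(10);
    private final String jdbcUrl;

    AsyncQueries(String jdbcUrl) { this.jdbcUrl = jdbcUrl; }

    Future<List<Object[]>> submitQuery(String sql, Object param) {
        return pool.submit(() -> {
            // One connection per submitted request (cf. Section 8).
            try (Connection con = DriverManager.getConnection(jdbcUrl);
                 PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setObject(1, param);
                try (ResultSet rs = ps.executeQuery()) {
                    List<Object[]> rows = new ArrayList<>();
                    int cols = rs.getMetaData().getColumnCount();
                    while (rs.next()) {
                        Object[] row = new Object[cols];
                        for (int i = 0; i < cols; i++)
                            row[i] = rs.getObject(i + 1);
                        rows.add(row);
                    }
                    return rows;
                }
            }
        });
    }

    List<Object[]> fetchResult(Future<List<Object[]>> handle) throws Exception {
        return handle.get();   // blocks until the query execution completes
    }
}
```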
BASIC TRANSFORMATIONS
Guravannavar et al. [1] present a set of program transformation rules to rewrite program loops so as to enable batched bindings for queries. In this section, we show how some of these transformation rules can be extended for asynchronous query submission. The program transformation rules we present, like the equivalence rules of relational algebra, allow us to repeatedly refine a given program. Applying a rule to a program involves substituting a program fragment that matches the antecedent (LHS) of the rule with the program fragment instantiated by the consequent (RHS) of the rule. Some rules facilitate the application of other rules, and together they achieve the goal of replacing a blocking query execution statement with a non-blocking statement. Applying any rule results in an equivalent program, and hence the rule application process can be stopped at any time. We omit a formal proof of correctness for our transformation rules, and refer the interested reader to [5]. Each program transformation rule has not only a syntactic pattern to match, but also certain pre-conditions to be satisfied. The pre-conditions make use of the inter-statement data dependencies obtained by static analysis of the program. Before presenting the formal transformation rules, we briefly describe the data dependence graph, which captures the various types of inter-statement data dependencies.

Data Dependence Graph
Inter-statement dependencies are best represented in the form of a data dependence graph [6] or a program dependence graph [7]. The Data Dependence Graph (DDG) of a program is a directed multi-graph in which program statements are nodes, and the edges represent data dependencies between the statements. The data dependence graph for the program of Example 2 is shown in Figure 1. The types of data dependence edges are explained below.
• A flow-dependence edge (FD) exists from statement (node) sa to statement sb if sa writes a location that sb may read, and sb follows sa in the forward control flow. For example, in Figure 1, a flow-dependence edge exists from node s2 to node s3 because statement s2 writes category and statement s3 reads it.
• An anti-dependence edge (AD) exists from statement sa to statement sb if sa reads a location that sb may write, and sb follows sa in the forward control flow. For example, in Figure 1, an anti-dependence edge exists from node s1 to node s2 because statement s1 reads categoryList and statement s2 writes it.
Rule A, the loop fission transformation presented in the next subsection, introduces a temporary table t with schema T and a record r; the schema T and the statement sequences ss′1 and ssr are constructed as follows (here s denotes the query execution statement at which the loop is split, and ss1 and ss2 the statement sequences before and after it, as in Rule A below). Let SV (split variables) be the set of variables for which either a loop-carried anti-dependence (LCAD) or a loop-carried output-dependence (LCOD) edge crosses the split boundaries (the edge is incident from ss2 to s or ss1, or from s to ss1).
1) Table t and record r have attributes corresponding to each variable in SV, and a key.
2) ss′1 is the same as ss1 but with additional assignment statements to attributes of r. Each write to a split variable v is followed by an assignment statement r.v = v;. If the write is conditional, then the newly added statement is also conditional on the same guard variable.
3) ssr is a statement sequence assigning attributes of r to corresponding variables. Each assignment in ssr is conditional; the assignment is made only if the attribute of r is non-null (assigned).
External reads and writes (queries and updates) can be modeled in a conservative manner. For instance, we could model the entire database (or file system) as a single program variable and thereby assume every query/read operation on a database/file to be conflicting with an update/write of the database/file. In practice, it is possible to perform a more accurate analysis on the external writes and reads.

Basic Loop Fission Transformation
Consider the program fragment shown in Example 2 and its rewritten form shown in Example 3. The key transformation that enables such a rewriting is loop fission (or loop distribution) [8]. Guravannavar et al.
[1] make use of loop fission to replace iterative query executions with a batched (or set-oriented) query execution. In this section, we show how the program transformation rules proposed in [1] can be extended for rewriting programs to make use of asynchronous calls. A formal specification of the transformation is given as Rule A. The LHS of the rule is a generic while loop containing a blocking query execution statement s; ss1 and ss2 are sequences of statements, which respectively precede and succeed the query execution statement in the loop body. The LHS of the rule then lists two pre-conditions, which are necessary for the rule to be applicable. The RHS of the rule contains two loops, the first one making asynchronous query submissions and the second one performing a blocking fetch followed by execution of the statements that process the query results (a sketch of this two-loop shape appears at the end of this subsection). Note that any number of query execution statements within a loop can be replaced by non-blocking calls by repeatedly applying the loop fission transformation. Although we present the loop fission transformation rule w.r.t. a while loop, variants of the same transformation rule can be used to split set iteration loops (such as the second loop in the RHS of Rule A). Rule A makes a fundamental improvement to the loop fission transformation proposed in [1]: it significantly relaxes the pre-conditions (see Rule 2 in [1]). For instance, Rule A allows loop-carried output dependencies to cross the split boundaries of the loop. This rule can also be applied to perform batching, thereby increasing its applicability. In general, our transformations are such that the resulting program can be used either for batching or for asynchronous submission, and this choice can be made at runtime. Our transformations in fact blur the distinction between batching and asynchronous submission, and can be used to achieve the best of both, as described in Section 5.2.

Applicability
The pre-condition that no loop-carried flow dependencies cross the point of split can limit the applicability of Rule A in several practical cases. Consider the program in Example 4. We cannot directly split the loop so as to make the query execution statement (s2) non-blocking, because there are loop-carried flow dependencies from statement s4 to s1 and to the loop predicate, which violate pre-condition (a) of Rule A. Statement s4, which appears after s1, writes a value, and statement s1 reads it in a subsequent iteration. Such cases are very common in practice (e.g., in most while loops the last statement affects the loop predicate, introducing a loop-carried flow dependency). However, in many cases it is possible to reorder the statements within a loop so as to make loop fission possible, without affecting the correctness of the program. For example, the statements within the loop of Example 4, if reordered as shown in Example 5, permit loop fission. Note that in the transformed program of Example 5 there are no loop-carried flow dependencies that prohibit the application of Rule A to split the loop at the query execution statement. An algorithm for statement reordering to enable loop fission, along with a sufficient condition for the applicability of the loop fission transformation, is given in [4].
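The following sketch shows the two-loop shape that Rule A produces for a loop in the spirit of Example 2. All names here are hypothetical; async denotes an observer-model wrapper with the Section 2 semantics (such as the AsyncQueries sketch above), Ctx plays the role of record r stored in the temporary table t, and exception handling is elided.

```java
// Record r: holds the captured split-variable values and the query handle.
class Ctx { String category; Future<List<Object[]>> handle; }

List<Ctx> t = new ArrayList<>();              // temporary table t
for (String category : categories) {          // first loop: ss1 + submission
    Ctx r = new Ctx();
    r.category = category;                    // capture split variable
    r.handle = async.submitQuery(
        "SELECT count(*) FROM item WHERE category_id = ?", category);
    t.add(r);
}
for (Ctx r : t) {                             // second loop: blocking fetch + ss2
    long partCount =
        ((Number) async.fetchResult(r.handle).get(0)[0]).longValue();
    process(r.category, partCount);           // hypothetical result processing
}
```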
Further, Rule A is also not directly applicable when the query execution statement lies inside a compound statement such as an if-then-else block. We now present additional transformation rules which can be used to address this restriction.

Control Dependencies
We handle control dependencies using the approach of [1]. Consider the initial program shown in Example 6. The query execution statement appears in a conditional block. This prohibits direct application of Rule A to split the loop at the program point immediately following the query execution statement. Conditional branching (if-then-else) and while loops lead to control dependencies. If the predicate evaluated at a conditional branching statement s1 determines whether or not control reaches statement s2, then s2 is said to be control dependent on s1. During loop fission, it may be necessary to convert the control dependencies into flow dependencies [8], by introducing boolean variables and guard statements. We define a transformation rule to perform this conversion. The formal specification of the transformation, called Rule 4 in [1], is shown as Rule B in this paper. An if-then-else block is transformed into an assignment of the value of the predicate p to a boolean variable cv, followed by a sequence of statements guarded by the value (or the negation) of the boolean variable cv. In Example 6, we apply Rule B and introduce a boolean variable c to remember the result of the predicate evaluation, and then convert the statements inside the conditional block into guarded statements. We can then apply Rule A and split the loop, as shown in the last part of Example 6.

Nested Loops
A query execution statement may be present in an inner loop that is nested within an outer loop. In such a case, it may be possible to split both the inner and the outer loops, thereby increasing the number of asynchronous query submissions before a blocking fetch is issued. To achieve this, we first split the inner loop and then the outer loop. Such a transformation is illustrated in Example 7. Note that the temporary table introduced during the inner loop's fission becomes a nested table for the temporary table introduced during the outer loop's fission. As the idea is straightforward, we omit a formal specification of this rule.
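As a compact end-to-end illustration of Rule B followed by Rule A, consider the hypothetical fragment below; interesting, runCountQuery, COUNT_SQL, and the Ctx record (here extended with a boolean guard field) are all illustrative names, not from the paper's examples.

```java
// Original loop: the blocking query sits inside a conditional (cf. Example 6).
while (itr.hasNext()) {
    String cat = itr.next();
    if (interesting(cat))
        total += runCountQuery(cat);                 // blocking call
}

// After Rule B (guard variable c) and Rule A (loop fission):
List<Ctx> t = new ArrayList<>();
while (itr.hasNext()) {
    Ctx r = new Ctx();
    String cat = itr.next();
    r.c = interesting(cat);                          // Rule B: predicate remembered
    if (r.c) r.handle = async.submitQuery(COUNT_SQL, cat);  // guarded submission
    t.add(r);
}
for (Ctx r : t)
    if (r.c)                                         // guarded consumer statement
        total += ((Number) async.fetchResult(r.handle).get(0)[0]).longValue();
```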
SYSTEM DESIGN AND IMPLEMENTATION
The techniques we propose can be used with any language and data access API. We have implemented these ideas and incorporated them into the DBridge tool [2], [3]. A system that supports asynchronous query submission includes two main components: (i) a source-to-source program transformer, and (ii) a runtime asynchronous submission framework. The runtime infrastructure also supports asynchronous batching, a technique for batching asynchronous requests, described in Section 5.2. We now describe each of these components in detail.

Program Transformer
Our rewrite rules can conceptually be used with any language. We chose Java as the target language and JDBC as the interface for database access. To implement the rules we need to perform data flow analysis of the given program and build the data dependence graph. We used the SOOT optimization framework [9]. SOOT uses an intermediate code representation called Jimple and provides dependency information on Jimple statements. Our implementation transforms the Jimple code using the dependence information. Finally, the Jimple code is translated back into a Java program. The important phases in the program transformation process are shown in Figure 2. The main task of our program transformation tool appears in the Apply Async Trans Rules phase. The program transformation rules are applied in an iterative manner, updating the data flow information each time the code changes. The rule application process stops when all (or the user-chosen) query execution statements which do not lie on a true-dependence cycle are converted to asynchronous calls. Our tool has been implemented with the following design goals: 1) readability of the transformed code, 2) robustness for variations in intermediate code, and 3) extensibility.

Since our program transformations are source-to-source, maintaining readability of the transformed code is important. We achieve this goal through several measures. (a) The transformed code mostly uses standard JDBC calls and very few calls to our custom runtime library. This is achieved by providing a set of JDBC wrapper classes. The JDBC wrapper classes and our custom runtime library hide the complexity of asynchronous calls. (b) When we apply Rule B followed by Rule A to split a loop, the resulting code will have many guarded statements. This leads to a very different control structure as compared to the original program. We therefore introduce a pass where such guarded statements are grouped back in each of the two generated loops, so that the resulting code resembles the original code.

The intermediate code has the advantage of being simple and suitable for data-flow analysis, but it makes the task of recognizing desired program patterns difficult. Each high-level language construct translates to several instructions in the intermediate representation. We have designed our program transformation tool for robust matching of desired program fragments. The tool can handle several variations in the intermediate (Jimple) code. One of our design goals has been extensibility. Each of the transformation rules has been coded as a separate class. Application of any transformation rule independently must preserve the correctness of the program. Such a design makes it easy to add new program transformation rules.

Runtime Asynchronous Submission Framework
The runtime library works as a layer between the actual data access API (such as JDBC) and the application code. It provides asynchronous submission methods in addition to wrapping the underlying API. Features such as thread management and cache management are handled by this library. The transformed programs in our implementation use the Executor framework of the java.util.concurrent package for thread scheduling and management [10]. Figure 3 shows the behaviour of the asynchronous submission API. The first loop in the transformed program submits the query to a queue in every iteration. The stmt.addBatch(ctx) invocation is a non-blocking query submission, with the same semantics as the submitQuery API described in Section 2. The second loop accesses the results corresponding to the loop context using stmt.getResultSet(ctx), which has the same semantics as the fetchResult API described in Section 2. Subsequently, it executes statements that depend on the query results. The LoopContextTable ensures the following: (i) it preserves the order of execution between the two loops, and (ii) for each iteration of the first loop, it captures the values of all variables updated, and restores those values in the corresponding iteration of the second loop.
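The fragment below paraphrases the shape of the transformed code around this API; the method signatures on LoopContextTable and LoopContext are illustrative rather than the exact DBridge interfaces.

```java
LoopContextTable lct = new LoopContextTable();
while (itr.hasNext()) {                          // producer loop
    LoopContext ctx = lct.createContext();
    String category = itr.next();
    ctx.setString("category", category);         // capture split-variable value
    stmt.setString(1, category);
    stmt.addBatch(ctx);                          // non-blocking submission
}
for (LoopContext ctx : lct) {                    // consumer loop, same order
    String category = ctx.getString("category"); // restore captured value
    ResultSet rs = stmt.getResultSet(ctx);       // blocks if still executing
    while (rs.next())
        process(category, rs.getInt(1));
}
```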
EXTENSIONS AND OPTIMIZATIONS
We now describe two extensions to our basic technique of asynchronous query submission. These extensions can significantly improve performance, as shown by our experiments.

Overlapping the Generation and Consumption of Asynchronous Requests
Consider the basic loop fission transformation, Rule A. A loop transformed using this rule results in two loops: the first generates asynchronous requests (hereafter referred to as the producer loop), and the second processes, or consumes, results (hereafter referred to as the consumer loop). According to Rule A, the processing of query results (the consumer loop) starts only after all asynchronous submissions are completed, i.e., after the producer loop completes. Although this transformation significantly reduces the total execution time, it results in a situation where results start appearing much later than in the original program. In other words, for a loop of n iterations, the time to the k-th response (1 ≤ k ≤ n) for small k is higher than in the original program, even though the time may be less for larger k. This could be a limitation for applications that need to show some results early, or that only fetch the first few results and discard the rest. This limitation can be overcome by overlapping the consumption of query results with the submission of requests. The transformation Rule A can be extended to run the producer loop (the loop that makes asynchronous submissions) as a separate thread. That is, the main program spawns a thread to execute the producer loop, and continues onto the consumer loop immediately. Since the loop context table (table t in Rule A) may be empty when the consumer loop starts, and may get more tuples as the consumer loop progresses, we implement the loop context table as a blocking (producer-consumer) queue. The producer thread submits requests onto this queue, which are picked up by the consumer loop. Note that this transformation is safe, and does not lead to race conditions, since there are no data dependences between the producer and consumer loops other than through the loop context table. This is because the values of all variables updated in the producer loop are captured, and restored in the consumer loop, via the loop context table. The blocking queue implementation of the loop context table avoids race conditions on the table. The details of this extension are straightforward and hence a formal specification is omitted. We evaluate the benefits of this extension and show the results in Section 6.3.
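A minimal sketch of this arrangement, using a standard java.util.concurrent blocking queue as the loop context table (names hypothetical; a sentinel object marks the end of the producer's output, and the enclosing method is assumed to declare throws InterruptedException):

```java
BlockingQueue<Ctx> queue = new LinkedBlockingQueue<>();
Ctx DONE = new Ctx();                       // sentinel: no more requests

Thread producer = new Thread(() -> {        // producer loop in its own thread
    for (String category : categories) {
        Ctx r = new Ctx();
        r.category = category;
        r.handle = async.submitQuery(COUNT_SQL, category);
        queue.add(r);                       // unbounded queue: never blocks
    }
    queue.add(DONE);
});
producer.start();

Ctx r;                                      // consumer loop starts immediately
while ((r = queue.take()) != DONE)          // blocks until a request arrives
    process(r.category, async.fetchResult(r.handle));
```

A bounded queue (using put in place of add) would additionally give the producer the memory-based back-off discussed in Section 8.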
Asynchronous Submission of Batched Queries
As mentioned earlier, the transformation rules proposed in this paper can be used for either batching or asynchronous submission. However, there are key differences in the approaches, due to which their performance characteristics vary. In this section we compare the relative benefits and drawbacks of batching and asynchronous query submission, and propose a new strategy that combines the benefits of both.

Asynchronous Submission vs. Batching
The following are some of the drawbacks of batching as compared to asynchronous submission:
• Although batching reduces round-trip delays and allows efficient set-oriented execution of queries, it does not overlap client computation with that of the server, as the client blocks after submitting the batch.
• Although batching reduces the overall execution time of the program, for initial results it typically results in a worse response time, since the result of the first query is available only when the result set of the large batch is returned.
• Since batching retrieves the results for the whole loop at once, it may significantly increase the memory requirement at the client.
• Batching may not be applicable when there is no (efficient) set-oriented interface for the request invoked.
The asynchronous query submission technique presented in Section 3 avoids the problems mentioned above for batching, but has a few drawbacks of its own, as compared to batching:
• Asynchronous query submission does not reduce the number of network round trips but only overlaps them. This may increase network congestion.
• The database still receives individual queries, and hence this may result in a lot of random IO at the database.
• As a result of the above, whenever batching is applicable and the number of iterations of the loop is large, batching leads to much better performance improvements (in terms of total execution time) than asynchronous submission. More details are given in our experiments in Section 6.3.
We now describe how to combine both these approaches.

Asynchronous Batching: Best of Both Worlds
Batching and asynchronous submission can be seen as two ends of a spectrum. Batching, at one end, combines all requests into one big request with no overlapping execution, whereas asynchronous submission retains individual requests as is, while completely overlapping their execution. Clearly, there is a range of possibilities between these two, which can be achieved by asynchronous submission of multiple, smaller batches of queries. This approach, which we call asynchronous batching, retains the advantages of batching and asynchronous submission, while avoiding their drawbacks. Consider the example in Figure 3. As mentioned earlier, the first loop in this program submits a query to a queue in each iteration. This request queue is monitored by a thread pool. In pure asynchronous submission, each free thread picks up an individual request from the queue. In contrast, with asynchronous batching, the thread can observe the whole queue, and pick up one, or more, or all requests from the queue. If a thread picks up a single request, it executes the query as described earlier. However, if a thread picks up more than one request, it performs a query rewrite as done in batching, and executes those requests as a batch. Once the result of the batch arrives, it is split into multiple result sets corresponding to each individual query, which are then placed in the cache. Asynchronous batching aims to achieve the best of batching and asynchronous submission, since it has the following characteristics:
• Like batching, it reduces network round trips, since multiple requests may be batched together.
• Like asynchronous submission, it overlaps client computation with that of the server, since batches are submitted asynchronously.
• Like batching, it reduces random IO at the database, due to the use of set-oriented plans.
• Although the total execution time of this approach might be comparable to that of batching, this approach results in a much better response time, comparable to asynchronous submission, since the results of queries become available much earlier than in batching.
• Memory requirements do not grow as much as with pure batching, since we deal with smaller batches.
The key challenge in engineering such a system is to identify the sweet spot in the spectrum between batching and asynchronous submission. This primarily involves deciding the size of each batch and the number of threads to use, so as to obtain the best performance.
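In outline, each worker thread could behave as follows. This is a sketch of the design just described, not the tool's actual implementation; Request, decideBatchSize, executeIndividually, and executeAsBatch are hypothetical helpers (a concrete decideBatchSize is sketched in Section 5.2.3 below).

```java
void workerLoop(BlockingQueue<Request> queue) throws InterruptedException {
    while (true) {
        List<Request> picked = new ArrayList<>();
        picked.add(queue.take());                 // wait for at least one request
        int more = decideBatchSize(queue.size() + 1) - 1;  // strategy decision
        if (more > 0)
            queue.drainTo(picked, more);          // pick up additional requests
        if (picked.size() == 1)
            executeIndividually(picked.get(0));   // plain asynchronous execution
        else
            executeAsBatch(picked);               // insert parameters, run the
                                                  // rewritten set-oriented query,
                                                  // split results per request
    }
}
```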
Neither of these parameters can be chosen statically during program transformation, since they depend on runtime factors such as (i) the number of iterations in the loop, (ii) the query processing time and the size of its results, (iii) the capacity of, and load on, the client machine and the database server, and (iv) network bandwidth availability. Asynchronous batching is a completely runtime decision; the program transformation is performed in accordance with the rewrite rules in this paper, and requires no additional rewriting. The runtime library makes decisions on asynchronous calls vs. partial batching in a dynamic fashion. We now discuss strategies to tune parameters for asynchronous batching.

Adaptive Tuning of Parameters
The runtime library is extended to allow a thread to pick up one or more requests from the queue. However, the key problem is the following: given a queue of n requests, how many requests should a free thread pick up? Note that the only information available to a thread is the current state of the queue. In order to simplify our discussion, we make a few performance-related assumptions:
• The database is not under heavy load and is able to handle concurrent requests efficiently.
• The number of threads T available on the client is fixed.
• The network characteristics do not vary drastically during program execution.
Given these assumptions, we now propose strategies to automatically vary the batch size at runtime. These strategies are affected by the following metrics:
1) The request arrival rate: This is the rate at which the program submits requests onto the queue. In our example of Figure 3, the first loop submits one request per iteration, so the arrival rate essentially captures the time taken by each iteration of the first loop before submitting a request. If there are expensive operations in this loop, such as remote calls, they affect the request arrival rate.
2) The request processing rate: This is the rate at which requests in the queue are processed. Processing a request includes the query processing time at the database and the network round-trip time. Since we only consider cases where the query is the same in each iteration, with varying parameter values, we assume that the query processing time is the same for each request.
The request arrival rate would be higher than the request processing rate if any of the following hold: (a) the producer loop has no expensive operations, (b) network round trips are very expensive, or (c) the query processing time is high. We now propose three possible strategies for asynchronous batching.

One-or-all Strategy: This is a simple strategy to combine asynchronous submission and batching. Given a queue with n requests, the one-or-all strategy for a free thread is as follows. If n = 1, pick up the request from the queue and execute it as an individual request. If n > 1, pick up all n requests in the queue and batch them; in other words, (i) insert the parameters of the n requests into a temporary parameter table, (ii) rewrite the query using the technique given in [1], and (iii) execute this rewritten query. If n = 0, wait for new requests. In this strategy, a free thread always clears the queue by picking up all pending requests.

Lower Threshold Strategy: The one-or-all strategy can be improved based on an observation regarding batching. Batching results in three network round trips, one each for (a) inserting parameters into a temporary table, (b) executing the batched query, and (c) clearing the temporary table.
In fact, each thread incurs another round trip when batching for the first time, in order to create the temporary table. This means that the time taken to process one batch is roughly equivalent to the time taken to process at least three individual requests sequentially, since there are three network round trips and three queries executed for every batch. We verified this in our experiments, and found that very small batches perform poorly as compared to asynchronous submission. Therefore, we use the following strategy. We define a batching threshold bt ≥ 3. If n > bt, then pick up all n requests in the queue and batch them. If 1 ≤ n ≤ bt, then pick up one request from the queue and execute it as an individual request. If n = 0, wait for new requests. Observe that in this strategy, a free thread does not necessarily clear the queue. Consider the situation where the request arrival rate is higher than the request processing rate. In this setting, the first few (about T) requests would be sent as individual requests asynchronously. Since the queue builds up much faster than it is consumed, after the first few iterations the requests would be submitted in batches of increasing size. On the other hand, consider the case where the request processing rate is higher than the rate of arrival of requests onto the queue. In this situation, the queue would not grow in size, since the requests keep getting consumed at a higher rate, and hence n would remain below (or close to) the batching threshold. This implies that most requests would be sent individually, mimicking the behaviour of asynchronous query submission. Thus we can see that the lower threshold strategy is actually quite adaptive: batch sizes vary in accordance with the queue size, which in turn depends upon the arrival rate of requests, the rate at which requests get processed, and the number of threads working concurrently on processing requests.

Growing Upper-Threshold Strategy: Although the above approach improves response time and adapts the batch size according to the queue size, in situations where the arrival rate of requests is high, it may lead to a single large batch being submitted while the remaining threads are idle. This could lead to a slower response time for initial results, since the database would take a longer time to process a large batch, and to higher memory consumption due to a large request queue, although the larger batch size may reduce overall work at the database server, and reduce the time to process all requests. For applications that need better response times for initial results, we use an upper-threshold strategy. We use a growing upper threshold that bounds the maximum batch size. This upper threshold is not a constant: it is initially small, so that batch sizes are small initially, but grows as more requests are submitted, so that response times for later results are not unduly affected by very small batch sizes. The growing upper-threshold strategy works as follows. If the number of requests in the queue is less than the current upper threshold, all requests in the queue are added to a single batch.
However, if the number of requests in the queue is more than the current upper threshold, the batch that is generated has size equal to the current threshold, and for future batches the upper threshold is increased; in our current implementation of the growing upper-threshold strategy, we double the upper threshold whenever a batch of size equal to the current upper threshold is created. Note that the upper-threshold strategy is orthogonal to the lower-threshold strategy, and each may be used with or without the other.
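Fleshing out the hypothetical decideBatchSize helper from the earlier sketch, the lower-threshold and growing upper-threshold strategies combine into a single decision; the constants shown mirror the settings used in the experiments of Section 6.3.

```java
int lowerThreshold = 100;   // bt: at or below this, send requests individually
int upperThreshold = 200;   // initial cap for the growing strategy

// n = number of pending requests; returns how many one thread should pick up.
int decideBatchSize(int n) {
    if (n <= lowerThreshold) return 1;   // behave like pure asynchronous submission
    if (n < upperThreshold) return n;    // batch the whole queue
    int size = upperThreshold;           // cap this batch at the threshold...
    upperThreshold *= 2;                 // ...and double the cap for later batches
    return size;
}
```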
EXPERIMENTAL RESULTS
We have conducted a detailed experimental evaluation of our techniques using the DBridge tool. In Section 6.1, we present our experiments on asynchronous query submission and its benefits. Next, in Section 6.3, we compare basic asynchronous submission with the extensions and optimizations described in Section 5, and discuss the results.

Asynchronous Query Submission
For evaluating the applicability and benefits of the proposed transformations, we consider four Java applications: two publicly available benchmarks (which were also considered by Manjhi et al. [11]) and two other real-world applications we encountered. Our current implementation does not support all the transformation rules presented in this paper, and does not support exception handling code. Hence, in some cases part of the rewriting was performed manually, in accordance with the transformation rules. We performed the experiments with two widely used database systems: a commercial system we call SYS1, and PostgreSQL. The SYS1 database server was running on a 64-bit dual-core machine with 4 GB of RAM, and PostgreSQL was running on a machine with two Xeon 3 GHz processors and 4 GB of RAM. Since disk IO is an important parameter that affects the performance of applications, we report the results for both warm cache and cold cache. The Java applications were run from a remote machine connected to the database servers over a 100 Mbps LAN. The applications used the JDBC API for database connectivity. The cache of results was maintained using the ehcache library [12].

Experiment 1: Auction Application: We consider a benchmark application called RUBiS [13] that represents a real-world auction system modeled after ebay.com. The application has a loop that iterates over a collection of comments, and for each comment loads the information about the author of the comment. The comments table had close to 600,000 rows, and the users table had 1 million rows. First, we consider the impact of our transformations as we vary the number of loop iterations (by choosing user ids with an appropriate number of comments). Figure 4 shows the performance of this program before and after the transformations, with warm and cold caches, in log scale. The y-axis denotes the end-to-end time taken for the loop to execute, which includes the application time and the query execution time. For a small number of iterations, the transformed program is slower than the original program: the overhead of thread creation and scheduling overshoots the query execution time. However, as the number of iterations increases, the benefits of our transformations increase. For the case of 40,000 iterations, we see an improvement of a factor of 8. Next, we keep the number of iterations constant (at 40,000) and vary the number of threads. The results of this experiment are shown in Figure 5. The execution time (for both the warm and cold cache) drops sharply as the number of threads is increased, but gradually reaches a point where the addition of threads does not improve the execution time. The results of the above experiment on PostgreSQL follow the same pattern as in the case of SYS1; the results are given in [4].

Experiment 2: Bulletin Board Application: RUBBoS [13] is a benchmark bulletin-board-like system inspired by slashdot.org. For our experiments we consider the scenario of listing the top stories of the day, along with details of the users who posted them. Figure 6 shows the results of our transformations with different numbers of iterations. Although the transformed program takes slightly longer for a small number of iterations, the benefits increase with the number of iterations (note the log scale of the y-axis).

Experiment 3: Category Traversal: The TPC-H part table, augmented with a new column category-id and populated with 10 million rows, was used as the item table. The category table had 1000 rows: approximately 900 leaf-level, 90 middle-level, and 10 top-level categories. A clustering index was present on the category-id column of the category table, and a secondary index was present on the category-id column of the item table. Figure 7 shows the performance of this program before and after applying our transformation rules. As in the earlier example, we first fix the number of threads and vary the number of iterations. We perform this experiment with ten threads, on a warm cache on SYS1. The results are in accordance with our earlier experiments. In addition, we observe that the number of threads is an important parameter in such scenarios. This parameter is influenced by several factors, such as the number of processor cores available to the database server and the client, the load on the database server, the amount of disk IO, CPU utilization, etc. When the program is run with a cold cache, the amount of disk IO involved in running the queries is substantially higher than with a warm cache. But the bottleneck of disk IO can be reduced by issuing overlapping requests; such overlapping query submissions enable the database system to choose plan strategies such as shared scans. The effect of varying the number of threads shows trends similar to those of Experiment 1, though the actual numbers differ. The results can be found in [4]. In transforming this program, the reordering algorithm was first applied and then the loop was split using Rule A.

Experiment 4: Web Service Invocation: Although we presented our program transformation techniques in the context of database queries, the techniques are more general in their applicability, and can be used with requests such as Web service calls. In this experiment, we consider an application that fetches data about directors and their movies from Freebase [14], a social database about entities, spanning millions of topics in thousands of categories. It is an entity graph which can be traversed using an API built using JSON over HTTP. The client application, written in Java, retrieves the movie and actor information for all actors associated with a director. Such applications usually require the execution of a sequence of queries from within a loop because (a) operations such as joins are not possible directly, and (b) the Web service API may not support set-oriented queries. Since our current implementation supports only the JDBC API, we manually applied the transformations to the code that executes the Web service requests.
The results of this experiment are shown in Figure 8. As we vary the number of threads, overlapping HTTP requests are made by the client application, which saves on network round-trip delays. Since our experiment used the publicly available Freebase sandbox over the Internet, the actual time taken can vary with network load; however, we expect the relative improvement of the transformed program to remain the same.

Time Taken for Program Transformation: Although the time taken for program transformation is usually not a concern (as it is a one-time activity), we note that in our experiments the transformation took very little time (less than a second).

Applicability of Transformation Rules
In order to evaluate the applicability of our transformation rules, we consider the two publicly available benchmark applications used above: the auction application and the bulletin board application. For each of these, we have analyzed the source code to find out (a) how many opportunities for transformation exist, and (b) how many of them are actually transformed by our rules; the results are summarized in Table 1. We consider all kinds of loop structures which include a query execution statement in the loop body as potential opportunities (# Opportunities). Among such potential opportunities, those which satisfy the preconditions for our rules are exploited (# Transformed); this involves reordering of statements in many situations. We see that all such opportunities present in the auction system indeed satisfy the preconditions and can be transformed. Although our preconditions are more general than those proposed in [1], the opportunities encountered satisfied both. In the bulletin board application, a few of the loops performed recursive method invocations, which prevents them from being transformed. The remaining programs seen earlier were too small for this analysis, and are hence omitted.

Effect of Optimizations
We have performed experiments to compare the following approaches: (i) Original: the original program; (ii) Batch: the program rewritten using query batching; (iii) Asynch: the program rewritten according to our technique of asynchronous submission; (iv) Asynch Batch: our technique of combining batching and asynchronous submission (Section 5.2), using the simple threshold-based strategy; (v) Asynch Overlap: asynchronous submission with concurrent generation of requests (Section 5.1); (vi) Asynch Batch Overlap: asynchronous batching with concurrent generation of requests; and (vii) Asynch Batch Grow: asynchronous batching with concurrent generation of requests and the growing upper-threshold strategy. Our current implementation does not support the Asynch Overlap transformation, and hence we have rewritten the code manually as described in Section 5.1. The experiments have been conducted on a widely used commercial database system, SYS3. The SYS3 database server was running on a 64-bit 2.3 GHz quad-core machine with 4 GB of RAM. The Java applications were run from a remote 3.3 GHz quad-core machine connected to the database server over a 100 Mbps LAN. For approach (iv), we used a lower batching threshold of 300 with 48 threads, and for approach (vii), we used a doubling growth rate for the upper threshold. We again consider the benchmark auction application RUBiS [13] and the scenario described in Experiment 1 of Section 6.1.

Total Execution Time
First, we compare the total execution time of this program according to approaches (i) through (iv), since the optimizations in (v), (vi) and (vii) have minimal impact on the total execution time.
The results of this experiment with cold cache are shown in Figure 9. The x-axis shows the number of iterations and the y-axis shows the total execution time in milliseconds. It can be observed from Figure 9 that at smaller numbers of iterations all approaches behave very similarly, and differences emerge at larger numbers of iterations. Asynchronous submission (with 12 threads) gives about a 50% improvement, while batching leads to about 75% at 40,000 iterations. Asynchronous batching, with 48 threads and a lower batching threshold of 300, leads to about a 70% improvement. At 40,000 iterations, we have recorded the behaviour of one run of asynchronous batching, shown in Figure 10. The x-axis shows the number of requests (either batched or individual), and the y-axis shows the batch sizes in log scale. Overall, there were 38 batch submissions and 645 asynchronous submissions, and among the 38 batches the average batch size was 1019. Initially, many requests are sent individually, since the lower batching threshold was set to 300. But the queue builds up quite fast and hence there are a few intermittent batch submissions. As the execution progresses, there are more and more batch submissions, and batch sizes also start growing; towards the end, there are batches of up to 10,000 requests. This behaviour is in accordance with our expectation as described in Section 5.2.

Time to k-th Response
Next, we compare the response time of the program according to approaches (i) through (vii) described earlier. Here, by response time we mean the duration between the start of the program and the arrival (or the output) of the k-th response from the program. In our auction system experiment, records are printed when the information about the author of a comment is retrieved. Therefore, the response time is measured at the instant where the author information of the k-th comment is output. We fix the number of iterations at 40,000, and record the time taken for the k-th response, with k varying from 1 to 40,000. The results of this experiment are shown in Figure 11. The x-axis shows the response number k, and the y-axis shows time in milliseconds. For this experiment, the Asynch Batch Grow approach used a lower batching threshold of 100, and an upper threshold that doubles, initially set to 200. The original program has the best response time initially. However, its response time increases quite steeply with k, and reaches about 31 seconds for the 40,000th response. Batching, in contrast, has a constant curve. This is because even the first response is output only after (i) all parameters are added to the parameter batch table, and (ii) the transformed (set-oriented) query is executed. Essentially, the time to k-th response in batching is very close to the total execution time, since all the results are returned together. Asynch starts off with a better (lower) response time as compared to batching, but increases beyond batching for larger values of k. Asynchronous batching initially behaves similarly to asynchronous submission, and slowly deviates from it; at larger numbers of iterations it behaves more like batching. In other words, it always tends towards the better of Asynch and Batch. The Overlap versions of Asynch and Asynch Batch show much better response times compared to the earlier approaches. The Asynch Batch Grow approach behaves the best in balancing response time against total execution time.
It initially shows response times similar to the original program, and does even better than Asynch and Batch at larger iterations. At k = 40,000, it results in a response time comparable to Batch.

Discussion
In summary, our experimental study shows that batching and asynchronous submission are beneficial techniques with different trade-offs, and the combined technique of asynchronous batching with optimizations aims at balancing these trade-offs. Some of the trade-offs are (a) total execution time vs. time to k-th response, (b) reducing network round trips (by batching multiple requests) vs. overlapping execution of queries, and (c) reducing memory consumption (by using iterative query execution) vs. set-oriented execution of the query. These trade-offs are essentially controlled by the parameters used in asynchronous batching, such as the batching threshold, the number of threads, etc. Based on the use case, the parameters have to be tuned in order to achieve the desired behaviour. Our contribution in this paper has been to expose these trade-offs to the developer, and to allow manual tuning of such parameters. We have also presented some initial approaches for automatic tuning of parameters. Although our approaches are quite adaptive, as described in Section 5.2.3, we believe that there is scope for more work in this area.

RELATED WORK
Most operating systems today allow applications to issue asynchronous IO requests [15]. Asynchronous calls are also used for data prefetch and overlapping operator execution inside query execution engines [16], [17], [18]. Asynchronous calls have also been used to hide memory access latency by issuing prefetch requests [19]. Asynchronous calls are widely used in the communication between the web browser and the server using manually placed AJAX requests. Yeung [20] proposes an approach to automatically optimize distributed applications written using Java RMI, based on the concept of deferred execution, where remote calls are delayed for as long as possible. Such delaying enables optimizations such as call aggregation, server forwarding, etc. However, this work does not consider asynchronous calls or query executions within loops. Dasgupta et al. [21] and Chaudhuri et al. [22] propose an architecture and techniques for a general static analysis framework to analyze database application binaries that use the ADO.NET API, with the goals of identifying security, correctness and performance problems. Like these approaches, we too use static analysis, but specifically for optimization, by introducing asynchronous prefetching of query results. There has been work on prediction-based prefetching of query results [23], by analyzing logs and trace files, but this work does not consider asynchronous prefetching. There has been very recent work on automatic partitioning of database applications by Cheung et al. [24], with the goal of eliminating the many small, latency-inducing round trips between the application and database servers. However, their approach does not exploit the opportunities that arise due to program transformations, and the overlapping of computation by asynchronous submission of queries. Guravannavar et al. [1] consider rewriting loops in database applications and stored procedures, to transform iterative executions of queries into a single execution of a set-oriented form of the query. We use a similar framework of program transformation, but for asynchronous query submission.
While our transformation rules are based on [1], we make the following novel contributions. First, we show how the transformation rules presented in [1] in the context of batching can be adapted for asynchronous query submission. Second, we describe an extension to our transformation that enables overlapping of generation and consumption of asynchronous requests, thereby greatly improving the response time. Third, we present a technique to combine batching and asynchronous query submission into a common framework. Also, we describe an infrastructure to support asynchronous query submission, and the challenges and trade-offs in designing and implementing such an infrastructure. Manjhi et al. [25] consider prefetching of query results by employing non-blocking database calls, made at the beginning of a function. A blocking call is subsequently issued when the results of the query are needed, and this call is likely to take much less time, as the query results would already be computed and available in the cache. However, they do not describe details to automate this task, and also do not consider loops and procedure invocations. Ramachandra et al. [26] propose a technique to insert prefetch requests for queries/web services at the earliest possible point in the program, across procedure invocations. However, they do not consider loop transformations for queries within loops while exploiting opportunities for prefetching, and this forms the main focus of this paper. The approaches described in [25], [26] integrate well with our technique of transforming programs to enable asynchronous submission of queries. Consider cases where a loop invokes a procedure which in turn executes a query. Such cases are quite common in applications backed by object-relational mappers such as Hibernate [27]. They can be optimized by first applying the prefetching technique described in [26], which brings the prefetch instruction directly into the loop; subsequently, the loop transformations presented in this paper can be applied. The present article is an extended version of the conference paper [4], with the following key differences. The extensions presented in Section 5 and the related performance experiments in Section 6.3 are entirely novel, as are most of the system design issues in Section 4 and the discussions in Section 8. Due to lack of space, we have omitted details of statement reordering to improve the applicability of our transformations, and a few of the experimental results from [4].

EXTENSIONS
We now discuss some system design considerations and extensions of the techniques described in this paper.

Ensuring transaction properties: In our implementation, we have used one connection per thread in order to achieve overlapping query execution. This is because in JDBC, (a) a database connection allows only one open query at a time, and (b) there are no API methods that allow asynchronous submission. ADO.NET provides an asynchronous API (the BeginExecuteReader and EndExecuteReader methods), which allows overlapping of query execution with local computation. However, even these APIs do not support overlapping query executions through a single connection. In order to fully preserve transaction properties and achieve true asynchronous submission, individual threads in the thread pool should be part of a single shared transaction. Such an infrastructure is not currently supported by any database vendor, to the best of our knowledge.
Although databases support distributed transactions (such as JDBC XA transactions), their goal is to allow transactions across multiple data sources. One way to implement this (if snapshot queries are supported) is to allow multiple connections to share a snapshot point. Such a feature, if supported, would allow multiple threads (with their own connections) to share and execute transactions on the same snapshot. We believe that this would be a minor change in databases that already support snapshot isolation, and would be a useful feature to have. Such built-in support would not only simplify application development, but also lead to a significant improvement in performance, as compared to our current implementation. Rewriting loops containing update transactions needs to consider dependencies between update statements and program variables. A conservative approach is to assume that update statements are dependent on other update or select statements in a loop, and to model them as data dependencies which factor into the preconditions for our transformation rules. This can be improved by using more precise inter-query dependence analyses [28]. Minimizing memory overheads: If the number of loop iterations is large, the transformed program may incur high memory overhead in order to store the handle and the state associated with each iteration. Storing such state on disk increases the IO cost. Our technique can be extended such that, based on memory usage, the producer thread backs off and waits while results are consumed and memory freed, and then generates more requests. Which calls should be transformed?: It may not be beneficial to transform every blocking query submission call to a non-blocking call. From our experimental study it is also evident that, given a query execution statement, the benefit to be achieved by converting it to a non-blocking call depends on the number of iterations and other system parameters. In our current implementation we assume that the user can specify which query submission statements are to be transformed. Making this decision in a cost-based manner is a topic of future work.

CONCLUSION

We propose a program analysis and transformation based approach to automatically rewrite database applications to exploit the benefits of asynchronous query submission. The techniques presented in this paper significantly increase the applicability of known techniques to address this problem. We also described a novel approach to combine asynchronous submission with our earlier work on batching, in order to achieve a balance between the trade-offs of batching and asynchronous query submission. Although our program transformations are presented in the context of database queries, the techniques are general in their applicability, and can be used in other contexts such as calls to Web services, as shown by our experiments. We presented a detailed experimental study, carried out on real-world and publicly available benchmark applications. Our experimental results show performance gains to the extent of 75% in several cases. Finally, we identify some interesting directions along which this work can be extended.
A bi-criteria evolutionary algorithm for a constrained multi-depot vehicle routing problem

Most research about the vehicle routing problem (VRP) does not collectively address many of the constraints that real-world transportation companies have regarding route assignments. Consequently, our primary objective is to explore solutions for real-world VRPs with a heterogeneous fleet of vehicles, multi-depot subcontractors (drivers), and pickup/delivery time window and location constraints. We use a nested bi-criteria genetic algorithm (GA) to minimize the total time to complete all jobs with the fewest number of route drivers. Our model will explore the issue of weighting the objectives (total time vs. number of drivers) and provide Pareto front solutions that can be used to make decisions on a case-by-case basis. Three different real-world data sets were used to compare the results of our GA vs. transportation field experts' job assignments. For the three data sets, all 21 Pareto efficient solutions yielded improved overall job completion times. In 57 % (12/21) of the cases, the Pareto efficient solutions also utilized fewer drivers than the field experts' job allocation strategies.

Introduction and background of BSL

Amid significant shifts in the socioeconomic landscape, the growing complexity of global supply chains and innovative technological advances, it has become increasingly challenging for transportation companies to maintain their competitive advantage. Transportation-based companies are incessantly looking for ways to cut logistics costs, reduce waste, utilize information technologies, improve operational efficiency and increase overall productivity levels. Transportation companies are ultimately tasked with delivering packages efficiently and exceeding customer expectations. After extensively researching logistics companies and existing methodologies for vehicle routing problems (VRP), it became apparent that the literature focused on more generalized problems that did not collectively address many of the variants that are common to transportation companies. Specifically, we perused the literature for research about solving VRPs that involved a heterogeneous fleet of vehicles, multi-depot subcontractors (drivers), and pickup/delivery time window and location constraints. Since we were unable to find existing literature that collectively addressed the aforementioned variants in a VRP, the initial phase of our research involved developing an evolutionary algorithm to minimize the total time to complete all jobs with the fewest number of route drivers given pickup/delivery times, locations, and vehicle capacity constraints (Lightner-Laws et al. 2016). Our results revealed an interesting dichotomy between the objectives of minimizing the overall completion time vs. the number of route drivers needed to complete all jobs. There was an apparent trade-off between improving either the total job completion time or the number of drivers. Consequently, in this phase of our research, our primary objective is to minimize this bi-criteria problem by exploring varying weights that prioritize the time vs. the number of drivers needed. We plan to use a nested bi-criteria genetic algorithm (GA) that computes a Pareto frontier to explore this VRP, with a heterogeneous fleet of 4 vehicle types, multi-depot subcontractors (drivers), and pickup/delivery time window and location constraints.
Our model will explore the issues of weighting the objectives and provide Pareto front solutions that can be used to make decisions on a case-by-case basis. Three data sets from BSL, a mid-sized transportation company, will be used in this paper. BSL has customarily used in-house field specialists to assign jobs to each driver based on the pickup/delivery times, location, number of packages and weight of the load. Although this is a common practice, management wants to automate their process and find more efficient methods to allocate jobs for route drivers as a means of improving operations and competitiveness. Ultimately, our model will provide the Pareto front solutions (route assignments) that BSL can use on a case-by-case basis to assign jobs to route drivers. This paper is organized as follows: Sect. 2 provides a review of the literature about VRPs. In Sect. 3, we describe our VRP and the associated constraints. Our GA framework and its specific components are presented in Sect. 4. The experimental results and discussion are given in Sect. 5. Finally, the conclusion and future research directions are presented in Sect. 6.

Literature review

The vehicle routing problem is a combinatorial problem that seeks to find the optimal route that minimizes the total travel distance required to deliver packages for a given set of customers. Typically, customer demand is satisfied by a homogeneous fleet of vehicles from a central depot. Dantzig and Ramser's (1959) capacitated vehicle routing problem (CVRP) is a VRP in which the homogeneous fleet of vehicles servicing the demand has a limited capacity which cannot be exceeded. The CVRP precipitated variant streams of research about multiple depots (Baldacci and Mingozzi 2009; Lau et al. 2010; Ombuki-Berman and Hanshar 2009; Vidal et al. 2011a), heterogeneous fleets (Baldacci and Mingozzi 2009; Brandao 2011; Choi and Tcha 2007), time windows for making deliveries (VRPTW), and designated pickup/delivery times (Dumas et al. 1991; Ropke and Cordeau 2009; Ropke et al. 2007; Baldacci et al. 2011). A wide variety of closed form techniques have been used to solve VRPs. However, as more complex constraints are considered, finding an explicit optimal solution becomes computationally expensive and virtually impossible to ascertain; thus researchers have explored a variety of heuristics such as data mining (Chen et al. 2012), evolutionary algorithms (Vidal et al. 2011b; Lau et al. 2010), tabu searches (Cordeau and Maischberger 2011; Brandao 2011), graph theory (Likaj et al. 2013) and simulated annealing (1993) to solve these complex transportation problems. All of these approaches seek to find near optimal solutions to address routing problem variants. There has been extensive research on VRPs; however, Pisinger and Ropke (2007) are among the few who have developed a single robust heuristic that can be used to solve five different variants of this transportation problem. Their approach will solve the vehicle routing problem with time windows (VRPTW), the capacitated vehicle routing problem (CVRP), the multi-depot vehicle routing problem (MDVRP), the site-dependent vehicle routing problem (SDVRP) and the open vehicle routing problem (OVRP). While this heuristic has been applied to a variety of VRPs individually, it was not applied to multiple variants simultaneously in a single problem instance (i.e., a CVRP with multi-depots, time window constraints and a heterogeneous fleet) or to multi-objective problems.
Research has been conducted on general multi-objective transportation route problems using linear programming parametrics (Aneja and Nair 1979; Gal 1975) and adjacent efficient methods (Evans and Steuer 1973; Yu and Zeleny 1974). Researchers have also used heuristic techniques and Pareto front solutions to explore bi-criteria transportation problems (Muller 2010; Prakash et al. 2014; Konak et al. 2006; Abounacer et al. 2012). Recently, an increasing number of studies have emerged to solve a multi-criteria VRPTW. Tan et al. (2006) implemented a hybrid multi-objective evolutionary algorithm to minimize the total travel distance and the number of vehicles used to meet customer demand. They incorporated Pareto's optimality concepts for determining the best solutions to solve their multi-objective problem. Muller (2010) examined a VRPTW that utilized a homogeneous fleet of vehicles from a central depot. The objective of their research was to minimize overall costs and penalties for not adhering to soft time constraints. Their optimization problem had two competing objectives that made simultaneously optimizing both cost and penalties challenging. The dual objectives formed a Pareto front that the decision maker used to determine which solution best fits their needs. Their multi-objective optimization problem was solved using the ε-constraint method (Coello et al. 2002; Miettinen 1999), graph theory, Solomon's I1 insertion heuristic, ejection chain theory, or-opt and 2-opt procedures (Potvin and Rousseau 1995). Zou et al. (2013) proposed a hybrid swarm optimization model for minimizing the number of vehicles utilized to serve customer demand, the total travel distance and the total waiting times. Their model assumed that there were an unlimited number of service vehicles. Banos et al. (2013) presented a Pareto-based simulated annealing model for solving a VRPTW that minimized the travel distance and the imbalance (travel distance and vehicle loads) of the individual routes. Most single and multi-objective VRPTW research has been tested using Solomon's benchmark data (1987). This data set consists of 56 different problems with varying fleet sizes, vehicle capacities, and extensive information regarding the location of customer demand and travel times/distances between different location sites. This data set has been used extensively to compare different heuristic methods for solving a VRPTW. The research presented in this paper includes the additional variants of using a heterogeneous fleet and multiple depots, with a multi-objective VRPTW. Although these additional variants are common for a variety of real-world delivery/logistics scenarios, an exhaustive exploration of the literature did not reveal existing research about a VRPTW with all of the variants presented in this paper. As a result, no standardized benchmark datasets are available to test and compare with the results of this research. Lightner-Laws et al. (2016) developed a heuristic solution for a VRP where multiple variants are presented in a single problem instance. Their approach utilized a GA to solve a VRP with a heterogeneous fleet of vehicles, multi-depot subcontractors, and constraints on pickup/delivery times and locations. The solution aimed to minimize the overall driving time to complete all job assignments while utilizing the fewest number of drivers. While their approach addressed multiple variants of the classic VRP, it only marginally explored the trade-off between overall driving time and the total number of drivers.
This paper further develops the solution proposed by Lightner-Laws et al. (2016) to optimize a multi-objective VRP where multiple variants of the problem exist in a single problem instance. We develop a nested GA, which explores the VRP for a heterogeneous fleet of vehicles with multi-depot subcontractors and constraints on pickup/delivery times and locations. We provide Pareto front solutions that allow BSL management to fully explore the trade-off between these two objectives.

Problem formulation

The multiple depot VRP presented in this research is concerned with the execution of a set of jobs where each job represents a package to be delivered. Each job is associated with a set of requirements that specify how the job is to be fulfilled. Six requirements are specified for each job: the earliest possible pickup time (PT), the latest possible delivery time (DT), the pickup location (PL), the delivery location (DL), the required vehicle type, and the total job weight (JW). PT is the earliest possible time the package can be picked up, while DT is the latest possible time that the package can be delivered. Specialists use their expertise to assign the appropriate vehicle types (based on customer estimates of the weight, size, and number of packages) for each job. Although cars (C), SUVs (S), box trucks (B), and tractor trailers (T) are the vehicle types used in this problem formulation, alternative vehicle types could easily be incorporated as well. Finally, JW is the sum of the weight of all packages for a given job. All jobs, Ji (where i = 1...N and N is the total number of jobs), to be fulfilled are known in advance. A potential driver list is created for each job, Ji (based on customer input, estimated weight, size, and capacity constraints), from a set of available drivers, Dk (where k = 1...M and M is the total number of available drivers). A driver's job completion time is calculated from the time they leave their home and pick up their first job until the time their last job is delivered. The overall total time for all route deliveries is equivalent to the sum of the job completion times for each individual driver. Both mapping software and input from field experts are used to determine the travel time between any home, pickup or delivery location (Lightner-Laws et al. 2016). BSL wishes to automate the process of allocating jobs to drivers and designating the order in which jobs should be picked up/delivered. The driver assignments should be made in a manner that minimizes total travel time and the total number of drivers required to complete all jobs. Holland (1975) developed a stochastic search technique, a genetic algorithm (GA), which incorporates features of natural selection to solve optimization problems. The main components in the design of a GA include the candidate solution representation (encoding), initial population generation, a selection strategy, genetic operators (mutation and/or crossover), a fitness evaluation function, and a termination condition. A GA starts with a population of initial solutions. Genetic operations (mutation and/or crossover events) alter initial parent solutions to create new children solutions. The next generation of solutions is selected from the set of parent and children solutions, in such a way that the best solutions have a higher probability of continuing to the next generation. The goal is to produce improved solutions at each generation by ultimately allowing natural selection to filter out the weaker candidates. Our optimization model features a nested GA. Our main GA seeks to minimize the total distance traveled and the number of drivers required to complete all jobs. This genetic algorithm employs a secondary GA to help compute the fitness of a candidate solution.
The secondary GA minimizes the total travel time for an individual driver by finding the optimal ordering of assigned pickups and deliveries. Our primary GA will be referred to as the Outer GA, and the secondary GA will be referred to as the Inner GA.

Candidate solution representation/initial population generation

Candidate solution representation refers to the encoding scheme that is used to represent a candidate solution. Suppose we were given the five jobs and potential driver list for completing each job shown in Table 1. As mentioned in a previous section, available driver lists for each job are determined by BSL experts based upon the availability of BSL drivers with vehicle types that meet the established requirements. For this example, job J1 can be serviced by driver D1 or D2; however, driver D4 is the only driver available to service job J5. Figure 1 depicts our encoding scheme for three possible candidate solutions to this problem. Our encoding scheme includes the pickup and drop off locations for each job since optimal driver routes may include consecutive (nested) pickups and/or drop offs when multiple jobs are assigned to a single driver, in addition to the maximum amount of weight that the driver is carrying at any given time. In the first solution, driver D2 was assigned job J1; thus the pickup location J1PL and drop off location J1DL are assigned to driver D2. Since driver D2 only carried job J1, the maximum weight that the driver carried at any given time is 300 lbs (the total package weight for job J1). Driver D3 was assigned jobs J2, J3, and J4, where they picked up and delivered each job consecutively, and the maximum weight that the driver carried at any given time was 850 lbs. Additionally, driver D4 was assigned job J5 (600 lbs). In the second solution, driver D2 was assigned jobs J1 and J3; driver D3 was assigned jobs J2 and J4, and driver D4 was assigned job J5. The last candidate solution assigned jobs J1, J2, J3, and J4 to driver D2, and job J5 to driver D4. The maximum weight carried by a driver is recorded at all times. To create our initial population, we use the aforementioned encoding scheme to randomly assign job pickup and delivery locations to drivers, based upon the potential drivers list for each job. We impose large time penalties in the fitness function for each candidate solution if time conflicts make it impossible for the assigned driver to complete all jobs within the given time constraints, or if the known vehicle capacity was exceeded. These penalties effectively prevent infeasible candidates from being considered when other feasible candidates have been found.

Selection strategy

We adopt the tournament selection strategy to select candidate solutions for breeding. Our "tournament" is conducted by randomly selecting two candidate solutions from the current population, comparing their fitness values, and then declaring the strongest and weakest candidate of the pair (i.e., stronger candidates have high fitness values). Since weaker candidates can sometimes produce strong progenies, we set a selection probability of 0.9 that the strongest candidate solution from each pair will be selected for genetic operation. Additionally, we employ the elitism strategy, where the best candidate solutions from each generation are directly copied into the next population, unaltered. For our model, we select the top three candidate solutions to directly move on to the next generation.
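A compact sketch of this selection step (elitism plus binary tournaments with a 0.9 selection probability) is shown below. It is illustrative only: the Candidate record is a stand-in for the full encoding, which also carries routes and weights.

```java
import java.util.*;

public class OuterSelection {
    // Stand-in for the real encoding: driverForJob[i] is the driver assigned
    // to job i; fitness follows the paper's convention (higher = stronger).
    record Candidate(int[] driverForJob, double fitness) {}

    static List<Candidate> select(List<Candidate> pop, Random rnd) {
        List<Candidate> next = new ArrayList<>();
        // Elitism: the three fittest candidates move on unaltered.
        List<Candidate> sorted = new ArrayList<>(pop);
        sorted.sort(Comparator.comparingDouble(Candidate::fitness).reversed());
        next.addAll(sorted.subList(0, Math.min(3, sorted.size())));
        // Binary tournaments until the next generation is fully populated.
        while (next.size() < pop.size()) {
            Candidate a = pop.get(rnd.nextInt(pop.size()));
            Candidate b = pop.get(rnd.nextInt(pop.size()));
            Candidate strong = a.fitness() >= b.fitness() ? a : b;
            Candidate weak = strong == a ? b : a;
            // The stronger candidate wins with probability 0.9; winners are
            // then handed to the mutation operator (next section).
            next.add(rnd.nextDouble() < 0.9 ? strong : weak);
        }
        return next;
    }
}
```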
We then conduct tournaments, as described above, until our next generation is fully populated.

Genetic operator-mutation

Once a candidate solution is identified via the tournament selection process, our model utilizes mutation as the genetic operator. A candidate solution is mutated as follows: 1. We randomly select three current job assignments. 2. For each job selected, we randomly reassign the job to a different driver in the potential driver list for that job. If a selected job only has one potential driver in its list, no change is made. For example, suppose we are given the potential drivers list provided in Table 1, and our tournament selection process produced Candidate Solution 1 (from Fig. 1). Our process for mutating Candidate Solution 1 is illustrated in Fig. 2. In step 1 of this figure, we see that jobs J1, J3, and J5 are randomly selected to mutate. In step 2, we review the potential driver lists for the three jobs selected, and reassign each job to a different driver from their respective list. Since driver D4 is the only available driver that can complete job J5, this job assignment remained unchanged.

[Fig. 2: Mutating Candidate Solution 1. Step 1: jobs J1, J3, and J5 are randomly selected to mutate. Step 2: from Table 1, J1 (drivers D1 or D2) is reassigned from D2 to D1; J3 (drivers D2 or D3) is reassigned from D3 to D2; J5 must remain with driver D4.]

Fitness evaluation

The fitness evaluation function mathematically expresses the value of a candidate solution. Genetic algorithms are primarily concerned with finding the best or "most fit" solutions for the final population. Accordingly, we start our fitness evaluation by checking the vehicle weight capacities for all proposed driver assignment solutions. We immediately impose a large fitness penalty value on all solutions that violate capacity limits, thus reducing the likelihood of invalid solutions continuing to the next generation (Michalewicz 1995). For all valid solutions we focus on the multi-objective goal of minimizing the travel time and the number of drivers to complete all jobs. Like many multi-criteria problems, it is challenging to optimize both objectives simultaneously. In an effort to address potential conflicts, we aim to generate a Pareto optimal set consisting of solutions that are not dominated by either objective. For our problem, we define our first objective as minimizing the travel time and our second objective as minimizing the number of drivers. We use the weighted approach to formulate our multi-objective problem into the following single scalar objective function:

F(x) = w * z1(x) + (1 - w) * z2(x),   (4.1.1)

where x represents a candidate solution, w is the weight, and zi(x) represents the ith normalized objective function. Our normalized first objective, z1(x), is

z1(x) = f1(x) / f1*.

In this equation f1(x) is the travel time required for all jobs to be completed using the driver allocation specified by candidate solution x. A separate GA model, which we refer to as our Inner GA (discussed below), is used to compute the minimum time for a driver to complete the assigned jobs. The Inner GA returns the time and optimal order that jobs should be picked up/delivered.
Thus, for a single candidate solution, each driver assignment must be processed through our Inner GA, and the sum of the minimum times returned for all drivers will serve as f1(x) for the candidate solution. The function f1* is the minimal travel time to complete all jobs if we consider time as our only objective. We discuss f1* further below (Initial GA model parameters). Conversely, our normalized second objective is computed as

z2(x) = f2(x) / f2*,

where f2(x) is the total number of drivers that the candidate solution x allocated to complete all jobs, divided by the minimal number of drivers, f2*. The denominator in this equation represents the minimal number of drivers to complete all jobs if we consider the total number of drivers as our only objective. We discuss f2* further below (Initial GA model parameters). One of the challenges in using the weighted sum approach for addressing multi-objective problems is determining the appropriate weights to accurately reflect the priorities of the objectives. Since BSL was unable to definitively prioritize their goals, this issue of how to weight objectives surfaced in our research. Management asserted that determining the best trade-off between minimizing time and the total number of drivers is a judgment call that is made on a case-by-case basis. Accordingly, we choose to vary the weight in our multi-objective fitness function (Eq. 4.1.1) from 0 to 1, using increments of 0.1; the rationale is that varying the weight captures a spectrum of driver allocation solutions by gradually shifting the priority for each objective. We use this approach to determine the Pareto front solution set for our problem.

Initial GA model parameters

Before running our GA using our multi-objective fitness function (described in Eq. 4.1.1) we must determine f1* and f2*. f1* is determined by running our GA using total travel time as our fitness function and completely ignoring the number of drivers required to complete all jobs. The best time produced by our single objective GA determines our f1* value for Eq. 4.1.1. We determine the value of f2* by running our GA using the single objective of minimizing the number of drivers needed to complete all jobs, without regard to the requisite travel distance. These parameters are then used in Eq. 4.1.1 to determine driver allocations for our multi-objective fitness function.

Termination condition

The Outer GA continues for a fixed number of generations or until no improvement in solution quality is seen. The final population is then analyzed to determine the solution with the minimum time and then the fewest number of total drivers.

Evolutionary algorithm for inner GA

The job assignments for a single driver are input into the Inner GA. The Inner GA is used to process all driver assignments and return the corresponding minimum time and ordering that jobs should be picked up/delivered. For example, let us consider the following assignment for Driver 2: [D2H, J1PL, J1DL, J2PL, J2DL]. Originally the driver is scheduled to pick up job J1, deliver job J1, pick up job J2, then deliver job J2. However, the minimum time solution may be to pick up job J1, pick up job J2, deliver job J2, then deliver job J1. The Inner GA would accept the original driver assignment above, and return the candidate solution below with its respective travel time: [D2H, J1PL, J2PL, J2DL, J1DL]. In computing the total travel time, we assume that drivers always begin at a known home location, before making their first pickup. The travel time ends after the last job is delivered.
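Putting these pieces together, the Outer GA's scalarized objective (Eq. 4.1.1) can be sketched as below. This is a minimal illustration, not the actual implementation: the types, the penalty value, and the helpers violatesCapacity() and innerGaTime() (which plays the role of the Inner GA just described) are hypothetical stand-ins.

```java
import java.util.List;

public class OuterFitness {
    // Minimal stand-in types; the real encoding also carries weights and times.
    record DriverAssignment(int driverId, List<String> route) {}
    record Candidate(List<DriverAssignment> assignments) {
        int driversUsed() { return assignments.size(); }
    }

    static final double PENALTY = 1e9; // large penalty for infeasible solutions

    // Eq. 4.1.1: F(x) = w*z1(x) + (1-w)*z2(x), which the Outer GA minimizes.
    static double objective(Candidate x, double w, double f1Star, double f2Star) {
        if (violatesCapacity(x)) return PENALTY;
        double totalTime = 0.0;
        for (DriverAssignment d : x.assignments()) {
            totalTime += innerGaTime(d); // Inner GA: min route time per driver
        }
        double z1 = totalTime / f1Star;       // normalized total travel time
        double z2 = x.driversUsed() / f2Star; // normalized number of drivers
        return w * z1 + (1 - w) * z2;
    }

    // Hypothetical hooks: the capacity check and the Inner GA route optimizer.
    static boolean violatesCapacity(Candidate x) { return false; /* placeholder */ }
    static double innerGaTime(DriverAssignment d) { return 0.0;  /* placeholder */ }
}
```

Since the paper's tournament treats higher fitness values as stronger, a candidate's fitness could then be taken as, for example, the negation of this objective.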
For a candidate solution to be considered valid, the pickup for each job must precede its delivery in the assignment ordering. Additionally, a substantial time penalty is added to any potential assignment that cannot (due to travel time conflicts) meet the required pickup and delivery times of all assigned jobs. This penalty significantly reduces the likelihood that these solutions are selected to continue on to later generations.

Candidate solution representation/initial population generation

We retain the encoding scheme shown above for encoding candidate solutions for our Inner GA. The initial pool of candidate solutions for the first generation is created by taking the list of pickup and drop off locations for a driver and randomly shuffling their respective positions to form a new initial candidate solution. The generated solution is then checked to ensure that it is valid. Figure 3 shows a sample initial generation that could be generated from an original driver assignment. In this figure the notation DkH denotes the home location of driver Dk.

Selection strategy

We employ the same selection strategy used for the Outer GA described above.

Genetic operator-mutation

Once a candidate solution is identified through our tournament selection process, our model utilizes mutation as the genetic operator. To be consistent with common GA notation, we refer to a candidate solution as a chromosome and we refer to a single item within a solution as a gene. A candidate chromosome is mutated as follows (a code sketch of this procedure follows the Fig. 4 example below):
• Step 1: Randomly select one drop off or pickup location within the chromosome (i.e., randomly select one gene).
• Step 2: With a 0.5 probability, exchange the selected gene with its neighbor to the left or right.
• Step 3: Check the validity of the new solution by making sure that the following conditions are met: (a) the first position must remain the driver's home location, and (b) each drop off location must come after the respective pickup location. If the solution is not valid, repeat steps 1 and 2 above up to three times, in an attempt to find a valid mutated solution. After 3 attempts, if a valid solution has not been found, the original candidate solution will be copied unaltered into the next generation.

Figure 4 shows a mutation process that yields an invalid candidate solution:

[Fig. 4: Selected candidate solution [D1H, J1PL, J1DL, J2PL, J3PL, J2DL, J3DL, J4PL, J4DL]. Step 1: the gene J1DL is selected. Step 2: the probability process selects its left neighbor, J1PL, and the two genes are exchanged. The result is not valid, since the ordering has job J1 being dropped off before it is picked up, so the mutated solution must be abandoned.]

For this instance, we must abandon the mutated solution and make up to two additional attempts to mutate the original candidate solution. If the future attempts yield a valid solution, the mutated candidate solution will continue to the next generation. If the future attempts fail to produce a valid mutated solution, the original candidate solution will move on to the next generation, unaltered.
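A minimal sketch of this swap-and-validate mutation is shown below, assuming a simple list-of-strings encoding such as [D1H, J1PL, J1DL, ...]; the class and method names are illustrative.

```java
import java.util.*;

public class InnerMutation {
    // Pick a gene (never the home slot), swap it with a random neighbor, and
    // keep the child only if it is valid; after three failed attempts the
    // original chromosome continues unaltered, as described in the text.
    static List<String> mutate(List<String> chrom, Random rnd) {
        for (int attempt = 0; attempt < 3; attempt++) {
            List<String> child = new ArrayList<>(chrom);
            int g = 1 + rnd.nextInt(child.size() - 1);  // skip index 0 (home)
            int n = rnd.nextBoolean() ? g - 1 : g + 1;  // left/right, p = 0.5
            if (n < 1 || n >= child.size()) continue;   // would move the home slot
            Collections.swap(child, g, n);
            if (isValid(child)) return child;
        }
        return chrom;
    }

    // Valid iff every drop off ("...DL") appears after its matching pickup.
    static boolean isValid(List<String> c) {
        for (int i = 0; i < c.size(); i++) {
            String gene = c.get(i);
            if (gene.endsWith("DL")) {
                String pickup = gene.substring(0, gene.length() - 2) + "PL";
                if (!c.subList(0, i).contains(pickup)) return false;
            }
        }
        return true;
    }
}
```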
Figure 5 illustrates a mutation process that yields a valid candidate solution:

[Fig. 5: Selected candidate solution [D1H, J1PL, J1DL, J2PL, J3PL, J2DL, J3DL, J4PL, J4DL]. Step 1: the gene J3PL is selected. Step 2: the probability process selects its right neighbor, J2DL, and the two genes are exchanged. The result is valid, since the ordering has every job being picked up before delivery, so the mutated candidate solution moves on to the next generation.]

Fitness evaluation

The fitness value for a candidate solution is calculated by adding the travel times to reach each successive location in the solution. For example, consider the candidate solution [D1H, J1PL, J1DL, J2PL, ...]. Its fitness value would be the travel time from driver D1's home to job J1's pickup location, plus the travel time from job J1's pickup location to job J1's drop off location, plus the travel time from job J1's drop off location to job J2's pickup location, and so on.

Termination condition

The Inner GA continues for a fixed number of generations or until no improvement in solution quality is seen. Once the termination condition is met, the Inner GA returns the minimum time solution (and ordering) for the driver's job assignments.

Sample problem using BSL parameters

Initially, we used a small sample problem to test our GA and ensure that all six variants were properly incorporated into the model. We used BSL's Midwest metropolitan service area for the pickup/delivery locations in the sample problem. To determine the time and distance required to complete jobs within the service area, 141 zones were established based on geographical locations. GPS software was then used to ascertain the distance and time between any two zones within the service area; finally, this information was used to calculate the overall time and distance required to complete all jobs. There were fifty drivers, with unique driver identification numbers (ID#), available to service customers in this sample problem. Specifically, there were 18 car drivers (ID# D1-D18), 16 SUV drivers (ID# D19-D34), 13 box truck drivers (ID# D35-D47) and 3 tractor trailer drivers (ID# D48-D50). It is worth noting that SUVs have a unique degree of flexibility because they could be used for jobs which require either cars or SUVs. The sample input data are displayed in Table 2. Specifically, the table gives the earliest pickup times, the latest delivery times, the pickup/delivery location zones, the vehicle type required, the maximum vehicle type weight capacity, and the job weights for 11 different jobs. A perusal of the vehicle types for jobs 2, 4 and 6 raises two questions: (1) why is an SUV required for Job 2 when the job weight is only 180 lbs? and (2) why is a box truck required for jobs 4 and 6, when the weights are only 1000 and 468 lbs, respectively? These jobs highlight the need for transportation field experts to be involved in the process of assigning vehicle types. Although these three jobs weigh considerably less than the maximum vehicle type weight capacity, other factors such as the length, width, surface area, fragility, density, materials handling requirements, etc. must also be considered when choosing the appropriate vehicle type; thus, factors other than simply the weight dictated that a larger vehicle was required to complete jobs 2, 4 and 6. Table 3 displays the four Pareto front solutions and specific coordinates (total travel time, number of drivers) for the sample problem.
In this table, the MTPW is the maximum total package weight at any time and the VC is the maximum amount of cargo that the vehicle can carry (or the maximum vehicle type capacity). Let us examine the W = 0.3, Run 2, driver 19 data. For this solution, driver 19 uses an SUV to complete jobs 1, 7 and 8. Specifically, driver 19 would pick up job 8, pick up job 7, drop off job 7, drop off job 8, pick up job 1 and drop off job 1. Jobs 7 and 8 would be in the vehicle at the same time (MTPW = 525 lbs) and job 1 (MTPW = 280 lbs) would be in driver 19's SUV alone. The maximum total package weight at any time would be less than the maximum amount of cargo that an SUV can carry (VC = 2000). Figure 6 shows a graph of the 110 solutions found by our GA, under varying values of w. For each solution, the graph shows the travel time vs. the number of drivers required to complete all jobs. Our Pareto front solutions are depicted as square-shaped data points in the graph. The solutions in Table 3 and Fig. 6 encompass all six variants: pickup/delivery time windows, pickup/delivery locations and vehicle capacity constraints.

Results from GA vs. BSL job allocations

We conducted our experiments using actual BSL data from three different days. In Table 4, there is a complete list of all jobs that needed to be filled on three different days. This table also provides the earliest pickup times, the latest delivery times, the pickup/delivery locations, the type of vehicle needed for the job and the drivers that could potentially do the jobs. There were sixty-eight drivers, with unique driver identification numbers (ID#), available to service customers. BSL does not currently preserve information about the estimated weight or package dimensions of fulfilled jobs in their records. Thus, for these experiments we assume that BSL vehicle type assignments are made correctly and hence have the available capacity to fulfill all assigned jobs over the course of a day. However, it is worth noting that our model is designed to ensure that vehicle capacity constraints are not violated when package weights are provided (refer to the sample problem in the previous section). Preliminary experiments for both the inner and outer GAs, which systematically investigate alternative population sizes and maximum numbers of generations, were executed to determine efficacious GA parameter settings. Upon consideration of total computation time, overall job completion time, and marginal improvement of generated results, the population size and maximum number of generations were both set to 25. Prior to implementing our bi-criteria objective, the GA was run twice: once with the objective of minimizing the total travel time and once to minimize the number of drivers. The results from these two runs were used to determine f1* and f2* as described in Sect. 4 above. The minimum travel time, f1*, was determined to be 1260 for our Day 1 data set, 1673 for Day 2, and 2039 for Day 3. The minimum number of drivers, f2*, was determined to be 8 for our Day 1 data set, 6 for Day 2, and 15 for Day 3. Our multi-objective fitness function was investigated for w = 0, 0.1, 0.2, ..., 1. For each of the 3 data sets, the GA was run 10 times for each value of w (110 total runs per data set). All experiments were performed using a Windows 7 Professional 64-bit operating system with 6 GB of RAM and an Intel(R) Xeon(R) CPU W3530 @ 2.8 GHz. For the Day 1 jobs, Fig. 7 shows a graph of the 110 solutions found by our GA, under varying values of w.
For each solution, the graph shows the travel time vs. the number of drivers required to complete all jobs. Similarly, Figs. 8 and 9 show the solutions yielded from our GA for Days 2 and 3, respectively. Table 5 displays the specific coordinates (total travel time, number of drivers) for our 7 Pareto front solutions for Day 1, 6 Pareto front solutions for Day 2, and 8 Pareto front solutions for Day 3. Tables 6, 7 and 8 show the specific job assignments for each Pareto solution, overall completion times and number of drivers for our GA for Days 1, 2 and 3. The final column of these tables displays the job assignments that BSL experts gave to each driver; while the set of jobs is listed for each driver, the order that the jobs were picked up and delivered was not provided. Thus, we ran the job assignments that BSL provided through our program that determines the optimal route (i.e., the order for pickups/deliveries) for a set of jobs. We used these optimal routes to determine the minimum time that it would take the drivers to complete all orders. It is important to note that these times are based on mapping software's estimates of driving times between two locations and may not account for delays due to traffic or weather or alternative route selections made by individual drivers. If we examine our Day 1 results, there are 7 Pareto optimal solutions with total job completion times ranging from 1243 to 2533 min as compared with 1571 min from the BSL job assignments. This represents an improvement in completion time of up to 21 %. Additionally, while 14 drivers were required for the BSL solution, only 8-13 drivers were needed for the Pareto solutions from the GA. Thus, the GA was able to provide automated solutions that would reduce both travel time and the number of drivers. Specifically, one of our Pareto solutions yielded 1414 min and 9 drivers. This solution reduced the travel time by 10 % and showed a 36 % decrease in the number of drivers when compared to BSL's manual driver allocations. For Day 2, the Pareto front consists of 6 distinct points. The total job completion times for the Pareto solutions ranged from 1673 to 2479 min for the GA while the BSL solution was 1858 min. This represents time improvements of up to 10 %. Here, 12 drivers were required for the BSL solution. For Day 3, the 8 solutions in the Pareto front had job completion times spanning 2039-3883 min while utilizing 15-26 drivers. BSL's route allocation used 15 drivers who completed the jobs in 3698 min. The GA was able to improve the total completion time by up to 45 %. However, in providing this dramatic improvement in time, it utilized 11 more drivers than the BSL solution. One of our Pareto solutions yielded 2948 min with 18 drivers. With this solution, the travel time was reduced by 20 % with a 20 % increase in the number of drivers. The results from the GA on Days 1 and 2 revealed that the overall job completion times were markedly better than the BSL solutions while the total number of drivers was equal or better. For the Day 3 results, the GA provides dramatically better completion times but utilizes more drivers in doing so. Overall, our model was able to successfully automate their driver assignment problem while providing options for marked improvement of their combined objectives.
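The Pareto fronts reported above can be recovered from the (total time, number of drivers) pairs produced by the runs with a standard dominance filter. A minimal sketch (types and names are illustrative, not from the authors' code):

```java
import java.util.*;

public class ParetoFilter {
    record Point(double timeMinutes, int drivers) {}

    // A point is dominated if another point is no worse on both objectives
    // and strictly better on at least one; the Pareto front keeps the rest.
    static List<Point> paretoFront(List<Point> all) {
        List<Point> front = new ArrayList<>();
        for (Point p : all) {
            boolean dominated = false;
            for (Point q : all) {
                boolean noWorse = q.timeMinutes() <= p.timeMinutes()
                        && q.drivers() <= p.drivers();
                boolean better = q.timeMinutes() < p.timeMinutes()
                        || q.drivers() < p.drivers();
                if (noWorse && better) { dominated = true; break; }
            }
            if (!dominated) front.add(p);
        }
        return front;
    }
}
```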
Conclusion and future directions

In our collaborative efforts with BSL, we address their need to automate the route assignment process by developing a nested GA for a heterogeneous fleet of vehicles with multi-depot subcontractors, pickup/delivery time windows, varying pickup/delivery locations, job weights and vehicle capacity constraints. BSL can input job requests for a given day and the model will yield the Pareto optimal solutions for their problem. Our model specifies the driver assignments, the order that jobs should be picked up/delivered, and the total time and number of drivers needed to complete all jobs. This allows BSL the ability to fully explore the trade-offs between driving time and the number of drivers. Once a particular solution representing the best trade-off has been selected, the model will specify the driver assignments that should be designated. Three different data sets from BSL were used to compare the results of the GA and BSL field experts. Our GA showed that the total job completion times were improved by up to 21, 10 and 45 % on Days 1, 2, and 3, respectively. In addition, 12 of 13 Pareto front solutions for Days 1 and 2 reveal a decrease in the total number of drivers needed to complete all jobs. On Day 3, while results from the GA solutions yielded remarkably lower total job completion times, the BSL assignment yielded the fewest number of drivers. The Day 3 results indicate the need for us to continue to work with BSL to further understand how they prioritize both objectives (minimizing time vs. number of drivers required) on a daily basis. Overall, our GA results were promising and provided a substantive alternative to manually allocating job assignments. In the future, we plan to modify our algorithm so that we can explore how unforeseen changes in the environment or disruptions in transportation can impact route assignments. For instance, if a driver has delivered some jobs and has a breakdown, we will explore which drivers can take over the remaining jobs most efficiently. We also intend to solve this vehicle routing problem using other metaheuristic algorithms, such as a Tabu Search and Simulated Annealing. We plan to compare these metaheuristics and refine our methods of minimizing both objectives simultaneously.

[Table 6: Pareto front solutions from our GA and the BSL job allocations for the Day 1 jobs.]
Forgiveness from Emotion Fit: Emotional Frame, Consumer Emotion, and Feeling-Right in Consumer Decision to Forgive

Three studies examine an emotion fit effect in crisis communication, namely, the interaction between emotional frames of guilt and shame and consumer emotions of anger and fear on consumer forgiveness. Guilt-framing communication results in higher forgiveness than shame-framing for angry consumers, whereas shame-framing communication results in higher forgiveness than guilt-framing for fearful consumers. These effects are driven by consumers' accessible regulatory foci associated with anger/fear and guilt/shame. Specifically, feelings of anger activate a promotion focus that is represented by guilt frames, while feelings of fear activate a prevention focus that is enacted by shame frames. Compared with emotion non-fit (i.e., anger to shame and fear to guilt), emotion fit (i.e., anger to guilt and fear to shame) facilitates greater feeling-right and consumer forgiveness. The findings offer novel insights for the extant literature on emotion, crisis communication, and regulatory focus theory, as well as practical suggestions regarding emotional frames.

INTRODUCTION

Corporations regularly face a myriad of potential crises, which are low-probability and high-risk events (Yu et al., 2008). Crisis is not a matter of "if" but "when" in corporate life. Therefore, crisis communication - what and how the company says and does during and after a crisis - is essential for garnering consumer forgiveness and restoring consumer-company relations. However, crisis communication can also make the crisis situation worse (Coombs et al., 2010; Tsarenko and Tojib, 2012). So, how to shape appropriate strategies in response to corporate crises is critical for firms. Corporate crises trigger strong and frequent emotions for both consumers and companies (van der Meer and Verhoeven, 2014). In particular, the company is always trying to express "the right tone" during the crisis communication. For example, following the violence and vandalism of the first leg of the Europa League, Feyenoord apologized to Rome and said that "we feel ashamed for the behavior of our citizens in Rome." In contrast to Feyenoord's shame-framing response, the Canadian Red Cross "pleaded guilty" in the aftermath of the blood scandal, saying it "is deeply sorry for the injury and death caused to those who were infected by blood or blood products it distributed" in the 1980s and early 1990s. Are these two emotion-framed communications effective in facilitating consumer forgiveness? While considerable crisis communication research has focused on what the company should say (Kim et al., 2004; Tsarenko and Tojib, 2015) and when the company should respond to the crisis (Frantz and Bennigson, 2005), recently some researchers have begun to pay attention to how the company should communicate with consumers, that is, the way in which emotions should be framed and expressed in the crisis communication (Jin, 2010; Kim and Cameron, 2011). Although prior research has shown that communications of negative emotions (e.g., sadness) can substantially affect consumer forgiveness and trust (e.g., ten Brinke and Adams, 2015), previous research has not adequately addressed the impact of distinct negative emotions on consumer forgiveness, nor has it explained those effects.
In the current research, we will address this gap and investigate how and why the distinct negative emotions framed in the crisis communication impact consumer forgiveness. Drawing on prior literature on emotional frames and consumer emotion, we aim to examine how a company's crisis communication framed in terms of either guilt or shame influences forgiveness for consumers who are experiencing the specific emotions of anger or fear (induced by the crisis or incidental sources). Shame and guilt are self-conscious emotions, as they involve perceptions of the self (Tracy and Robins, 2004). These self-conscious emotional frames are frequently used in crisis communication for apologizing, excusing, and pleading, because they often signal that the self is wrong (Wolf et al., 2010). Shame and guilt share many similarities, but prior research in consumer psychology has demonstrated the distinct effects of these two emotions on construal level (Han et al., 2014), defensive processing (Agrawal and Duhachek, 2010), and coping processes (Duhachek et al., 2012). In the studies just cited, researchers only examine how consumers experiencing either shame or guilt would react differently, yet little is known about how consumers would perceive and evaluate messages framed in either shame or guilt, especially when the emotional frames are not aimed at priming consumers to feel shame or guilt. In addition, consumer emotions may interact with emotional frames (e.g., shame-framing and guilt-framing) to determine consumer forgiveness. Consumer emotions would affect information processing and decision-making whether the onset of emotions is related or incidental to the crisis. Research has suggested that emotional carry-over from the past could cause an implicit cognitive predisposition to appraise subsequent information (Lerner et al., 2004, 2007), such as those apologies framed by the guilt and shame emotions. Still, there is reason to believe that consumer emotions and a company's emotional frames may jointly influence consumer forgiveness. Collins' (2004) interaction ritual chains theory indicates that how one would feel and perceive not only depends on the current situation, but also on previous situations. Based on this rationale, consumer emotion can be seen as a manifestation of the past situation (i.e., the crisis). Specifically, we focus on the negative emotions of anger and fear, which are well-established as the most common emotions in crisis (Lerner and Keltner, 2001; Lerner et al., 2003). In the current research, we show that guilt-framing crisis communication results in higher forgiveness than shame-framing for angry consumers, while shame-framing communication leads to higher forgiveness than guilt-framing for fearful consumers. We refer to the correspondence between consumers' emotion and companies' emotional frame as emotion fit. On the consumer's side, anger, which is induced by a demeaning offense against the self (Lazarus, 1991), leads people to attribute blame to the wrongdoer (i.e., the company) and to prefer promotion-focused coping such as punishment and attack (Nabi, 2003; Carver and Harmon-Jones, 2009). Fear, which is experienced when facing uncertain and existential threats (Lazarus, 1991), makes people more likely to make pessimistic judgments and prefer prevention-focused coping such as a risk-averse orientation, protective solutions, and precautionary plans (Strauman and Higgins, 1987; Lerner and Keltner, 2001; Nabi, 2003).
On the company side, guilt-framing is associated with negative behavior evaluation (e.g., "we failed to control quality") and approach proneness (analogous to promotion focus), whereas shame-framing focuses on negative self-evaluation (e.g., "we failed to keep our promise on quality") and avoidance proneness (analogous to prevention focus) (Wolf et al., 2010). Thus, drawing on regulatory focus theory (Higgins, 1997, 1998, 2000; Brockner and Higgins, 2001; Cesario et al., 2004; Santelli et al., 2009), we postulate that when the company's communication frame (i.e., guilt vs. shame) fits with consumers' emotion (i.e., anger vs. fear) in regulatory focus (i.e., promotion vs. prevention), there will be more perceived feeling-right and in turn greater forgiveness compared with when there is incongruence (i.e., lack of fit). As a starting point, we review the literature on regulatory focus theory to provide a theoretical basis for the following hypotheses. Examined next is research that, respectively, differentiates anger/fear and guilt/shame in regulatory foci. This is followed by a section that documents how regulatory fit between a consumer's emotion and a company's emotional frame increases feeling-right and forgiveness. Finally, we assess evidence that consumers' perceived feeling-right mediates the main effects.

REGULATORY FOCUS THEORY

Regulatory focus theory delineates how individuals use the basic hedonic principle of approach and avoidance to achieve self-regulatory motivations. At any given point in time, individuals may engage in one of two different types of regulatory focus: promotion focus and prevention focus (Higgins, 1997). Specifically, when promotion focused, one is motivated by ideals, advancement, aspiration, and accomplishment, thereby heightening the salience of attaining gains (i.e., the presence of positive outcomes; Higgins, 1998). When prevention focused, one is concerned with safety, protection, duties, and responsibility, thereby increasing the salience of avoiding losses (i.e., the presence of negative outcomes; Higgins, 1998). Although much research treats regulatory foci as stable traits, far less research on the antecedents of regulatory focus shows that these regulatory patterns can be made temporarily accessible in situations (e.g., Crowe and Higgins, 1997; Lee et al., 2000). From this perspective, emotions might shape one's regulatory focus because emotions, both incidental and message-relevant, are always associated with motivational tendencies and appraisals (Lazarus, 1991; Nabi, 2003). In support of our emotion fit hypotheses, we next document two key premises. First, anger activates a tendency to rely on promotion, and fear induces an inclination focused on prevention. Second, guilt-framing crisis communication delivers a company's coping strategy associated with a promotion focus, and shame-framing indicates a company's prevention-focused coping strategy.

Anger/Fear Emotion and Regulatory Focus

Anger and fear, two common negative emotions in risk situations, share many similar consequences. For example, as Lerner and Keltner (2001) suggested, both anger and fear elicit a negative valence appraisal and thus decrease consumers' purchase intention. However, motivated by the appraisal-tendency framework, which holds that specific emotions can evoke different appraisals, recent research suggests some different effects of the two emotions.
Anger Activates a Promotion Focus

Anger arises when "I" or "we" are offended by "the other person, either through neglect or intentionally" (Lazarus, 1991; Li et al., 2014). In a corporate crisis, consumers tend to experience anger when a certain company causes an offense against the "self" (myself or ourselves). Literature on physiological psychology has demonstrated that anger is always associated with increases in pulse, blood pressure, and epinephrine (Henry, 1986; Cuddy, 2015). In terms of consequential reactions, angry people always process information heuristically (Tiedens and Linton, 2001) and are very likely to blame the company (Carver and Harmon-Jones, 2009). Moreover, feelings of anger drive a desire to take revenge and "fight" (see the customer revenge model; Grégoire et al., 2010; also see Skitka et al., 2006). In short, these physiological and behavioral reactions suggest that individuals who are experiencing anger tend to pursue promotion-focused goals. From these findings, we theorize that anger can activate a promotion focus.

Fear Activates a Prevention Focus

The emotion of fear is induced when one is facing an uncertain and existential threat, in which the threat is unpredictable and uncontrollable (Lazarus, 1991; Tiedens and Linton, 2001). In response to a fear-inducing crisis, consumers may exhibit an increase in proinflammatory cytokines that are related to submissive withdrawal (Moons et al., 2010). Additionally, feelings of fear or worry are always associated with uncertainty appraisals, which in turn lead people to make pessimistic judgments and precautionary plans (Lerner and Keltner, 2001; Lerner et al., 2003; Skitka et al., 2006). Thus, as Lazarus (1991) proposed, fear would generate a sense of helplessness about protecting against the loss. Importantly, Strauman and Higgins (1987) have demonstrated a significant relationship between fear/restlessness and actual-self/ought-self discrepancy (an index of prevention focus). Together, these findings suggest that experiencing fear leads individuals to adopt a prevention focus.

Guilt-/Shame-Framing and Regulatory Focus

Guilt and shame emotions in crisis communication can reveal information about the sender (i.e., the company; van der Meer and Verhoeven, 2014), including the sender's feelings, motives, and concerns. Although much research has demonstrated that both guilt and shame emotions can be elicited by one event and share many similarities (e.g., Tracy and Robins, 2004), recent research has begun to tease apart the different facets of the two emotions (e.g., Dearing et al., 2005; Agrawal and Duhachek, 2010; Duhachek et al., 2012).

Guilt Frames Represent Promotion-Focused Coping Strategies

Guilt arises when one (a person or a humanized object, in this case a company) realizes that he or she should take responsibility for past actions that have caused a violation that harms another (Lindsay-Hartz et al., 1995). Guilt frames focus on a specific behavior, such as "we made this mistake," emphasizing tension, remorse and regret over the "wrongdoing done" (see the review by Tangney et al., 2007). According to research on the behavioral consequences resulting from experiencing guilt (e.g., Wolf et al., 2010; Tangney et al., 2014), crisis communication framed by guilt emotion could deliver a company's desire to bring positive changes and show a company's motivation to repair damage done.
Thus, the information embedded in guilt frames indicates a company's approach tendency, representing a promotion-focused coping strategy.

Shame Frames Represent Prevention-Focused Strategies

In contrast, shame results when a person (in this case, a company) judges its wrongdoing as conflicting with its internal standards, norms, and goals, while involving a global negative feeling about the self (Han et al., 2014). Shame frames focus on the deficiency of the company itself (e.g., "we failed to keep our promise on quality") with feelings of being diminished, worthless, and powerless (see the review by Tangney et al., 2007). Drawing on previous findings that shame proneness leads individuals to escape, hide, and deny responsibility (e.g., Wolf et al., 2010; Treeby and Bruno, 2012; Tangney et al., 2014), shame-framing crisis communication is associated with a tendency to blame other factors for the misconduct and indicates a company's defensive motivation (Stuewig et al., 2010). As a consequence, shame frames can be treated as prevention-focused coping strategies because they deliver avoidance messages.

EMOTION FIT, FEELING-RIGHT, AND CONSUMER FORGIVENESS

Regulatory fit, an important concept in regulatory focus theory (Higgins, 1998), occurs when one's behavior, cognition, or strategic means is naturally congruent with one's current regulatory focus (Higgins, 2000; Cesario et al., 2004; Lee and Aaker, 2004; Lee et al., 2010). In the crisis communication context, we postulate that emotion fit results from regulatory fit between the consumer's emotions and the company's emotional frames. We developed above the proposition that angry consumers favor promotion-focused strategies, which are represented by guilt-framing crisis communications; therefore, an emotion fit arises between anger and guilt. In the same vein, fearful consumers prefer prevention-focused strategies, which are enacted by shame-framing communication; this engenders an emotion fit between fear and shame. The emotion fit, in turn, should result in higher consumer forgiveness. According to McCullough et al. (1998), forgiveness refers to the set of motivational changes whereby one becomes decreasingly motivated to retaliate against an offending partner, decreasingly motivated to maintain estrangement from the offender, and increasingly motivated by conciliation and goodwill toward the offender. That is, when the primary emotion is anger, consumers prefer a promotion-focused solution such as punishment. Guilt-framing crisis communication focuses on the wrongdoing and shows a company's promotion-oriented motivation to repair the damage done. Such guilt framing would satisfy angry consumers' expectations and relieve their emotional tension, hence increasing consumer forgiveness of the company. In contrast, shame-framing crisis communication, which indicates a company's defensive motivation, would be interpreted as a sign of the company's powerlessness, thereby decreasing consumer forgiveness. When the primary emotion is fear, consumers experience a sense of helplessness and want to be protected from potential harm. Expressions of shame communicate a negative feeling about the self. This self-deprecating apology would evoke empathy in fearful consumers, which in turn leads to forgiveness.
On the other hand, for fearful consumers, guilt-framing crisis communication may serve as a cue that the uncertain and existential threat is confirmed and that the company is the exact wrongdoer, thereby heightening feelings of uncertainty and thus decreasing consumer forgiveness. When there is an emotion fit, consumers tend to perceive their surroundings as more valuable and appropriate and are more likely to trust the company's communication (Higgins and Silberman, 2001; Laufer and Jung, 2010). Feeling-right is the product of regulatory fit, such that when individuals experience regulatory fit, they feel right about what they are doing. The importance and correctness with which a message is evaluated is conceptualized as the feeling of rightness (Cesario et al., 2004), which is subsequently used as evidence in the consumer's decision to forgive the company (Camacho et al., 2003). Santelli et al. (2009) suggested that feeling-right can explain why apologies are successful at eliciting forgiveness. Thus, we posit that an emotion fit between a consumer and a company positively influences the apology's perceived feeling-right, and then consumer forgiveness, compared with a mismatch (i.e., lack of fit), which could negatively influence the apology's perceived feeling-right and consumer forgiveness. Therefore, we hypothesize:

H1. For consumers experiencing anger, a guilt-framing crisis communication leads to higher (a) feeling-right and (b) forgiveness than shame-framing.

H2. For consumers experiencing fear, a shame-framing crisis communication leads to higher (a) feeling-right and (b) forgiveness than guilt-framing.

As discussed previously, regulatory foci serve as the mechanism linking consumer emotions and company emotions. Higgins (2000) argued that regulatory fit increases the feeling of rightness because it intensifies and sustains an underlying orientation. That is, a company communication strategy that fits consumers' emotion follows the regulatory focus favored by consumers, whereas a non-fit communication strategy follows a different regulatory focus. Furthermore, research has shown that tasks and decisions are evaluated more positively when they are conducted under regulatory fit (see Higgins, 2000; Cesario et al., 2004). Thus, this line of theorizing predicts that when angry consumers encounter a guilt-framing rather than a shame-framing communication, a promotion focus is activated to a greater degree, which in turn results in higher feeling-right and forgiveness. Similarly, when fearful consumers encounter a shame-framing rather than a guilt-framing communication, a prevention focus is activated to a greater extent, which in turn leads to higher feeling-right and forgiveness. Therefore, we hypothesize:

H3. For angry consumers, reading a guilt-framing rather than a shame-framing communication engenders the activation of a promotion focus that, in turn, drives the effects of emotion fit on feeling-right and forgiveness.

H4. For fearful consumers, reading a shame-framing rather than a guilt-framing communication engenders the activation of a prevention focus that, in turn, drives the effects of emotion fit on feeling-right and forgiveness.

Table 1 summarizes the theorizing and predictions. In the three studies that follow, we examine these hypotheses. Study 1 documents the interactive effect of angry/fearful consumer emotion and guilt-/shame-framing communication on feeling-right and forgiveness.
Study 2 identifies the underlying process by showing that these effects of consumer emotion and emotional frame on forgiveness are mediated by the feeling of rightness and that this feeling-right results from enhanced activation of the consistent regulatory focus. Finally, in Study 3, we employ a moderation approach to further verify the mediating role of regulatory focus by manipulating promotion focus and prevention focus directly.

STUDY 1

The core objective of Study 1 is to test H1 and H2. Study 1 employed a 2 (consumer emotion: anger vs. fear) × 3 (company's emotional frame: guilt vs. shame vs. no emotion) between-subjects design. Besides measuring self-reported forgiveness, this study also recorded participants' time spent viewing the company's communication. Viewing time captures the behavioral consequences of feeling-right induced by the company's crisis communication. Following the basic principles of Implicit Association Tests (IATs; Chang and Mitchell, 2011), the time spent reading a text is a predictor of perceived incompatibility, which is a manifestation of feeling-right (Camacho et al., 2003); that is, people read a text faster if they feel its content is "right" and "correct." Additionally, Chiles and Buslig (2012) found that perceived incompatibility in apologies can decrease recipients' forgiveness. Thus, in the context of crisis communication, viewing time is a good estimate of feeling-right for reading the company's public letter.

Materials

The stimuli consisted of two pieces of fictitious news. The first article described a cell phone explosion issue and aimed to induce either anger or fear. The second article reported that the cell phone company had responded to the explosion issue with a public letter using either guilt frames or shame frames.

Manipulation of Anger/Fear Emotion

Following Nabi (2003), we primed participants to experience anger or fear by emphasizing information related to the core relational theme of each specific emotion. In the anger condition, the article emphasized the company's intentional wrongdoing, whereas in the fear condition, the article focused on how other consumers suffered from the company's wrongdoing. A pilot study confirmed this emotion priming. Sixty-five graduate students (M age = 26.92; 53.8% males) were randomly assigned to either the anger condition or the fear condition. Participants first reported their baseline emotions using ten emotion descriptors (see Table 2; Kim and Cameron, 2011). They then read the article and completed a questionnaire designed to measure their post-crisis emotions, the trustworthiness of the article, and perceived severity, all using 7-point scales ranging from 1 (not at all) to 7 (very much). See Table 2 for participants' emotions before and after reading the article. A series of analyses of variance (ANOVAs) revealed significant differences between the angry-article and fearful-article conditions for anger (M angry article = 2.33 vs. M fearful article = 1.31), F(1,63) = 6.22, p < 0.05, ηp² = 0.09, and for fear (M angry article = 1.28 vs. M fearful article = 2.35), F(1,63) = 5.02, p < 0.05, ηp² = 0.07, but no differences for sadness, disgust, worry, or anxiety (ps > 0.18). In addition, no significant differences emerged between the two articles in trustworthiness or severity (ps > 0.20).
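To make the form of this manipulation check concrete, the sketch below runs a one-way ANOVA with an eta-squared effect size on made-up anger ratings for the two article conditions; the values, group sizes, and variable names are illustrative placeholders, not the pilot study's data.

```python
# Minimal sketch of a pilot manipulation check: one-way ANOVA on anger
# ratings across two article conditions, plus eta-squared (with two groups
# this equals partial eta-squared). Ratings below are invented placeholders.
import numpy as np
from scipy import stats

angry_article = np.array([3, 2, 4, 2, 3, 1, 2, 3], dtype=float)
fearful_article = np.array([1, 2, 1, 1, 2, 1, 2, 1], dtype=float)

f_stat, p_value = stats.f_oneway(angry_article, fearful_article)

all_ratings = np.concatenate([angry_article, fearful_article])
grand_mean = all_ratings.mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2
                 for g in (angry_article, fearful_article))
ss_total = ((all_ratings - grand_mean) ** 2).sum()
eta_sq = ss_between / ss_total

df_within = len(all_ratings) - 2
print(f"F(1,{df_within}) = {f_stat:.2f}, p = {p_value:.3f}, eta^2 = {eta_sq:.2f}")
```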
Manipulation of Guilt/Shame Frames

The guilt-framing article was entitled "A GUILTY letter: Apparel Corporate responds to the phone explosion," while the shame-framing article was entitled "An ASHAMED letter: Apparel Corporate responds to the phone explosion." We designed the guilt/shame frames based on the research by Duhachek et al. (2012) and by van der Meer and Verhoeven (2014). Specifically, in the guilt-framing communication condition, the CEO said "we feel deeply guilty about this serious issue" and "this new phone model failed to be safe" (focusing on the misconduct), promised to "take all responsibility regarding the event and recall all products" (showing approach-proneness), and finally "express[ed] deepest guilt and regret over this issue" again. In the shame-framing communication condition, the CEO said "we feel deeply ashamed to have allowed this issue to occur" and "we failed in our promise on quality and reliability" (focusing on the trait), promised to "fire the product manager who is in charge of this model and identify other lingering issues in this regard" (showing avoidance-proneness), and finally "express[ed] our deepest shame, and careful concern for our customers" again. See Appendix A for the details of the manipulation. The manipulation was pretested on fifty-seven participants (M age = 21.47; 46% male), who rated the extent to which the company was ashamed of the crisis and the extent to which the company felt guilty about the crisis. Following Zemack-Rugar et al. (2007), guilt consisted of three items ("according to the letter, the company is guilty/culpable/remorseful," α = 0.91) and shame consisted of two items ("according to the letter, the company is ashamed/humiliated," r = 0.77, p < 0.001), using 7-point scales ranging from 1 (not at all) to 7 (very much so). Participants exposed to the guilt-framing letter judged that the company showed more guilt than shame (M's = 5.37 vs. 4.05, respectively), t(30) = 4.95, p < 0.001, whereas participants exposed to the shame-framing letter judged that the company expressed shame to a greater extent (M's = 4.79 vs. 3.53, respectively), t(25) = 7.72, p < 0.001.

Methods

Two hundred and forty-three undergraduate students (M age = 21.26; 51% males) at a public university in China were recruited in exchange for partial course credit and were randomly assigned to the various cells. The study was conducted in a behavioral technology lab on the computer-based interface of E-Prime software. On entering the lab, participants were instructed to sit at individual cubicles with three-foot-high dividers, giving each participant a private space to complete the experiment independently. After exposure to one of the two crisis articles, participants reported the extent to which they felt angry and fearful. Feeling of anger was measured using three items ("I feel angry/irritated/aggravated," α = 0.90) and feeling of fear was measured using two items ("I feel fearful/scared," r = 0.42, p < 0.01; Dillard and Shen, 2007). Next, participants were randomly assigned to read another article, in which Apparel Corporate responded to the cell phone explosion issue using one of three frames: guilt-framing, shame-framing, or no emotional framing (control condition). See Appendix A for the details of the manipulation. Participants were told to read the company letter until they thoroughly understood it. Once they clicked the start button, viewing time was recorded.
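As an aside on the reliability statistics quoted above (e.g., α = 0.91 for the three guilt items), Cronbach's alpha can be computed directly from a participants-by-items rating matrix; a minimal sketch with hypothetical ratings follows.

```python
# Cronbach's alpha for a participants-by-items matrix.
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)).
# The example ratings are hypothetical 7-point responses.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: shape (n_participants, n_items)."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# e.g., three guilt items ("guilty", "culpable", "remorseful")
ratings = np.array([[6, 5, 6],
                    [4, 4, 5],
                    [7, 6, 6],
                    [3, 4, 3],
                    [5, 5, 6]], dtype=float)
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```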
Upon viewing the letter, participants reported their forgiveness of Apparel Corporate. Forgiveness was measured by modifying McCullough et al.'s (1998) Transgression-Related Interpersonal Motivation Inventory (TRIM; e.g., "I would be against the Apparel Corporate"), which is composed of the seven items of the Avoidance subscale and the five items of the Revenge subscale, using scales from 1 (not at all) to 7 (very much so). We did not incorporate the Benevolence subscale of the TRIM because benevolence represents pro-social motivation and is typically used in interpersonal contexts (McCullough et al., 2003). Ample research has demonstrated that the TRIM scales of negative motivations (i.e., Avoidance and Revenge) have good convergent and discriminant validity (McCullough et al., 1998; Santelli et al., 2009). See Appendix B for all measures. Finally, participants provided their demographic information. Upon completion of the experiment, participants were thanked and debriefed.

Discussion

The findings of Study 1 support our proposed theorizing. First, when consumers are experiencing a specific emotion, emotional communication is more effective than non-emotional communication. Consistent with van der Meer and Verhoeven's (2014) research, a lack of emotion in a company's response may be interpreted by consumers as a signal: an emotionless response implies the absence of organizational involvement and sincerity and may be perceived as cold. Although the viewing times in the no-emotion condition and the "fit" emotion condition did not differ significantly (e.g., anger-guilt vs. anger-no emotion), forgiveness still differed between the two conditions. Second, and of central importance, the results demonstrate the interaction between consumer emotion and the company's emotional frame. The two dependent variables together provide convergent evidence in support of the conceptualization: relative to emotion non-fit scenarios (i.e., an angry consumer reading a shame-framing communication and a fearful consumer reading a guilt-framing communication), emotion fit scenarios (i.e., an angry consumer reading a guilt-framing communication and a fearful consumer reading a shame-framing communication) result in higher consumer forgiveness and feeling of rightness.

STUDY 2

Study 2 has two goals. The first is to examine H3 and H4, which describe the underlying mechanism of emotion fit, so we test whether regulatory fit, resulting from emotions, facilitates feeling-right and forgiveness. Second, to generalize H1 and H2, we examine a situation in which anger and fear are unrelated to the crisis. Instead of inducing anger and fear by varying the message frames, we primed participants to recall either angry or fearful events in an ostensibly unrelated experiment before the main evaluation of the emotional crisis communication.

Methods

Two hundred undergraduate students (M age = 23.11; 56% males) at a public university in China participated in this experiment in exchange for one course credit. Participants were randomly assigned to conditions in a 2 (consumer emotion: anger vs. fear) × 2 (company's emotional frame: guilt vs. shame) between-subjects design. The cover story told participants that they would take part in two unrelated studies: the first ostensibly conducted as a psychology experiment and the second as a marketing experiment. In the psychology experiment, participants were told that the study sought to understand how people recall past events.
Participants in the anger (fear) condition were asked to recall three past events that made them feel angry (fearful). Specifically, participants were instructed to write down the details of one event, in an attempt to recollect how they thought and felt during this episode. This procedure has been shown to be effective for manipulating a specific emotional state (Tiedens and Linton, 2001). After completing the recall task, participants rated the extent to which they felt angry (α = 0.94) and fearful (r = 0.66, p < 0.001), measured with anger and fear scales identical to those in Study 1. Participants then proceeded to the ostensibly unrelated marketing experiment. This task followed the same procedure as in Study 1. Participants read a fictitious news article describing the Life water Corporation making a public apology for a sub-standard mineral elements issue. Following the manipulation in Study 1, the shame-framing apology letter was entitled "An ASHAMED LETTER: Life water Corporation responds to spring water issue," whereas the guilt-framing apology letter was entitled "A GUILTY LETTER: Life water Corporation responds to spring water issue." Next, participants indicated the extent to which the company was guilty and ashamed of the crisis, their promotion and prevention motivation regarding the event, the feeling-right of the apology content, and their forgiveness of the company. These variables were measured as follows. Guilt (α = 0.90) and shame (r = 0.79, p < 0.001) were measured as in the second pretest of Study 1. By modifying the promotion/prevention scale (Lockwood et al., 2002) to fit the crisis context, promotion and prevention foci were each measured using four items (e.g., promotion focus: "in my view, the current major goal of Life water Corporate should be to take actions to solve the problem," α = 0.82; prevention focus: "in my view, the current major goal of Life water Corporate should be to avoid the further occurrence of negative issues," α = 0.80). Feeling-right was measured using two items (e.g., "to what extent do you feel that the Life water's letter is right," r = 0.45, p < 0.01; Cesario et al., 2004). Because we measured only the negative motivations of forgiveness in Study 1, here we measured forgiveness using a different scale consisting of four items (e.g., "I would forgive the Life water Corporation," α = 0.83; Finkel et al., 2002). See Appendix B for all measures. All items used scales ranging from 1 (not at all) to 7 (very much so). Finally, participants provided demographic information, after which they were debriefed and thanked.

Confirmatory Factor Analyses

We conducted confirmatory factor analyses to examine the discriminant validity of the four self-reported variables, namely promotion focus, prevention focus, feeling-right, and consumer forgiveness. As shown in Table 3, the chi-square of each alternative model was significantly larger than that of the hypothesized four-factor model, and the four-factor model was clearly superior on the other fit indices (Hu and Bentler, 1999). Thus, the four variables were empirically distinct from each other, representing four distinct constructs.

(Note to Table 3: N = 243. All alternative models were compared with the hypothesized four-factor model; all comparisons are significant at p < 0.001. Three-factor model A combined promotion focus and prevention focus; three-factor model B combined promotion focus and feeling-right; three-factor model C combined prevention focus and feeling-right; three-factor model D combined feeling-right and consumer forgiveness. The two-factor model combined promotion focus, prevention focus, and feeling-right. The single-factor model combined promotion focus, prevention focus, feeling-right, and consumer forgiveness.)

Sequential Mediation Analyses

As H3 and H4 represent cases of moderated mediation, we analyzed the mediating roles of promotion and prevention focus separately (cf. Muller et al., 2005). We expected that in each emotion condition (promotion focus for anger vs.
prevention focus for fear), the promotion/prevention focus would mediate the relationship between emotional frame and feeling-right, which would subsequently influence consumer forgiveness. First, focusing on the anger condition, bias-corrected bootstrap analyses (Model 6, 1,000 resamples; Hayes, 2013) supported the sequential mediation chain emotional frame → promotion focus → feeling-right → consumer forgiveness, indirect effect = 0.15, SE = 0.08, 95% CI [0.0407, 0.3729] (see Figure 3 for complete path coefficients and the total indirect effect), but not the prevention focus pathway emotional frame → prevention focus → feeling-right → consumer forgiveness, indirect effect = 0.00, SE = 0.03, 95% CI [−0.0488, 0.0965], thereby supporting H3. Next, we focused on the fear condition and examined the indirect effect of emotional frame on feeling-right and then consumer forgiveness through prevention focus, rather than through promotion focus. Of central importance, the bias-corrected bootstrap analyses (Model 6, 1,000 resamples) supported the sequential mediation chain through the prevention focus pathway: emotional frame → prevention focus → feeling-right → consumer forgiveness, indirect effect = −0.03, SE = 0.02, 95% CI [−0.1008, −0.0047] (see Figure 4 for complete path coefficients and the total indirect effect), but did not support the mediation chain through the promotion focus pathway, indirect effect = 0.01, SE = 0.01, 95% CI [−0.0144, 0.0534], which thus supports H4.

Discussion

The results of Study 2 support our conceptualization. First, consistent with our basic theorizing, the results provide evidence for the key linkages between anger/fear and promotion/prevention activation and between guilt/shame and promotion-/prevention-focused strategies. Second, the results replicate the patterns of the emotion fit effect found in Study 1. Third, the results support the sequential mediation, such that the emotional framings interact with consumer emotions to influence consumers' regulatory focus and hence feeling-right and forgiveness.

STUDY 3

The objective of Study 3 is to provide further process evidence for promotion and prevention focus. By manipulating promotion and prevention focus directly, we test the role of regulatory focus in the emotion fit effects using a moderation-of-process design to complement the mediation approach used in Study 2 (cf. Spencer et al., 2005). When a specific regulatory focus is made accessible, we should find that the fit between focus and emotional frame predicts forgiveness, consistent with the findings of Santelli et al. (2009). More specifically, the guilt-framing communication should result in greater forgiveness and feeling-right than the shame-framing communication
for promotion-focused consumers, while the shame-framing communication should result in greater forgiveness and feeling-right than the guilt-framing communication for prevention-focused consumers.

Methods

Two hundred and ninety-five undergraduate students (M age = 21.67; 46% males) at a public university in China participated in this experiment in exchange for 15 yuan (approximately 2.32 US dollars). Participants were randomly assigned to conditions in a 2 (consumer emotion: anger vs. fear) × 2 (regulatory focus: promotion vs. prevention) × 2 (company's emotional frame: guilt vs. shame) between-subjects design. As a cover story, participants were told that they would take part in three unrelated studies: the first a psychology experiment, the second a survey held by the Student Union, and the third a news evaluation held by the marketing department. The first task followed the same procedure as in Study 2, involving recalling an experience of either anger or fear. Participants then indicated the extent to which they felt angry (α = 0.93) and fearful (r = 0.41, p < 0.01). Proceeding to the second study, participants were told that the Student Union was collecting information on students' current circumstances. On this pretense, participants in the promotion-priming condition were asked to describe their current hopes and aspirations and why aspirations are important to people, whereas participants in the prevention-priming condition were asked to describe their current sense of duty and obligation and why obligations are important to people (cf. Higgins, 1998; Higgins and Silberman, 2001). Next, participants moved to the news evaluation task, which was designed in a manner identical to Study 2. Upon reading the news, participants indicated their feeling-right about the apology (r = 0.62, p < 0.01) and their forgiveness of the company (α = 0.89), assessed in a manner identical to Study 2. Finally, participants provided demographic information and were thanked and debriefed.

Manipulation Check

Participants in the anger condition experienced greater anger than those in the fear condition (M's = 4.46 vs. 3.77, respectively), F(1,293) = 11.87, p < 0.01, whereas participants in the fear condition experienced greater fear than those in the anger condition (M's = 4.40 vs. 3.59, respectively), F(1,293) = 24.45, p < 0.001, confirming that the emotion manipulation was successful.

Feeling-Right

Consistent with the patterns for consumer forgiveness, a 2 (regulatory focus) × 2 (consumer emotion) × 2 (emotional frame) ANOVA on feeling-right showed a significant two-way interaction between regulatory focus and emotional frame, F(1,287) = 27.42, p < 0.001, ηp² = 0.09. This interaction held regardless of emotion: it was significant in the anger condition, F(1,145) = 14.31, p < 0.001, ηp² = 0.09, and in the fear condition, F(1,142) = 13.14, p < 0.001, ηp² = 0.09. No other interactions and none of the three main effects were significant (ps > 0.09).

Discussion

The findings of Study 3 show that the activation of regulatory focus drives the effects of emotion fit on forgiveness. When a specific regulatory focus is activated, regardless of consumer emotion, the fit between regulatory focus and emotional frame predicts feeling-right and forgiveness (a computational sketch of the serial mediation logic behind these analyses appears below).
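For readers who want the mechanics behind the serial mediation referenced above, the following is a minimal bootstrap sketch of a two-mediator serial chain (frame → regulatory focus → feeling-right → forgiveness) in the spirit of Hayes's PROCESS Model 6. The simulated data, variable names, and the percentile (rather than bias-corrected) confidence interval are simplifying assumptions, not the studies' actual analysis code.

```python
# Bootstrap sketch of a serial indirect effect X -> M1 -> M2 -> Y,
# estimated as a1 * d21 * b2 from three OLS regressions (cf. Hayes Model 6).
# All data below are simulated placeholders.
import numpy as np

rng = np.random.default_rng(0)
n = 100
X = rng.integers(0, 2, n).astype(float)    # 0 = shame frame, 1 = guilt frame
M1 = 0.6 * X + rng.normal(size=n)          # e.g., promotion focus
M2 = 0.5 * M1 + rng.normal(size=n)         # feeling-right
Y = 0.7 * M2 + rng.normal(size=n)          # forgiveness

def ols_coef(y, predictors):
    """Coefficient on the last predictor from an OLS fit with intercept."""
    design = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta[-1]

def serial_indirect(idx):
    x, m1, m2, y = X[idx], M1[idx], M2[idx], Y[idx]
    a1 = ols_coef(m1, [x])           # X -> M1
    d21 = ols_coef(m2, [x, m1])      # M1 -> M2, controlling for X
    b2 = ols_coef(y, [x, m1, m2])    # M2 -> Y, controlling for X and M1
    return a1 * d21 * b2

boot = np.array([serial_indirect(rng.integers(0, n, n)) for _ in range(1000)])
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5])
point = serial_indirect(np.arange(n))
print(f"indirect effect = {point:.3f}, 95% CI [{ci_lo:.3f}, {ci_hi:.3f}]")
```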
That is, promotion focus drives the effects of the guilt frame on forgiveness and feeling-right, while prevention focus underlies the effects of the shame frame on forgiveness and feeling-right. Together with the findings of Study 2, these results converge in support of the regulatory focus mechanism in emotion fit.

GENERAL DISCUSSION

The current research suggests that in corporate crises, the company's emotional frame interacts with consumer emotion to determine consumer forgiveness. In general, this research combines three research streams (anger/fear, guilt/shame, and regulatory focus) into a comprehensive framework for how and why emotional frames impact consumer forgiveness. Foremost, the effect of the emotional frame (guilt-framing vs. shame-framing) on consumer forgiveness depends on consumer emotion, that is, whether consumers feel angry or fearful. Across three experiments, we show that for consumers who are experiencing anger, guilt-framing communication results in greater forgiveness than shame-framing communication, whereas for consumers who are experiencing fear, shame-framing communication results in greater forgiveness than guilt-framing communication (Studies 1 and 2). In addition, the emotion fit effects also hold when the consumer's emotion is unrelated to the crisis (Studies 2 and 3). Further, we pinpoint the specific mechanisms underlying these effects. Specifically, drawing on regulatory focus theory, we show that consumers feeling anger (fear) prefer promotion-focused (prevention-focused) strategies, which are enacted by guilt-framing (shame-framing) communication, thereby enhancing their feeling-right and then their forgiveness (Studies 2 and 3).

Theoretical Contributions

First, this research contributes to the literature on crisis communication by identifying a new driver of communication effectiveness, namely guilt/shame emotional frames. Prior empirical investigations of crisis communication have mainly focused on what the company says in the communication (Kim et al., 2004; Tsarenko and Tojib, 2015) and when the company responds to the crisis (Frantz and Bennigson, 2005). Motivated by the idea that "attitude is everything," scholars have increasingly recognized the important role of emotion in crisis communication (Li et al., 2014). Claeys and Cauberghe (2014) found that a crisis response with an emotional appeal influences consumers' interpretation of the crisis, which may subsequently affect consumer forgiveness. In addition, Kim and Cameron (2011) showed that the presence (vs. absence) of a suitable emotional communication increases consumer trust. This series of studies demonstrated the importance of emotional appeals but left underexplored the question of which specific emotion a company should frame and express in its crisis communication. A recent study by ten Brinke and Adams (2015) indicated that corporate apologies expressing a negative emotion (i.e., sadness) positively affect perceived sincerity, as opposed to apologies expressing a positive emotion (i.e., happiness). However, this valence-based approach cannot account for the distinct effects of emotions similar in valence. Both guilt and shame are negative, self-conscious emotions; nevertheless, owing to their distinct behavioral implications (Dearing et al., 2005; Agrawal and Duhachek, 2010; Duhachek et al., 2012; Tangney et al., 2014), we expected and identified distinct effects of guilt-framing and shame-framing on consumer forgiveness.
To the best of our knowledge, this is the first research to examine the relative effectiveness of guilt-framing and shame-framing on consumer forgiveness. Next, these findings add to a burgeoning body of research on guilt and shame by identifying the underlying mechanism of promotion and prevention foci. Previous research has shown differential effects of guilt and shame on construal level (Han et al., 2014), defensive processing (Agrawal and Duhachek, 2010), coping processing (Duhachek et al., 2012), and appraisal tendencies (Han et al., 2014) for people who experience these emotions. Yet how guilt and shame frames affect consumers has received little attention. By demonstrating that the interaction effect of emotional frame and consumer emotion on forgiveness is driven by promotion/prevention focus and feeling-right, we provide a more comprehensive picture of the effects of shame/guilt frames on consumer forgiveness. Although Duhachek et al. (2012) found that problem-focused/emotion-focused coping drives the interactive effect of shame/guilt and gain/loss frames on message persuasiveness, we provide a distinct explanation based on promotion/prevention focus. That is, problem-focused coping centers on a problem-solving approach (e.g., rational thinking and action) and emotion-focused coping centers on an emotional approach to reducing stress (e.g., emotional venting), whereas both promotion and prevention foci aim at solving problems but through different routes. Furthermore, at a broader level, our work contributes to research on emotion by providing an early inquiry into emotion fit effects. We show that an emotion fit arises between consumers' emotion and a company's emotion, such that consumers' anger fits the company's guilt and consumers' fear fits the company's shame. Indeed, previous research has documented phenomena related to emotion matching. For example, Ludwig et al. (2013) proposed linguistic style match, whereby a communicator using an emotional linguistic style prefers another communicator who also uses an emotional (vs. rational) linguistic style. Moreover, Bower (1981) suggested the mood congruity effect, whereby people seek out mood-congruent others but avoid mood-incongruent others (also see Lee et al., 2013). Whereas these studies focused on whether two communicators share the same emotion (e.g., a sad person with another sad person), we investigate the fit between two different specific emotions (i.e., anger to guilt and fear to shame). Thus, the findings contribute to emotion research and open the door to subsequent investigations of further emotion fit phenomena. Lastly, the current study extends regulatory focus theory to the emotion domain. The literature on regulatory focus theory has mainly focused on the cognition domain, such as consumer goals (Lee et al., 2010), gain/loss frames (Lee and Aaker, 2004), and self-construal (Lee et al., 2000). Although Brockner and Higgins (2001) suggested that regulatory foci can influence emotions, they did not suggest that emotions can predict regulatory foci. Our results show that the basic principles of regulatory focus theory (e.g., regulatory fit), which are well established in the domains of motivation and cognition, also hold for emotions.

Practical Implications

Guilt and shame emotions are frequently used in crisis communication as a means of seeking forgiveness.
Our research provides insight into how a company's emotional messages and consumers' emotions jointly influence the effectiveness of crisis communication. When a crisis occurs, the company should think carefully about the emotional frames used in the public letter and carefully consider consumers' emotions. Two recommendations follow. First, if a crisis elicits a strong emotional response from consumers, managers should use emotional frames and tactics rather than more "rational" approaches (cf. Luo and Yu, 2015). Second, managers should try to pin down the dominant emotion among consumers, and then use the specific "fit" emotional frame to respond to the crisis. For instance, when anger is the dominant reaction to a crisis, an apology letter expressing guilt is effective in garnering consumers' forgiveness. In managing a crisis that primarily evokes fear, however, an apology letter expressing shame over the relevant actions might be more important. Although a crisis undoubtedly triggers numerous emotions, the primary emotion can be predicted because emotions are related to the nature of the crisis. According to Jin's (2010) integrated crisis mapping model, more anger is experienced as a function of perceived high crisis predictability and high crisis controllability, and more fear is experienced as a function of perceived low crisis predictability and low crisis controllability. An additional practical implication of our research lies in the finding that a specific regulatory focus drives feeling-right and forgiveness. Beyond using emotional frames in crisis communication, managers can further increase consumers' forgiveness by highlighting information associated with the corresponding regulatory focus. For example, the shame-framing apology letter can include a statement such as "protecting you against loss is our primary obligation," an example of prevention focus; the guilt-framing apology letter can include a statement such as "we strive to make world-class products that deliver the best experience possible to our customers," which represents a promotion focus. Such statements make the appropriate regulatory focus salient, thereby facilitating consumers' acceptance of the apology and their forgiveness. All in all, managers need to understand the regulatory focus activated by consumer emotions and design public letters that reinforce that specific regulatory focus to maximize the effectiveness of crisis communication.

Limitations and Future Research

The present research has several limitations that suggest a number of potential future research opportunities. Across the three experiments, we manipulated emotional frames using news articles. In reality, however, a misbehaving company may also apologize via press conferences. Because many visual characteristics could influence consumer judgment (e.g., colors, subtle expressions), we employed only content-frame manipulations to examine the hypotheses. Also, our research involved only student participants, all of whom were Chinese nationals, which may limit external validity. Future research could examine the emotion fit effect across different cultures to enhance generalizability. Next, the three experiments examined only self-reported forgiveness, which may not directly predict consumers' real behavior. However, our findings were replicated using two different scales to measure forgiveness, suggesting the robustness of the effects.
Finally, inputs into forgiveness other than the emotional frames might exert similar effects, such as the content of the communication, the company-consumer relationship, the intentionality of the crisis, the type of crisis (e.g., performance-related vs. values-related), and the severity of the crisis. These salient predictors of forgiveness are likely to present important boundary conditions for our results. Another avenue for future research is to explore other emotion fit effects. We found fit between anger and guilt and between fear and shame, yet there are many other emotions. For example, how does a consumer who is experiencing pride react to a happy vs. a sad song? Given that anxiety and anger are two common emotions embedded in negative product reviews (Yin et al., 2014), how does an anxious (vs. an angry) review influence a sad consumer's perceived helplessness? And of two donation advertisements, one highlighting hope and the other delivering love, which is more effective for a consumer who feels embarrassed? These interesting issues merit further examination. The issue of mediators was also addressed in the current research. Although Studies 2 and 3 demonstrated the mediating role of promotion/prevention focus, other factors could also operate in the causal chain predicting forgiveness. That is, does emotion also affect how consumers respond on other variables typically involved in the forgiving process? For instance, anger and fear may influence consumers' attributions about the crisis (Kim and Cameron, 2011), thereby shaping how consumers evaluate the emotional frames. The current research sheds light on only one mechanism, regulatory focus, so further research could consider other underlying mechanisms and examine to what extent these different mechanisms account for the emotion fit effects.

CONCLUSION

This paper has studied emotion fit when companies must conduct crisis communication. More specifically, we examined the interaction between the emotional frames of guilt and shame and the consumer emotions of anger and fear, and its effect on consumer forgiveness. Guilt-framing communication results in higher forgiveness than shame-framing for angry consumers, whereas shame-framing communication results in higher forgiveness than guilt-framing for fearful consumers. Compared with emotion non-fit (i.e., anger to shame and fear to guilt), emotion fit (i.e., anger to guilt and fear to shame) facilitates greater feeling-right and consumer forgiveness. This means that managers should try to pin down the dominant emotion among consumers and then use the specific "fit" emotional frame to respond to the crisis. The findings offer novel insights for the extant literature on crisis communication, emotion, and regulatory focus theory. They further underscore the importance of understanding different types of emotion and the responses they can elicit (Li et al., 2014), as well as provide practical suggestions regarding emotional frames.
lncRNA-HEIM Facilitated Liver Fibrosis by Up-Regulating TGF-β Expression in Long-Term Outcome of Chronic Hepatitis B

Background

Chronic liver fibrosis is an inevitable stage in the progression of chronic hepatitis B (CHB). However, anti-fibrotic therapies have so far been unsuccessful. The biological functions and molecular mechanisms of long non-coding RNAs (lncRNAs) in the host immune system during chronic hepatitis B virus (HBV) infection, especially in fibrosis, are still largely unknown.

Method

The total RNA of peripheral blood mononuclear cells (PBMCs) from asymptomatic carriers (ASCs) or CHB patients receiving at least 8 years of anti-viral treatment was analyzed using an Arraystar microarray and validated via quantitative real-time PCR (qRT-PCR). Correlation analysis was conducted based on correlation coefficients, clusterProfiler, and the RNA Interactome Database (RAID). The functions of the lncRNA in monocytes were determined via loss-of-function RNAi or gain-of-function lentivirus assays. The expression levels of mRNAs or proteins were evaluated using qRT-PCR, western blotting, or enzyme-linked immunosorbent assay (ELISA).

Results

A total of 1,042 mRNA transcripts (630 up-regulated and 412 down-regulated) were identified as differentially expressed between ASC and CHB patients. Through enrichment analysis, we focused on the transforming growth factor beta (TGF-β) signaling pathway and validated the expression of its members in a larger cohort. Moreover, we found that lncRNA ENST00000519726 (lncRNA-HEIM) was highly expressed in monocytes and further up-regulated upon HBV infection. LncRNA-HEIM played an important role in CHB patients with long-term antiviral treatment, and its elevated expression was remarkably correlated with the TGF-β signaling pathway, especially with its two members TGF-β and SMAD4. Furthermore, altering the endogenous lncRNA-HEIM level in monocytes significantly affected the production of TGF-β, as well as the fibrosis of hepatic stellate cells, by affecting the expression of collagen I and α-smooth muscle actin (α-SMA).

Conclusion

These findings not only add to the understanding of the role that lncRNA-HEIM plays in the activation of hepatic stellate cells (HSCs) in CHB patients under long-term medication, but also provide a promising therapeutic target for the future treatment of liver fibrosis.

INTRODUCTION

Viral hepatitis is the seventh leading cause of mortality worldwide, responsible for 1.4 million deaths per year (1). In particular, chronic hepatitis B virus (HBV) infection can lead to immune-mediated liver damage, which can further progress to cirrhosis and hepatocellular carcinoma (HCC), and remains incurable because of the persistence of HBV covalently closed circular DNA (cccDNA) in hepatocytes during infection (2). As a common outcome of chronic HBV infection, patients with progressive liver fibrosis are at risk of developing cirrhosis and liver cancer, and blocking the progression of liver fibrosis is therefore key to treating chronic HBV infection (3). Despite emerging non-invasive methods, including serum biomarker algorithms and transient elastography, percutaneous liver biopsy remains the gold standard for the staging of fibrosis, given its advantages in reliability and accuracy despite risk factors for error (4). Although a considerable number of therapeutic targets in liver fibrosis have been identified, clinical trials of anti-fibrotic therapies have been unsuccessful so far (5).
Non-invasive methods that are accurate and reliable for the diagnosis of early liver fibrosis are urgently needed for the treatment of CHB patients, as are therapies that can effectively block the progression of fibrosis. Chronic HBV infection is a complex process involving diverse factors and interactions between the host immune system and the virus (6). As the first step in the cascade of malignant progression, liver fibrosis is preceded by inflammation, in which elements of both the innate and adaptive immune systems are pivotal regulators. Among these immune cells, macrophages exert critical functions ranging from eliminating pathogens to maintaining immunological tolerance, as well as initiating and perpetuating inflammation in response to injury, promoting liver fibrosis through activating hepatic stellate cells, and resolving inflammation and fibrosis by degrading extracellular matrix and releasing anti-inflammatory cytokines (7). The cellular heterogeneity of hepatic macrophages and their central role in the pathogenesis of chronic liver injury make them a promising target for combating fibrosis. Recently, accumulating reports have suggested the existence of a novel class of RNAs named long non-coding RNAs (lncRNAs), defined as transcripts longer than 200 nucleotides that lack protein translation capability. The expression pattern of lncRNAs can be highly specific among cells or tissues, and the dysregulation of lncRNAs is significantly correlated with different diseases, including multiple cancers (8-10). In viral infectious diseases, increasing research attention has been paid to the functions of lncRNAs in the host immune response against viral infection (11). Hao et al. performed microarray analysis on liver tissues and identified a total of 203 differentially expressed lncRNAs in patients with chronic HBV infection, most of which might be involved in cytokine-cytokine receptor interaction and varied biotransformation processes including fatty acid metabolism, amino acid metabolism, carbon metabolism, and drug metabolism (12). Nevertheless, the biological functions as well as the molecular mechanisms of lncRNAs in the host immune system during HBV infection remain largely unknown. In this study, we identified an unreported lncRNA, ENST00000519726, with high expression in monocytes and named it lncRNA Highly Expressed In Monocytes (lncRNA-HEIM). The expression of lncRNA-HEIM was up-regulated in the monocytes of CHB patients with long-term antiviral treatment, leading to the subsequent elevation of TGF-β and SMAD4. The increased expression of TGF-β could further promote the fibrosis of hepatic stellate cells. These findings expand the understanding of lncRNAs' roles in the immune response of long-term CHB patients, providing promising targets for future research on and treatment of patients with chronic HBV infection.

Ethics and Blood Sample Collection

Human blood samples from asymptomatic carrier (ASC) or chronic hepatitis B (CHB) patients or healthy volunteers were obtained from donors with their informed consent. This study and the usage of these blood samples were approved by the Ethics Committee of the First Affiliated Hospital, College of Medicine, Zhejiang University (Ethics number: 2018-970).
Fifteen healthy volunteers (seven females and eight males) were randomly recruited from those who visited the First Affiliated Hospital of Zhejiang University for physical examinations in 2018 and 2021, with an average age of 37.9 (median age 38, range 28-47). A total of twelve CHB patients and eighteen ASC patients were recruited from the First Affiliated Hospital, College of Medicine, Zhejiang University, between 2017 and 2018. The CHB patients had received anti-viral therapy with entecavir for eight years and had normal levels of the major hepatic function indicators aspartate aminotransferase (AST) and alanine aminotransferase (ALT). No observable radiological liver fibrosis was found in the CHB or ASC patients. The summarized patient information is listed in Supplementary Table S1.

RNA Preparation and Quantification of PBMCs

About 5 ml of blood was collected from each donor, and the PBMCs were isolated using standard density-gradient centrifugation on Ficoll-Paque [MultiSciences (Lianke) Biotech, China]. In particular, CD14+ monocytes were isolated from PBMCs of either healthy donors or CHB patients using the EasySep human CD14 positive selection kit II (Stemcell Tech, Cambridge, MA) following the manufacturer's instructions. In brief, PBMCs were resuspended at 1 × 10^8 cells/ml in EasySep buffer. The isolation cocktail mix and magnetic particles from the kit were added to the cells, and a series of incubation and centrifugation steps were performed. Finally, CD14+ monocytes were isolated, and the remaining cells were CD14− cells. CD19+ B lymphocytes, CD3+ T lymphocytes, and natural killer (NK) cells were isolated from the PBMCs of HDs using the EasySep human CD19 positive selection kit II (Stemcell Tech), the EasySep human CD3 positive selection kit II (Stemcell Tech), and the EasySep human NK cell isolation kit (Stemcell Tech), respectively, following the manufacturer's instructions. Total RNA was isolated using RNAiso Plus (Takara Bio, Dalian, China) according to the manufacturer's instructions. The concentration of RNA was determined with a NanoDrop One (Thermo Fisher Scientific, MA) and diluted using DEPC-treated water. RNA was reverse-transcribed into cDNA using PrimeScript RT Master Mix (Perfect Real Time) (Takara Bio, Dalian, China) and quantitatively amplified on a QuantStudio 5 (Applied Biosystems, CA) using SYBR Premix Ex Taq II (Perfect Real Time) (Takara Bio, Dalian, China), following the manufacturer's instructions. All primers for quantitative PCR (shown in Supplementary Table S2) were synthesized by Sangon Biotech (Shanghai). Data were normalized using glyceraldehyde 3-phosphate dehydrogenase (GAPDH) as the internal control and calculated using the 2^-ΔΔCt method.

Microarray Analysis

Sample preparation and microarray hybridization of PBMC RNA from ASC or CHB patients with long-term antiviral treatment were conducted by KangChen Bio-tech (Shanghai, China). Briefly, lncRNAs and mRNAs were purified from total RNA after removal of rRNA (mRNA-ONLY™ Eukaryotic mRNA Isolation Kit, Epicentre). Each sample was then amplified and transcribed into fluorescent cRNA along the entire length of the transcripts without 3′ bias utilizing a random priming method (Arraystar Flash RNA Labeling Kit, Arraystar). Each labeled cRNA was fragmented and hybridized on the Human LncRNA Microarray V4.0 (Arraystar; containing 40,173 annotated lncRNAs or lncRNAs of high confidence, as well as an entire collection of 20,730 protein-coding mRNAs). After washing, the hybridized arrays were scanned using the Agilent DNA Microarray Scanner.
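As a concrete illustration of the 2^-ΔΔCt relative quantification described above (with GAPDH as the internal control, as in this study), here is a minimal sketch; the Ct values are invented for illustration only.

```python
# Relative expression via the 2^-ΔΔCt method, normalizing a target gene to
# GAPDH and comparing a sample against a reference sample. Ct values are
# invented placeholders, not measured data.
def relative_expression(ct_target, ct_gapdh, ct_target_ref, ct_gapdh_ref):
    delta_ct_sample = ct_target - ct_gapdh        # normalize sample to GAPDH
    delta_ct_ref = ct_target_ref - ct_gapdh_ref   # normalize reference
    delta_delta_ct = delta_ct_sample - delta_ct_ref
    return 2.0 ** (-delta_delta_ct)

# e.g., a hypothetical lncRNA-HEIM measurement: CHB sample vs. ASC reference
print(relative_expression(ct_target=24.1, ct_gapdh=18.0,
                          ct_target_ref=26.3, ct_gapdh_ref=18.2))  # -> 4.0-fold
```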
Data processing was performed with the GeneSpring GX v12.1 software package (Agilent Technologies), and probes of high quality with raw intensity greater than 40 were chosen for further analysis. Differentially expressed mRNAs and lncRNAs with statistical significance between the two groups were identified through P-value/FDR filtering and fold-change filtering, with cutoffs of P-value/FDR < 0.01 and fold change > 2 (up- or down-regulated). KEGG pathway analysis was applied to determine the roles these differentially expressed mRNAs play in biological pathways. Hierarchical clustering and combined analyses were performed using in-house scripts. The correlation analysis between lncRNAs and mRNAs was conducted as follows. The up-regulated mRNAs were subjected to KEGG annotation using the R package clusterProfiler. Candidate lncRNAs whose interaction score with the target gene was >0.95 based on the RNA Interactome Database (RAID) v3.0 and whose Pearson correlation coefficient satisfied |cor| > 0.7 were selected for further analysis. The networkD3 and Hmisc R packages were used to depict the alluvial diagram of the correlations between lncRNAs, mRNAs, and the corresponding KEGG pathways.

Cell Culture

The human monocyte cell line THP-1 and the human hepatic stellate cell line LX-2 were obtained from the State Key Laboratory for Diagnosis and Treatment of Infectious Diseases. The cells were cultured in either RPMI-1640 (for THP-1) or high-glucose DMEM (Dulbecco's modified Eagle's medium, for LX-2) supplemented with fetal bovine serum (FBS) to a final concentration of 10% and antibiotics, at 37°C with 5% CO2. THP-1 cells were stimulated with 100 ng/ml PMA (phorbol-12-myristate-13-acetate) for 24 h unless indicated otherwise. LX-2 cells were seeded into Corning Costar Transwell plates (100,000 cells per well) in a co-culture system with THP-1 cells.

Oligonucleotide Transduction

Small interfering RNAs (siRNAs) targeting lncRNA-HEIM were designed and synthesized by GenePharma (Shanghai, China); their sequences are listed in Supplementary Table S3. Cells were transfected with 60 nM siRNAs at 50% confluence using Lipofectamine 2000 (Invitrogen) and harvested for analysis or co-culture assays 24 h after transfection.

Lentivirus Infection

Recombinant lentiviral particles expressing lncRNA-HEIM or a negative control were obtained from Hangzhou Baixi Biotech (Hangzhou, China). THP-1 cells were infected with lentivirus at a multiplicity of infection (MOI) of 50 for 24 h, together with 10 μg/ml polybrene (Genechem, Shanghai, China). Cells stably expressing lncRNA-HEIM or the negative control were selected with blasticidin S HCl (Gibco, USA). Expression levels were determined via qRT-PCR.

Cytoplasmic and Nuclear RNA Purification

Cytoplasmic and nuclear RNAs were isolated and purified using the RNeasy midi kit (Qiagen, Valencia, CA) following the manufacturer's instructions. Briefly, cells were lysed using RLN buffer on ice, and the lysates were centrifuged to separate the nuclei from the cytoplasm. Cytoplasmic and nuclear RNAs were purified separately through RNeasy spin columns and dissolved in DEPC-treated water. Relative RNA expression levels were determined via qRT-PCR.

ELISA Assays

A commercial ELISA kit for TGF-β1 (Invitrogen, Human/Mouse TGF beta 1 Uncoated ELISA #88-50350, USA) was employed to measure the concentration of this cytokine in patient plasma and cell culture supernatants according to the manufacturer's instructions.
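Returning to the screening criteria in the Microarray Analysis subsection above, the sketch below illustrates the two filtering steps: differential expression filtering (P-value/FDR < 0.01 and fold change > 2, up- or down-regulated) and lncRNA-mRNA co-expression screening at Pearson |cor| > 0.7. The data frames, column names, and the signed fold-change convention are assumptions for illustration, not the study's actual pipeline.

```python
# Sketch of the two screening steps: (1) keep transcripts with FDR < 0.01 and
# |fold change| > 2 (assuming down-regulation is encoded as a negative sign),
# and (2) collect lncRNA-mRNA pairs whose expression across samples has
# |Pearson r| > 0.7. Data frame layouts are hypothetical.
import pandas as pd
from scipy.stats import pearsonr

def filter_differential(df: pd.DataFrame) -> pd.DataFrame:
    """df has one row per transcript with 'fdr' and signed 'fold_change'."""
    return df[(df["fdr"] < 0.01) & (df["fold_change"].abs() > 2)]

def correlated_pairs(lnc_expr: pd.DataFrame, mrna_expr: pd.DataFrame,
                     threshold: float = 0.7):
    """Rows are transcripts, columns are samples (same order in both frames)."""
    pairs = []
    for lnc_id, lnc_vals in lnc_expr.iterrows():
        for gene_id, gene_vals in mrna_expr.iterrows():
            r, _ = pearsonr(lnc_vals, gene_vals)
            if abs(r) > threshold:
                pairs.append((lnc_id, gene_id, round(r, 3)))
    return pairs
```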
Statistical Analysis

Statistical analyses were performed using GraphPad Prism software (version 7.0). Comparisons across groups were analyzed using two-way ANOVA followed by Tukey's multiple comparisons test, and comparisons between two samples were analyzed using two-tailed t tests. In all figures with error bars, data are presented as mean ± standard deviation (SD). In all figures: N.S., not significant, p > 0.05; *p < 0.05; **p < 0.01; ***p < 0.001; ****p < 0.0001.

RNA Profiling in Long-Term Chronic Hepatitis B Patients and Asymptomatic Carriers

The PBMC transcriptomic profiles, including both messenger RNAs (mRNAs) and long non-coding RNAs (lncRNAs), were obtained from CHB patients with long-term antiviral treatment or ASCs using the Arraystar lncRNA microarray. Compared with the ASCs, we identified a total of 1,042 mRNA transcripts (630 up-regulated and 412 down-regulated, respectively) differentially expressed in CHB patients (Figure 1A). Using Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analysis, we focused on the up-regulated mRNAs and found that TGF-β (transforming growth factor beta) signaling was the only related signaling pathway among the top ten most enriched ones (Figure 1B). As an important cytokine, TGF-β plays critical roles in responses against viral infections (13), as well as in hepatic fibrosis and the development of cirrhosis (14). Since the TGF-β signaling pathway is activated in patients with chronic HBV infection (15), we verified the deregulated genes involved in a larger cohort including 12 CHB patients with long-term antiviral treatment and 18 ASCs. The results showed that the SMAD4 and TGF-β1 genes were substantially up-regulated in CHB patients (Figure 1C). In particular, the plasma TGF-β level, as well as the protein level of SMAD4 in PBMCs, was significantly elevated in CHB patients with long-term antiviral medication (Figures 1D, E), in accord with the microarray results.

Identification of Differentially Expressed lncRNAs Closely Related to the TGF-β Signaling Pathway in Long-Term CHB Patients

Although reports on the importance of lncRNAs in hepatic tumorigenesis and progression are increasing rapidly (16), their involvement in chronic hepatitis B is still largely unknown. In our microarray assay, a total of 4,718 lncRNA transcripts differentially expressed between ASCs and long-term CHB patients were identified, including 2,609 up-regulated and 2,109 down-regulated lncRNA transcripts (Figure 2A). To further illustrate their roles in chronic hepatitis B, we performed a correlation analysis between lncRNA transcripts and up-regulated mRNAs using clusterProfiler and the RNA Interactome Database (RAID v3.0). Focusing on the top 10 up-regulated KEGG pathways shown in Figure 1B, we found that the TGF-β signaling pathway (13-15), the cell cycle signaling pathway (17-19), the RNA degradation pathway (20, 21), and the ubiquitin-mediated proteolysis pathway (22, 23) have been broadly reported to be involved in the progression of viral infections including HBV infection. Therefore, we analyzed the deregulated lncRNAs correlated with up-regulated mRNAs in these four KEGG-annotated pathways and found seven lncRNAs correlated with 11 target mRNAs (Figure 2B). Furthermore, gene set variation analysis (GSVA) indicated that only lncRNA ENST00000519726 was positively correlated with SMAD4 and the TGF-β signaling pathway with statistical significance (Figure 2C).
The subsequent real-time PCR validation also showed that lncRNA ENST00000519726 was remarkably up-regulated in the long-term CHB patients (Figure 2D).

lncRNA ENST00000519726 Was Highly Expressed in Monocytes
Recently, the importance of liver macrophages (including Kupffer cells and liver monocyte-derived macrophages) during hepatitis B infection has been increasingly recognized. Their anti-inflammatory responses, favored by HBV, result in liver tolerance, activation of hepatic stellate cells, and subsequently HBV-associated liver pathologies including fibrosis, cirrhosis, and progression to hepatocellular carcinoma (24). In light of these findings, we isolated CD14+ monocytes from the PBMCs of CHB patients and examined RNA levels via real-time PCR. As depicted in Figure 3A, the expression levels of TGF-β and SMAD4 were notably higher in CD14+ monocytes compared with CD14− cells. Interestingly, lncRNA ENST00000519726 was also highly expressed in CD14+ cells from PBMCs of either healthy donors (HDs) or CHB patients (Figure 3B). We further examined the expression of ENST00000519726 in CD14+ monocytes, CD19+ B lymphocytes, CD3+ T lymphocytes, and natural killer (NK) cells isolated from the PBMCs of HDs and found that its expression in CD14+ monocytes was much higher than in the other lymphocyte populations (Supplementary Figure 1). Given these findings, lncRNA ENST00000519726 is hereafter referred to as lncRNA Highly Expressed In Monocytes (lncRNA-HEIM).

lncRNA-HEIM Promoted the Expression of TGF-β Upon Long-Term HBV Infection
To validate the correlation between lncRNA-HEIM expression and HBV infection, we co-cultured PMA-differentiated THP-1 macrophages with HepG2.2.15 supernatant, which contains intact HBV particles. Upon HBV exposure, the expression of lncRNA-HEIM in THP-1 cells began to increase 2 h after addition compared with the negative control and peaked at 8 h. Although it dropped at 16 h, the relative lncRNA-HEIM level remained higher than that of the negative control throughout the experiment. In contrast, the fold changes of SMAD4 and TGF-β expression were smaller than that of lncRNA-HEIM within the first 8 h, rising only at 16 h and later (Figure 3C). This indicated that lncRNA-HEIM might respond earlier than TGF-β signaling pathway members upon HBV exposure. As shown in Figure 3D, lncRNA-HEIM localized mainly to the cytoplasm, indicating that its biological functions might be exerted at the post-transcriptional level. To better understand the potential functions of lncRNA-HEIM in monocytes, specific targeting siRNAs were designed and transfected into THP-1 monocytes, and the siRNA with the strongest silencing effect was used in subsequent experiments (Figure 3E). Upon knock-down of HEIM, the intracellular expression levels of TGF-β and SMAD4 were significantly reduced at both the RNA and protein levels, while the protein levels of phosphorylated SMAD2 and SMAD3 showed no obvious changes (Figures 4A, B). Accordingly, the TGF-β level in the culture supernatant was also notably down-regulated upon transfection with HEIM siRNAs (Figure 4C), indicating that lncRNA-HEIM might play an important role in the TGF-β signaling pathway. To further confirm the role of HEIM in monocytes, we generated THP-1 cells stably over-expressing lncRNA-HEIM via lentiviral transduction. As demonstrated in Figure 4D, the average expression level of HEIM in stable THP-1 cells (LV-HEIM) was about 80-fold that in negative control cells (NCs).
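The fold changes reported from qRT-PCR throughout these experiments are typically derived with the 2^-ddCt method (Livak and Schmittgen); the paper does not state its exact quantification formula, so the sketch below, with illustrative Ct values and a hypothetical GAPDH reference, is an assumption rather than a description of the authors' pipeline.

def fold_change_ddct(ct_target_sample, ct_ref_sample,
                     ct_target_control, ct_ref_control):
    # Relative expression by the 2^-ddCt method; ct_* are mean
    # cycle-threshold values, with a reference gene (e.g., GAPDH)
    # normalizing input in each condition.
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Illustrative numbers only: a ddCt of about -6.3 corresponds to the
# roughly 80-fold HEIM over-expression reported for LV-HEIM cells.
print(fold_change_ddct(18.0, 17.0, 24.3, 17.0))  # ~78.8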
With the up-regulation of HEIM, the mRNA levels (Figure 4E) and protein levels (Figure 4G) of TGF-β and SMAD4 rose correspondingly in LV-HEIM cells, consistent with the results in the HEIM knock-down cells. The up-regulation of the supernatant TGF-β level in LV-HEIM cells was also statistically significant (Figure 4F), suggesting that lncRNA-HEIM could promote the TGF-β signaling pathway by up-regulating the expression of both TGF-β and SMAD4. When LX-2 cells were co-cultured with HEIM-siRNA-transfected THP-1 cells, the mRNA levels of collagen-I and α-SMA were reduced by about 30 and 50%, respectively (Figure 5A). The western blotting assay further confirmed that the protein levels of both collagen-I and α-SMA were decreased in LX-2 cells co-cultured with HEIM-siRNA-transfected THP-1 cells (Figure 5B). In contrast, when co-cultured with LV-HEIM cells, both the transcript and protein levels of collagen-I and α-SMA in LX-2 cells were substantially increased (Figures 5C, D). Using immunofluorescence staining with confocal imaging, we again observed that co-culture with LV-HEIM dramatically increased the protein levels of both collagen-I and α-SMA in LX-2 cells, in agreement with the previous assays (Figure 5E).

DISCUSSION
In this study, we evaluated the transcriptional expression of both mRNAs and lncRNAs in CHB patients with long-term antiviral treatment. By focusing on the TGF-β signaling pathway, we identified a previously unreported lncRNA, ENST00000519726 (lncRNA-HEIM). The expression of lncRNA-HEIM was up-regulated upon HBV infection, leading to increased expression of both TGF-β and SMAD4 and further facilitating the fibrogenic activation of hepatic stellate cells. Collectively, our findings enrich the understanding of the immune response during chronic HBV infection and shed light on future treatment strategies targeting lncRNAs.

It is widely accepted that persistent chronic HBV infection can cause dysfunction of innate and adaptive immune responses. HBV infection can increase the secretion of TGF-β (26) and interleukin-10 (IL-10) (27) from monocytes/macrophages, while inhibiting the secretion of tumor necrosis factor α (TNF-α) and IL-12 induced by toll-like receptor 2 (TLR2) (28, 29). Li et al. also found that in patients with chronic HBV infection, monocytes expressed significantly higher levels of anti-inflammatory cytokines (TGF-β and IL-10) than those of healthy controls. Furthermore, in vitro experiments showed that monocytes induced by hepatitis B surface antigen could educate NK cells to secrete IL-10 via PD-L1/PD-1 and HLA-E/CD94 and inhibit autologous T cell activation (30). Our findings are consistent with these previous reports. Moreover, we identified a novel lncRNA, lncRNA-HEIM, and verified its connection with TGF-β secretion in CHB patients with long-term antiviral treatment, offering new evidence for the active role of lncRNAs in the immune response to chronic HBV infection.

Liver fibrosis is the common result of the liver's healing response to chronic damage, characterized by an abnormal and excessive accumulation of extracellular matrix (ECM) constituents (31). The main mediators of fibrosis are usually hepatic stellate cells (HSCs), which can be activated and transdifferentiated into myofibroblasts (MFBs) in response to hepatic inflammation and cytokines, secreting ECM proteins (32, 33). Among these cytokines, the TGF-β family, or TGF-β1 more specifically, plays a critical role in the development of hepatic fibrosis (34).
Our data showed that increased TGF-β secretion from THP-1 monocytes upon HBV infection could promote fibrogenic marker expression in HSCs at both the mRNA and protein levels. More importantly, interfering with the endogenous level of lncRNA-HEIM significantly decreased the secretion of TGF-β and attenuated the fibrogenic activation of HSCs. These findings provide new insights into the understanding of, and potential therapeutic strategies for, chronic HBV infection; however, more work is needed to elucidate the detailed molecular mechanism linking lncRNA-HEIM and the TGF-β signaling pathway.

Evidence of a TGF-β autocrine loop in monocytes has been documented in the development and homeostasis of alveolar macrophages (35), in the proliferation, cell aggregation, and differentiation of human myelomonocytic U937 leukemia cells (36, 37), in myelofibrotic monocytes of patients with myelofibrosis (38), and elsewhere. The downstream effectors of canonical TGF-β signaling are the SMADs, including receptor-regulated SMADs (R-SMADs, i.e., SMAD2 and SMAD3), the common mediator SMAD4, and inhibitory SMADs (I-SMADs, i.e., SMAD6 and SMAD7) (39). As a critical effector of intracellular signaling, SMAD4 interacts with two R-SMAD molecules to form a heteromeric complex. This complex, presumably a trimer, is then translocated into the nucleus, where it activates transcription of defined genes. In our study, we found that HBV could significantly up-regulate the expression of SMAD4 at both the mRNA and protein levels while affecting neither total nor phosphorylated R-SMADs. This suggests that altering the expression of SMAD4 alone may be sufficient for activation of the TGF-β signaling pathway. Meanwhile, one report indicates that the HBV-encoded pX oncoprotein directly interacts with SMAD4, stabilizing the complex of SMAD4 with components of the basic transcriptional machinery and contributing to HBV-associated liver fibrosis (40). Li et al. also reported that SMAD4 can form a TGF-β1/CD147 positive feedback signaling loop by directly interacting with the CD147 promoter, modulating the activated phenotype of HSCs and promoting liver fibrosis (25). These reports provide useful leads for subsequent research into the molecular mechanism of lncRNA-HEIM. We have also analyzed lncRNA-HEIM-interacting proteins by RNA pulldown assay followed by mass spectrometry and found several interesting candidates (data not shown); we are still verifying the authenticity of these interactions. Currently, the detailed underlying molecular mechanism, including the precise role lncRNA-HEIM plays in this process, remains unclear, and future research is needed to fully understand how it acts upon HBV infection.

As an emerging research hotspot, lncRNAs are increasingly emphasized, especially in cancer research. Regarding fibrosis, accumulating evidence has revealed that lncRNAs can play both promotive and inhibitory roles in the multifaceted processes of fibrosis in various organs including the liver, heart, lung, and kidney (41). Wu et al. found that up-regulation of lncRNA metastasis-associated lung adenocarcinoma transcript 1 (MALAT1) in liver fibrosis could activate hepatic stellate cells (HSCs) by suppressing silent information regulator 1 (SIRT1) (42). Yu et al.
found that lncRNA Alu-mediated p21 transcriptional regulator (APTR) activated HSCs by negatively regulating the expression of p21, promoting the progression of liver fibrosis in a mouse model (43). Moreover, Yu et al. reported that lncRNA Homeobox transcript antisense RNA (HOTAIR) could enhance the methylation of phosphatase and tensin homolog deleted on chromosome 10 (PTEN), contributing to the progression of liver fibrosis (44). However, most of these findings focused on the roles lncRNAs play in either hepatocytes or HSCs. Here we focused on monocytes, a smaller subset of PBMCs but vital actors in the immune response against HBV infection. For the first time, the biological functions and potential molecular mechanism of lncRNA-HEIM during chronic HBV infection were documented. These findings advance the understanding of the roles lncRNAs play in the activation of HSCs and liver fibrosis in chronic HBV infection, providing a novel therapeutic target for the future treatment of chronic HBV infection.

DATA AVAILABILITY STATEMENT
The microarray data used in this research have been deposited in GEO under accession number GSE166759.

ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Ethics Committee of the First Affiliated Hospital, College of Medicine, Zhejiang University. The patients/participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS
JY, CL, and HD conceived and designed the study. CL, JJ, XZ, and TL performed the experiments. JY and FL analyzed and interpreted the data. JY and HD drafted the manuscript. JY, TL, and HD revised the manuscript. All authors contributed to the article and approved the submitted version.
High Correlation of Static First-Minute-Frame (FMF) PET Imaging after 18F-Labeled Amyloid Tracer Injection with [18F]FDG PET Imaging

Dynamic early-phase PET images acquired with radiotracers binding to fibrillar amyloid-beta (Aβ) have been shown to correlate with [18F]fluorodeoxyglucose (FDG) PET images and provide perfusion-like information. Here, the perfusion information of static PET scans acquired during the first minute after radiotracer injection (FMF, first-minute-frame) is compared to [18F]FDG PET images. FMFs of 60 patients acquired with [18F]florbetapir (FBP), [18F]flutemetamol (FMM), and [18F]florbetaben (FBB) are compared to [18F]FDG PET images. Regional standardized uptake value ratios (SUVRs) are directly compared, and intrapatient Pearson's correlation coefficients are calculated to evaluate the correlation of FMFs to their corresponding [18F]FDG PET images. Additionally, regional interpatient correlations are calculated. The intensity profiles of mean SUVRs across the study cohort (r = 0.98, p < 0.001) and intrapatient analyses (r = 0.93 ± 0.05) show strong correlations between FMFs and [18F]FDG PET images. Regional VOI-based analyses also result in high correlation coefficients. The FMF shows information similar to the cerebral metabolic patterns obtained by [18F]FDG PET imaging. Therefore, it could be an alternative to dynamic imaging of early-phase amyloid PET and be used as an additional neurodegeneration biomarker in amyloid PET studies in routine clinical practice, while being acquired at the same time as amyloid PET images.

Early-phase brain amyloid PET images have been shown to correlate positively with [18F]FDG and [15O]H2O brain PET images and to offer perfusion information. Studies investigating early-phase brain amyloid PET images mainly use [11C]PiB [9-21], [18F]FBP [22-27], and [18F]FBB [19,28-31]. In the case of [18F]FMM, the dual-time protocol has been studied [32] and the perfusion information has previously been evaluated [33], but early-phase [18F]FMM has not been compared to [18F]FDG. Dynamic brain PET scans are usually acquired during the 10 min after intravenous injection of the radiotracer. Perfusion images can be generated as the sum or average of the individual dynamic images for radiotracer-specific time frames. While dynamic scans may offer higher accuracy, static PET images are still recommended in clinical guidelines for [18F]FDG PET brain imaging and considered sufficient for diagnostic purposes [34,35]. In brain amyloid imaging, dynamic scans are not required clinically, and static scans may be used for visual and semiquantitative interpretation [36]. Moreover, the evaluated dynamic early-phase protocols would require at least an additional 5 min of scan time, while an easier-to-acquire static scan of a fixed, shorter time window might also yield perfusion data comparable to [18F]FDG.

While most studies evaluated the usefulness of early-phase amyloid PET acquisitions for patients with AD due to its relationship with Aβ, cerebral patterns of hypometabolic regions in [18F]FDG PET images can also differentiate AD from other neurodegenerative diseases such as frontotemporal dementia (FTD), dementia with Lewy bodies, and Parkinson's disease (PD) [37,38]. Kuo et al. [24] compared early-phase [18F]FBP PET images of patients with primary progressive aphasia (PPA) to AD and healthy controls, and Asghar et al.
[25] based their analysis specifically on differentiating behavioral variant FTD (bvFTD) from AD and cognitively normal elderly subjects. Lastly, Yoo et al. [39] analyzed early-phase [18F]FBB PET in patients with PD.

The aim of this study is to quantitatively evaluate the perfusion information of static brain amyloid PET images obtained in the first minute after radiotracer injection with 18F-labeled radiotracers, hereafter called FMFs (first-minute-frames), compared to the cerebral metabolism measured in [18F]FDG PET images. Therefore, a common time frame for early-phase images of the three 18F-labeled amyloid radiotracers is tested, which also does not require a dynamic acquisition protocol. Additionally, we present, to the best of our knowledge, the first study evaluating early acquisitions with [18F]FMM in comparison to [18F]FDG PET. The correlation between FMFs and [18F]FDG PET images is evaluated in a retrospective cohort of patients with cognitive impairment or dementia. Different interpatient and intrapatient correlation tests between regional quantitative values of radiotracer uptake are performed to quantitatively assess the comparability of both images.

Image Analysis
All images are preprocessed using Statistical Parametric Mapping 12 (SPM12) (Wellcome Centre for Human Neuroimaging, University College London, London, United Kingdom) [40]. First, a manual orientation is performed by rotating the images and setting the origin of the native space. Then, the images are normalized to a standard space defined by the Montreal Neurological Institute (MNI) using the corresponding CT images as an anatomical reference, as described in [41], resulting in images with a matrix size of 91 × 109 × 91 and a voxel size of 2 × 2 × 2 mm3. The spatially normalized PET images are smoothed with an 8-mm full-width-at-half-maximum Gaussian kernel. The Automated Anatomical Labelling (AAL2) atlas is then used to segment a total of 8 volumes of interest (VOIs) per brain hemisphere based on 74 cortical and subcortical brain regions (specified in Table S1 of the Supplementary Material) [42,43]. These VOIs correspond to the frontal, occipital, parietal, and temporal lobes, and the anterior (ACC) and posterior cingulate cortices (PCC). The seventh region represents the precuneus, which is analyzed separately as it is one of the core regions in typical AD and its variants (except the logopenic variant) [44,45]. Lastly, we also included the striatum in our study. Mean image intensities (activity concentration, Bq/mL) of each region are extracted and normalized to the mean intensity of a reference region using a custom MATLAB script. The resulting ratios are termed SUV ratios (SUVRs), to which they are mathematically equivalent. In this study, the atlas regions corresponding to the grey matter of the cerebellum are used as the reference for all images, as this region has been shown to be unaffected in AD [46].

Statistical Analysis
Quantitative variables are represented as mean ± standard deviation (SD). FMFs and [18F]FDG images are compared by Pearson's correlation analysis. Pearson's correlation coefficients (r) are calculated based on the SUVRs of the regions defined by the AAL2 atlas and of the grouped regions. First, the correlation between the intensity profiles of FMFs and [18F]FDG images is calculated; profiles are computed by averaging the regional SUVRs of the corresponding cohort (study cohort, and divided by radiotracer and amyloid state).
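The regional SUVR extraction described above can be sketched as follows. This is a minimal Python illustration (the study used a custom MATLAB script), and the file paths and label maps are hypothetical placeholders.

import nibabel as nib
import numpy as np

def regional_suvr(pet_path, atlas_path, voi_labels, cerebellum_labels):
    # pet_path: PET image already spatially normalized to MNI space
    # atlas_path: integer-labeled AAL2 volume on the same voxel grid
    # voi_labels: dict mapping VOI names to lists of AAL2 label IDs
    # cerebellum_labels: label IDs of cerebellar grey matter (reference)
    pet = nib.load(pet_path).get_fdata()
    atlas = nib.load(atlas_path).get_fdata().astype(int)

    ref = pet[np.isin(atlas, cerebellum_labels)].mean()
    return {voi: pet[np.isin(atlas, labels)].mean() / ref
            for voi, labels in voi_labels.items()}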
Intrapatient correlations are calculated to evaluate the comparability of the images and to assess whether FMFs present the same information as their corresponding [18F]FDG images. Differences between radiotracers are evaluated by Kruskal-Wallis tests, and subcohort-specific differences between Aβ+ and Aβ− groups by Mann-Whitney U-tests. Additionally, interpatient correlations of regional SUVRs are calculated. Correlations are computed in the whole study group, as well as separately by amyloid state and amyloid radiotracer, to account for the effect these might have on the FMFs. Differences in regional correlation coefficients are evaluated by Friedman's test and post hoc pair-wise tests with Bonferroni-adjusted α values. The Wilcoxon signed-rank test is used to evaluate subcohort-specific differences in regional correlation coefficients between Aβ+ and Aβ− groups. Using the Benjamini-Hochberg procedure, the false discovery rate (FDR) is controlled at level 0.05 where necessary to adjust for multiple comparisons. Statistical analyses are performed in MATLAB R2019a (The MathWorks, Inc., Natick, MA, USA) and SPSS software version 28.0 (IBM Corp., Armonk, NY, USA).

Demographics and Image Data
A total of 60 patients (age 66.27 ± 8.26 years, female: 31) with available FMF and [18F]FDG PET scans comprised the final retrospective study cohort. The standard late-phase amyloid PET scans of 25 patients were visually diagnosed as Aβ+ and 35 as Aβ−. Fourteen subjects were clinically classified as having mild cognitive impairment (MCI) and 36 as having dementia, of whom 16 were classified as having dementia probably due to AD and 20 as having dementia possibly due to AD, based on [47]. The remaining 10 patients were classified as having other dementias: 7 patients with a differential diagnosis of FTD and 3 patients with unclassifiable primary progressive aphasia (ucPPA). All diagnoses were extracted from clinical records and reviewed for this study by neurologists and nuclear medicine specialists. Patient demographics of the study cohort, as well as of the subcohorts based on the radiotracer used, are summarized in Table 1. A visual comparison of [18F]FDG brain PET images and their corresponding FMFs for each amyloid radiotracer is exemplified in Figure 1. Overall, [18F]FDG PET and FMFs present the same patterns of normal and reduced metabolism.

SUVR Analysis
Regional SUVRs are summarized in Table 2. Analyzing specific regions shows that the lowest SUVRs are usually found in the temporal VOI and the striatum. This hypoperfusion/hypometabolism in the temporal region is consistent with patterns found in patients with AD and with the majority of the diagnoses of the patients comprising the study groups [37,38]. In contrast, the occipital VOI shows the highest mean SUVR in all subcohorts, which could be attributed mainly to the usual visual activity during the PET scan. In all cases, regional SUVRs are lower in FMFs than in [18F]FDG PET images. This trend can also be seen in the corresponding intensity profiles, which include the 74 brain regions that define the grouped VOIs (Figure 2). Moreover, the two profiles are strongly correlated, as can be seen in the correlation charts of the intensity profiles in Figure 2. The correlation coefficient is r = 0.98 (p < 0.001) for all patients and in the [18F]FBB subcohort, and r = 0.97 (p < 0.001) in the [18F]FBP and [18F]FMM subcohorts.
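A minimal sketch of the intrapatient correlation analysis with Benjamini-Hochberg adjustment might look as follows; the array layout is hypothetical, and the study actually used MATLAB and SPSS.

import numpy as np
from scipy.stats import pearsonr
from statsmodels.stats.multitest import multipletests

def intrapatient_correlations(suvr_fmf, suvr_fdg, alpha=0.05):
    # suvr_fmf, suvr_fdg: arrays of shape (n_patients, n_regions)
    # holding regional SUVRs in matching region order
    results = [pearsonr(f, g) for f, g in zip(suvr_fmf, suvr_fdg)]
    r_vals = np.array([r for r, _ in results])
    p_vals = np.array([p for _, p in results])

    # Benjamini-Hochberg control of the FDR at the given level
    reject, p_adj, _, _ = multipletests(p_vals, alpha=alpha, method="fdr_bh")
    return r_vals, p_adj, reject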
Correlation coefficients between intensity profiles in Aβ+ and Aβ− patients range from r = 0.96 (p < 0.001) to r = 0.98 (p < 0.001). Regional SUVRs, intensity profile graphs, and correlation charts with the respective correlation coefficients of the Aβ+ and Aβ− subcohorts can be found in Section S2.1 of the Supplementary Materials.

Intrapatient Correlation Analysis
Intrapatient correlation coefficients are summarized in Table S1 of the Supplementary Material (left hemisphere followed by right hemisphere).

Interpatient Correlation Analysis
Regional interpatient correlation coefficients of the grouped VOIs are summarized in Table 4. The strongest statistically significant correlation is r = −1.00 (p < 0.001), observed in the left occipital VOI of Aβ+ patients in the [18F]FBB subcohort. However, given the small number of patients (N = 3), we do not consider this result representative. In the [18F]FBB subcohort, the correlation coefficients for the subcohort as a whole (mean r = 0.77 ± 0.26), and after separating it by amyloid state, are naturally lower and often not statistically significant or representative. Comparing the first three groups (study cohort, [18F]FBP subcohort, and [18F]FMM subcohort) without separating by amyloid state, the highest correlation coefficient is observed in the right temporal VOI of the [18F]FBP subcohort (r = 0.94, p < 0.001). Strong and statistically significant correlations are observed in all brain regions for the study cohort (mean r = 0.82 ± 0.06) as well as for the [18F]FBP (mean r = 0.86 ± 0.08) and [18F]FMM (mean r = 0.78 ± 0.10) subcohorts. No statistically significant differences are observed between the regional correlation coefficients of all patients across the three radiotracers (p = 0.126).

Discussion
The correlation between the FMF, a static brain PET image corresponding to the first post-injection (p.i.) minute of an 18F-labeled Aβ-binding radiotracer, and [18F]FDG PET images is investigated in this study. Previously, dynamic PET images based on Aβ-binding radiotracers were acquired during the first 10 min p.i., and perfusion-like early-phase amyloid PET images were then computed from the dynamic data and radiotracer-specific time frames, usually by averaging or summing the specific frames [10-15,17,18,20-26,28,29,33]. The FMF is therefore studied as a more widely available alternative with a common time frame for the three 18F-labeled Aβ-binding radiotracers, which is also easier to acquire and has higher usability in routine clinical practice. Additionally, the possibility of studying both neurodegeneration and Aβ deposition with one radiotracer injection in one scan session would be clinically highly beneficial. In this study, FMFs were acquired with the three approved and commercially available 18F-labeled Aβ-binding radiotracers. While early-phase acquisitions with [18F]FBB have been compared to [18F]FDG [19,28-31], no study comparing early-phase amyloid PET acquisitions with [18F]FMM to [18F]FDG PET images was found during the writing of this manuscript. However, the optimal time windows for the acquisition of dual-phase [18F]FMM PET images are being investigated [32], in addition to its perfusion information [33]. The aim was also to study the comparability of the FMF to [18F]FDG PET images independently of cognitive stage, using a study cohort of patients spanning a broad spectrum.
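As a sketch of how a perfusion-like early-phase image is derived from a dynamic series (and how the FMF reduces to the special case of a single 0-60 s window), the following Python fragment time-weights and averages dynamic frames. The array layout and frame-timing variables are hypothetical; in this study, FMFs were acquired directly as static scans rather than reconstructed from dynamic data.

import numpy as np

def early_phase_image(dynamic, frame_starts, frame_ends, t0=0.0, t1=60.0):
    # dynamic: 4D array (x, y, z, frame) of activity concentration (Bq/mL)
    # frame_starts, frame_ends: per-frame start/end times in seconds
    # Returns the time-weighted mean over [t0, t1]; with t0=0 and t1=60
    # this mimics a static first-minute-frame (FMF).
    overlap = np.minimum(frame_ends, t1) - np.maximum(frame_starts, t0)
    overlap = np.clip(overlap, 0.0, None)
    weights = overlap / overlap.sum()
    return np.tensordot(dynamic, weights, axes=([3], [0]))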
Subjects from routine clinical practice with different preliminary diagnoses, classified as probable AD or possible AD as defined in [47], differential diagnosis with FTD, and other types of dementia, were included. However, future studies could focus on specific conditions, as has been done for PPA [24] and FTD [25], or on other diseases such as PD [39]. SUVR analyses revealed that while [18F]FDG PET images present higher mean SUVRs in all regions, the intensity profiles show high similarity and correlations of up to r = 0.98 (p < 0.001) (Figure 2). This is consistent with previous studies comparing early-phase [11C]PiB [11,15,20], [18F]FBP [22], or [18F]FBB [28,29] to [18F]FDG PET images. Segovia et al. [29] specifically analyzed the intensity profiles of early-phase [18F]FBB and [18F]FDG PET images and found similar mean regional intensity values for both images. Lin et al. [23] evaluated global SUVR values of early-phase [18F]FBP PET images of patients with different stages of MCI and AD. The authors report a progressive decrease of the global SUVR at more advanced stages, indicating more extensive regions of hypoperfusion. This parallels [18F]FDG PET images, where the global SUVR decreases with the extent of the hypometabolic regions.

The high comparability of the FMF and the patients' corresponding [18F]FDG PET images is confirmed in the intrapatient analysis. The mean correlation coefficients of the different studied cohorts reach as high as r = 0.95 ± 0.05. This is in agreement with other studies evaluating within-subject correlations between early-phase amyloid and [18F]FDG PET images [10,23,25]. In all cases, the intrapatient correlation coefficients are comparable to or higher than those reported previously.

Visual analysis of [18F]FDG brain PET images is generally based on the evaluation of radiotracer uptake patterns in anatomo-functional cortical regions, and meta-VOIs are created from more detailed brain parcellations for regional quantitative analysis [48]. The majority of studies comparing early-phase amyloid and [18F]FDG PET images are based on a meta-VOI analysis. Regional interpatient correlation analyses of the 16 grouped VOIs of this study result in overall strong correlation values between FMFs and [18F]FDG PET images. The parietal, posterior cingulate cortex, precuneus, and temporal VOIs are of special interest as specific core regions for the diagnosis of AD, which are also affected in other types of dementia evaluated in routine clinical practice [37,38,44]. Correlation between FMFs and [18F]FDG PET is usually higher than r = 0.80 in these regions, with a few exceptions. In the case of early-phase acquisitions with [18F]FBP, Hsiao et al. [23] reported correlation coefficients of up to r = 0.93 in the superior temporal region, compared to r = 0.94 in the right temporal VOI in the corresponding subcohort of our study. Other studies evaluated imaging characteristics of early-phase [18F]FBP PET images specifically in patients with PPA [24] or bvFTD [25] to demonstrate the viability of early-phase brain amyloid PET images in neurodegenerative diseases other than purely AD. Evaluating FMFs acquired with the different Aβ-binding tracers, the correlation values obtained in our study are comparable, ranging from r = 0.71 in the left occipital VOI to r = 0.92 in the striatum. Tiepolt et al.
[19] analyzed a mixed study group with early-phase [18F]FBB and [11C]PiB PET images, yielding correlation values ranging from r = 0.61 in the frontal region to r = 0.79 in the posterior cingulate cortex. In another study employing [18F]FBB, Daerr et al. [19] obtained correlation values between r = 0.59 and r = 0.86 with a time frame of 0-5 min and cerebellar normalization. The authors achieved higher correlation values with global mean normalization and after separating Aβ-positive from Aβ-negative cases. Such results are not directly comparable to ours due to the small sample size of the [18F]FBB subcohort in our study.

In this study, smaller brain regions are also analyzed to evaluate the correlation between the images at a more detailed level. A detailed brain parcellation and subsequent correlation analysis was conducted by Segovia et al. [29] using early-phase [18F]FBB and [18F]FDG PET images. The authors obtained the highest correlation value in the caudate, but only the caudate and the right Heschl gyrus are shown to present correlation coefficients above r = 0.5. In general, correlation coefficients for AAL2-defined regions are higher in our study, especially considering the different sample sizes (mean in the study cohort r = 0.80 ± 0.09). While we believe that the statistically significant differences found between the [18F]FBB subcohort and the others are due to its small sample size and are thus not representative, [18F]FBP also showed higher correlation coefficients in these regions than [18F]FMM. Nevertheless, in the [18F]FMM subcohort, FMFs showed a strong correlation of intensity profiles with [18F]FDG PET (r = 0.97, p < 0.001), strong intrapatient correlations (r = 0.94 ± 0.05), and regional interpatient correlations (up to r = 0.93, p < 0.001). While the dual-time protocol and perfusion information of [18F]FMM have been studied [32,33], the correlation of early-phase [18F]FMM images to [18F]FDG had not been analyzed before, and therefore no comparisons of our results in this subcohort to previous studies could be made.

In this study, four main limitations are identified. After a preliminary visual comparison of FMFs, a static amyloid PET image of the first minute p.i. was selected for quantitative analysis as a possible alternative to [18F]FDG PET and dynamic early-phase amyloid PET images. However, in several of the above-mentioned studies, different time frames for the acquisition of perfusion amyloid PET images were identified. Therefore, static amyloid PET images of the first 10 min will additionally be evaluated, comparing them to the FMF and [18F]FDG PET, in future studies. Moreover, it should be noted that it would be desirable for FMFs to be acquired starting a standardized 5-10 s after the injection of the radiotracer to avoid a loss of initial counts in some images. It can be assumed that the quantitative correlation between FMF and [18F]FDG PET images would be slightly higher with optimal coordination between injection and acquisition time. Regarding the subject cohort, it is composed only of patients with various degrees of cognitive decline from MCI to AD, and no cognitively normal subjects were included. The main aim of this study is to assess the viability of the FMF for the diagnosis and evaluation of neurodegenerative diseases, as this is one of the main clinical applications of brain [18F]FDG PET in routine clinical practice. A healthy control group would have been of interest to complete the analysis and validation of the FMF method. Therefore, in future studies, the comparability of the FMF to [18F]FDG will be studied in cognitively normal subjects.
Another limitation of the study group concerns the absence of FMFs acquired with [11C]PiB and especially the low number of FMFs acquired with [18F]FBB, which results in statistically non-significant or non-representative results. Finally, the time between the FMF acquisition and the corresponding [18F]FDG PET images was not synchronized, with scans acquired between 2 and 334 days apart. While changes of perfusion/metabolism due to neurodegenerative diseases are slow (on the order of years), they might introduce slight differences in uptake patterns in cases where the image acquisition dates are further apart and therefore reduce the correlation coefficients.

Even though some of the presented results are conditioned by the available study cohort, the potential of the FMF and its comparability to [18F]FDG PET images is demonstrated. It is shown that in most cases the correlation between the images is not significantly affected by the radiotracer or amyloid state. Moreover, early acquisitions with [18F]FMM are shown to offer information comparable to [18F]FDG, as do [18F]FBP and [18F]FBB. Further retrospective and prospective studies will be conducted to confirm the findings and optimize the acquisition protocol. While in this study all images were acquired using a PET/CT scanner, it would be interesting to evaluate the utility of FMFs with PET/MRI. Additionally, studies evaluating visual comparability will be performed to validate the clinical usefulness of the FMF as a diagnostic tool for neurodegeneration.

Conclusions
The FMF shows quantitative similarities to [18F]FDG brain PET images, with strong correlations for the three 18F-labeled amyloid radiotracers, including [18F]FMM, whose early acquisitions are evaluated quantitatively and compared to metabolic PET for the first time in this study. The FMF protocol is a viable static PET alternative to dynamic early-phase brain amyloid PET imaging in clinical practice, providing information on neuronal injury similar to that shown by [18F]FDG PET imaging.

Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/s21155182/s1: Table S1: AAL2 brain regions that compose the grouped VOIs.

Institutional Review Board Statement: Ethical review and approval were waived for this study because it involved a retrospective image database.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Data Availability Statement: The data presented in this study are available on request from the corresponding authors. The data are not publicly available due to clinical patient information.

Conflicts of Interest: The authors declare no conflict of interest.
Carbon-sink potential of continuous alfalfa agriculture lowered by short-term nitrous oxide emission events

Alfalfa is the most widely grown forage crop worldwide and is thought to be a significant carbon sink due to high productivity, extensive root systems, and nitrogen fixation. However, these conditions may increase nitrous oxide (N2O) emissions, thus lowering the climate change mitigation potential. We used a suite of long-term automated instrumentation and satellite imagery to quantify patterns and drivers of greenhouse gas fluxes in a continuous alfalfa agroecosystem in California. We show that this continuous alfalfa system was a large N2O source (624 ± 28 mg N2O m−2 y−1), offsetting the ecosystem carbon (carbon dioxide (CO2) and methane (CH4)) sink by up to 14% annually. Short-term N2O emission events (i.e., hot moments) accounted for ≤1% of measurements but up to 57% of annual emissions. Seasonal and daily trends in rainfall and irrigation were the primary drivers of hot moments of N2O emissions. Significant coherence between satellite-derived photosynthetic activity and N2O fluxes suggested plant activity was an important driver of background emissions. Combined data show annual N2O emissions can significantly lower the carbon-sink potential of continuous alfalfa agriculture.

Ln 549: Repository information needed (e.g. website link or citation).

Figures 1, 2, and 4 contain data that is too inter-connected and sometimes redundant to be stand-alone figures. It is not immediately clear to me why they are separate, so I suggest combining into one seven-panel figure. Alternatively, I would suggest pulling soil NH4+ and NO3- data from Figures 4 and 2, respectively, into their own figure, since these were not sensor-based measurements and took place over a shorter time period, and combining all sensor-based measurements.

Reviewer #2 (Remarks to the Author):
The potential offsetting of soil carbon sequestration through N2O emissions from an irrigated alfalfa crop is the focus of this study. Eddy covariance measurements were conducted over 4 years, capturing several emission events (hot moments) associated with irrigation. The methodology is sound, results are clearly presented for the most part, and the manuscript is well written. However, I have two main concerns: 1) the interpretation of the N2O flux results and its generalization to 'alfalfa agriculture'; 2) a lack of clear explanation of what the 'novel sensor suite' is exactly, and how it provided integrated information that improved our understanding of GHG fluxes. I am providing detailed comments on each of these concerns below.

1) The interpretation of the N2O flux results and its generalization to 'alfalfa agriculture'
The hypothesis is "that elevated NO3 concentrations and irrigation during the growing season would stimulate hot moments of N2O emission, offsetting a significant portion of the net CO2-equivalent (CO2e) sink." Alfalfa is a perennial legume crop, and hence nitrate levels would be expected to be minimized during the growing season in comparison to annual crops receiving N inputs in the form of manure or synthetic fertilizer, such as corn or wheat. In fact, increased use of perennials is recognized as an N2O-mitigating practice. I was at first surprised by the hypothesis statement, but the results of high N2O emissions were corroborated with measurements, surely indicating elevated NO3 levels likely from soil organic matter mineralization.
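To make the hot-moment statistic in the abstract concrete (≤1% of observations carrying up to 57% of annual emissions), a minimal sketch of how such a contribution can be computed from a continuous flux record is shown below; the flux series here is synthetic and purely illustrative, not the study's data.

import numpy as np

def hot_moment_share(flux, pct=99.0):
    # Fraction of cumulative emission carried by observations above
    # the given percentile of the flux distribution; flux is a 1D
    # array of equally spaced (e.g., half-hourly) N2O fluxes.
    threshold = np.percentile(flux, pct)
    return flux[flux > threshold].sum() / flux.sum()

# Synthetic heavy-tailed record: one year of half-hourly fluxes in
# which the top 1% of observations carries a large share of the total.
rng = np.random.default_rng(0)
flux = rng.lognormal(mean=-2.0, sigma=1.8, size=17520)
print(f"top 1% of observations = {hot_moment_share(flux):.0%} of annual total")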
A closer look at the site's soil characteristics, such as C levels, indicates this may be an organic soil (or at least one with very high levels of C); such soils are well known to have very high N2O emissions. In fact, Pärn et al. (2018) identify N-rich organic soils under well-drained conditions as global N2O hot spots. Pärn et al. (2018) refer to a US-CA hot spot based on a sampling site at Lat. 38.0169 N, close to the experimental site at 38.11ºN, 121.5ºW. Anthony and Silver (2021) did measure N2O fluxes with automated chambers at what appears to be a site adjacent to this study, also identifying this area as a hot spot of N2O fluxes (in this case the site had soil C values about 3x those reported here). However, in this manuscript the relatively high C levels of the site are not brought up as a potential major driver. In addition, it appears that the Ryde soil series does have relatively low pH (not reported in the manuscript). However, the role of soil organic matter and of the low pH in driving high soil N2O production is not considered in the interpretation of the N2O flux results. Rather, the focus is on the role of alfalfa plants. I think it is very difficult to separate the two effects (soil vs. plants) based on the set of measurements conducted, even though the authors collected a relatively long and complete time series of N2O fluxes. The main shortcoming is the lack of a comparison (fallow field vs. alfalfa? annual unfertilized crop vs. alfalfa?) that would allow for teasing out the main drivers. I expect a fallow or annual-crop irrigated field at this site would see even higher N2O fluxes than observed due to the high carbon and nitrogen substrates fueling denitrification and the acidic conditions favoring a higher N2O/N2 ratio. This means that alfalfa could actually be working to decrease emissions under these very favorable conditions for N2O production. Although those measurements were made at different sites (and different soils) in the same area, it would be interesting to compare these datasets.

I also miss an interpretation of the unique conditions of the experimental site vis-à-vis areas where alfalfa is grown in the US or the world. For example, the statement "Our results show that N2O emissions can significantly lower the C sink potential of this globally important crop" (L. 206) implies that experimental results could perhaps be applicable to other areas, and any caveats are not identified. Instead, it is suggested that the high N2O fluxes are due to the intensive sampling (see L. 72 "Annual mean N2O fluxes were 624.4 ± 26.8 mg N2O m-2 yr-1 (Table 1, range: 247.0 ± 5.7 to 901.9 ± 74.5 mg N2O m-2 yr-1) and were significantly greater than other N2O flux estimates in alfalfa using less intensive periodic sampling (16-19)"), and the unique soil conditions are not considered.

2) The lack of clear explanation of what the 'novel sensor suite' is exactly, and how it provided integrated information that improved our understanding of GHG fluxes
Several times reference is made to a 'novel sensor suite'. For example, in the Abstract: "Using a novel suite of continuous soil sensing, eddy covariance, and satellite imagery we found that N2O emissions offset the ecosystem greenhouse gas sink by up to 14% annually" and L. 23 "Data from this novel sensor suite show that N2O emissions significantly lower the carbon-sink potential of alfalfa agriculture." However, the main finding of a 14% emission offset is based on the eddy covariance measurements alone.
Although EC measurements of N2O fluxes are not nearly as widespread as those of CO2 and H2O, or even CH4, I would not classify the EC technique as novel. Similarly high temporal resolution data as presented here can also be derived with automated chambers, as Anthony and Silver (2020) have done for an adjacent site, although these do not have the advantage of fluxes being spatially integrated over large areas as EC does. The continuous soil sensing was used to provide some explanation for the flux drivers, as is often done in N2O studies. The satellite imagery was used to find "Significant coherence between satellite derived vegetation growth and N2O fluxes suggested that plant activity was an important driver of background emissions" (Abstract) and "Background fluxes varied with moisture, temperature, and NIRv, an index of GPP. Lagged relationships between NIRv, CO2, and N2O fluxes suggested that plant inputs were likely an important driver of soil CO2 fluxes and background N2O emissions." (L. 204). I consider the latter conclusion a bit tenuous since plant inputs (presumably substrate for nitrifiers and denitrifiers) were not specifically quantified and compared to soil substrate supply. In addition, this relationship was only found for background N2O emissions while, as described in this manuscript, hot moments are the main overall drivers of N2O emissions.

RESPONSE TO REFEREES

Reviewer #1 (Remarks to the Author):
Overview and general recommendations: The submitted manuscript quantifies the contributions of soil nitrous oxide (N2O) fluxes, and particularly from hot moments, to the CO2e balance of alfalfa agroecosystems. Using a novel dataset of multi-year, continuous, high-resolution chamber flux measurements coupled with belowground sensors and periodic soil sampling, the study highlights novel findings that hot moments of N2O, although appearing in less than 2% of measurements, accounted for almost half of annual N2O emissions and substantially offset the ecosystem carbon sink previously estimated for alfalfa. Consistent with recent literature, most hot moments occurred during soil re-wetting events, when anaerobic microsites may be generated, and were additionally modulated by temperature and by crop activities. Data collection, processing, and analytical methods are described well and are consistent with common practices of soil and ecosystem scientists. My primary concern regarding the quality of the current manuscript lies in the contextualization of results to the broad readership of Nature Communications, and I believe this concern can be alleviated by addressing my comments below. I am not completely convinced of the relevance of the CH4 measurements to the study and suggest a couple of different ways to integrate this dataset better with CO2 and N2O. I am also concerned that the broader impacts of this work for agroecosystems and/or for drylands are lost due to the focus on one particular crop species, albeit one that is grown widely. Finally, although the results are presented well, they don't fully match the introductory material and hypothesis provided; I suggest expanding your hypothesis to include other drivers of N2O and CO2e that you measure and that are more holistic to addressing N2O contributions to the alfalfa C balance. I am confident that addressing these concerns will increase the quality of the manuscript and will accelerate its timeline to publication.

We thank the reviewer for their comments. We appreciate the suggestions to broaden the contextualization and better integrate the CO2 and CH4 data and have made the detailed changes suggested (outlined below).
We have also expanded the first hypothesis as suggested and added a second hypothesis that states, "We also hypothesized that background patterns in N2O emissions would follow patterns in plant activity indicative of potential changes in C or substrate availability." Line numbers cited below represent numbers with track changes on.

Specific comments:
Abstract: Since considerable text and one figure are dedicated to describing patterns and mechanisms of diel and seasonal fluxes of N2O, I anticipated those to be reported somewhere in the abstract. I suggest a few words or a short sentence to describe the main findings.

Introduction: Given the title and focus of the paper, the connections between your trace gases of interest need more explicit description. For example, CO2 also produces respiration pulses during irrigation, but at different timing and via different metabolic mechanisms, yet ecosystem CO2 fluxes are not described in the introduction. I suggest more fully describing CO2 fluxes to complement the description of CH4 and round out the background of all three gases of interest, or instead describing CH4 and CO2 together as having hot moments of carbon or CO2e exchange that may be enhanced by N2O flux and offset the non-hot-moment C sink. For a further biogeochemical perspective, there is considerable literature showing interactions between methanotrophy and N2O production that could support the inclusion of CH4 and N2O data together, and why a soil CH4 sink could lead to an N2O source. This may be useful to return to in your discussion as well.
Additional text regarding CO2 and CH4 fluxes was added in the introduction (L61-L77) and discussion (L213-216). We also now mention that eddy covariance studies suggest that alfalfa agroecosystems are net C sinks even with potential pulses of CO2 and CH4 emissions, but data on continuous N2O emissions are lacking.

Results/Discussion: Since you described a hypothesis in your introduction, I would like to see a conclusion in the last paragraph (or elsewhere if more appropriate) about whether that hypothesis was supported. You address pieces of it throughout but I don't see them integrated explicitly. I also suggest moving your supplemental GWP results to the main text as I find those particularly compelling.
We have added a conclusions section and made the suggested changes, which we agree help highlight the findings and significance (L248-254). We also integrated the supplemental GWP results into the main text as suggested (L110-117).

Ln 61: A word is missing in this sentence: "…patterns and associated [mechanisms maybe?] of CO2…"
We have added the word "controls".

Ln 66-68: I was not expecting this hypothesis given the diversity of measurements reported here; particularly, I would have expected NH4+ and O2 to be included as hypothesized drivers of N2O emission since they were described earlier in the introduction. Is there a reason these were not included but NO3- was? Temperature/climatic drivers of emissions are not really introduced in the introduction and do not appear in the hypotheses but are contained in multiple figures and results; consider including them here and in the intro at large.
We expanded the hypotheses as suggested and added some text to the introduction to better justify the hypotheses and provide background for the measurements made to test them.

Ln 103: A word is missing in this sentence: "…from greater [?]
and C and substrate availability…"
This has been changed to "greater C and N substrate availability" (L140).

Ln 184: I suggest including this as a new subsection titled "Aboveground C dynamics" or "Alfalfa C sequestration" as this paragraph includes alfalfa C sink and eddy covariance data rather than just soil CO2 emissions. Or, make the heading of Ln 173 more inclusive as "Agroecosystem CO2 balance" or similar to capture both above- and belowground fluxes.
We have changed the heading to "Agroecosystem CO2 balance" (L218).

Ln 200: I suggest including this as a new subsection titled "synthesis" or "conclusions" as it integrates all data rather than just CO2 fluxes.
We added a conclusions section as suggested.

Ln 216: How much land area is used for alfalfa? This information, with included citation, would be useful for upscaling.
We clarify that alfalfa and corn are the dominant agricultural land uses in the region, with alfalfa representing 20% of agricultural land area in the Sacramento-San Joaquin Delta (The Delta Protection Commission, 2020) and the largest crop by area in California (Putnam et al. 2007). Nearly 100% of alfalfa in California is irrigated (Putnam et al. 2007) (L270-L274).

Ln 341: Suggest to replace "gappy" with "discontinuous" or "noncontinuous."
We have replaced "gappy time series" with "time series with missing observations" (L400).

Repository information has been added as Source Data and as: https://datadryad.org/stash/share/igfrCACBTOMTNEi8KsL3auVLnSqKiN51WFRUlFf04Ds.

Figures 1, 2, and 4 contain data that is too inter-connected and sometimes redundant to be stand-alone figures. It is not immediately clear to me why they are separate, so I suggest combining into one seven-panel figure. Alternatively, I would suggest pulling soil NH4+ and NO3- data from Figures 4 and 2, respectively, into their own figure, since these were not sensor-based measurements and took place over a shorter time period, and combining all sensor-based measurements.
We have addressed the suggestions above and combined and rearranged the figures: Figure 1 contains N2O, CH4, and CO2 fluxes with soil moisture, temperature, O2, and NIRv. Figure 3 now contains NO3, NH4, and soil pH observations.

Table 2: Do the + signs in hot moment % of total flux add additional information? To me, a percentage alone seems sufficient.
We have removed the + signs from Table 2.

Reviewer #2 (Remarks to the Author):
Review of "Hot moments of nitrous oxide emissions lower the carbon-sink potential of alfalfa agriculture" by Anthony et al.
The potential offsetting of soil carbon sequestration through N2O emissions from an irrigated alfalfa crop is the focus of this study. Eddy covariance measurements were conducted over 4 years, capturing several emission events (hot moments) associated with irrigation. The methodology is sound, results are clearly presented for the most part, and the manuscript is well written. However, I have two main concerns: 1) the interpretation of the N2O flux results and its generalization to 'alfalfa agriculture'; 2) a lack of clear explanation of what the 'novel sensor suite' is exactly, and how it provided integrated information that improved our understanding of GHG fluxes.
We thank the reviewer for their comments and address them specifically below.
1) The interpretation of the N2O flux results and its generalization to 'alfalfa agriculture'
The hypothesis is "that elevated NO3 concentrations and irrigation during the growing season would stimulate hot moments of N2O emission, offsetting a significant portion of the net CO2-equivalent (CO2e) sink." Alfalfa is a perennial legume crop and hence nitrate levels would be expected to be minimized during the growing season in comparison to annual crops receiving N inputs in the form of manure or synthetic fertilizer such as corn or wheat. In fact, increased use of perennials is recognized as an N2O-mitigating practice.
We agree that the assumption is often that an N-fixing perennial crop does not experience hot moments of NO3 availability or N2O emissions. However, the high-resolution measurements we were able to achieve showed that NO3 concentrations were not uniformly low, and this, together with periods of high soil moisture, led to hot moments of N2O emissions. We have added text in the introduction to better summarize these issues (L88-95) and expanded the hypotheses and explanations as suggested by reviewer one.

I was at first surprised by the hypothesis statement, but results of high N2O emissions were corroborated with measurements, surely indicating elevated NO3 levels likely from soil organic matter mineralization. A closer look at the site's soil characteristics such as C levels indicates this may be an organic soil (or at least one with very high levels of C), which is well known to have very high N2O emissions. In fact, Pärn et al. (2018) identify N-rich organic soils under well-drained conditions as global N2O hot spots. Anthony and Silver (2021) did measure N2O fluxes with automated chambers at what appears to be an adjacent site to this study, also identifying this area as a hot spot of N2O fluxes (in this case the site had soil C values about 3x as reported here). However, in this manuscript the relatively high C levels of the site are not brought up as a potential major driver.
We now mention soil organic matter can be a potential driver of greenhouse gas emissions but note that this site has lower soil C and N than others in the region (L145-146, L220). The three sites in the Sacramento-San Joaquin Delta described above have widely ranging surface soil C concentrations.

In addition, it appears that the Ryde soil series does have relatively low pH (not reported in manuscript). However, the role of soil organic matter and of the low pH in driving high soil N2O production is not considered in the interpretation of N2O flux results. Rather, the focus is on the role of alfalfa plants.
We agree the low pH can be a contributor to the magnitude of hot moments. Soil pH was measured weekly alongside soil mineral N values (added as Figure 2c), and relevant text has been added clarifying the effects of an acidic soil pH (L53-55, L143-146).

I think it is very difficult to separate the two effects (soil vs. plants) based on the set of measurements conducted, even though the authors collected a relatively long and complete time series of N2O fluxes. The main shortcoming is the lack of a comparison (fallow field vs. alfalfa? annual unfertilized crop vs. alfalfa?) that would allow for teasing out the main drivers. I expect a fallow or annual-crop irrigated field at this site would see even higher N2O fluxes than observed due to the high carbon and nitrogen substrates fueling denitrification and the acidic conditions favoring a higher N2O/N2 ratio.
This means that alfalfa could actually be working to decrease emissions under these very favorable conditions for N2O production. Although measurements were made at different sites (and different soils) in the same area, it would be interesting to compare these datasets.

We understand the reviewer's comments, but respectfully disagree that a comparison to other crops or a fallowed field is needed to determine the main drivers of N2O in this context. Here we used wavelet coherence to parse relationships among the variables measured. It is a useful tool for studying fluctuations across data-rich time series and can be used to determine significance and elucidate scale-emergent interactions between variables (Chamberlain et al. 2018; Rodríguez-Murillo and Filella 2020; Sturtevant et al. 2016). A more detailed description of wavelet coherence has been added to the text (L389-L392). Comparisons with a fallowed field or irrigated annual crop, while interesting, would be asking a different question than the one addressed here. Fallow and annual cropping are different management activities that change multiple environmental and biogeochemical conditions when compared to alfalfa agriculture. Annual cropping is conducted on different soils. Thus, we don't feel that a comparison with corn cropping would help us determine the impacts of alfalfa on N2O emissions. In this instance we explored the potential role of plant productivity in greenhouse gas fluxes when compared with other drivers occurring under the same environmental conditions. We now clarify and qualify this result more carefully to address the reviewer's concerns. We included text, citing Anthony and Silver (2020), stating that these are degraded peatland soils that have lost a significant proportion of their initial organic matter and contain significantly higher mineral content than intact or nutrient-rich peatland soils. We have added text throughout to clarify these issues.

I also miss an interpretation of the unique conditions of the experimental site vis-à-vis areas where alfalfa is grown in the US or the world. For example, the statement "Our results show that N2O emissions can significantly lower the C sink potential of this globally important crop" (L. 206) implies that experimental results could perhaps be applicable to other areas and any caveats are not identified. Instead, it is suggested that the high N2O fluxes are due to the intensive sampling (see L. 72 "Annual mean N2O fluxes were 624.4 ± 26.8 mg N2O m-2 yr-1 (Table 1, range: 247.0 ± 5.7 to 901.9 ± 74.5 mg N2O m-2 yr-1) and were significantly greater than other N2O flux estimates in alfalfa using less intensive periodic sampling (16-19)") and the unique soil conditions are not considered.

We agree that more information regarding our site in relation to alfalfa agriculture is warranted, and it has been included (L31-L33, L50-L51, L267-L274). These are highly degraded peatland soils that have lost a significant proportion of their initial organic matter content and contain significantly higher mineral concentrations than intact or nutrient-rich peatland soils (Anthony and Silver 2020). We clarify that the management of the site is representative of alfalfa agriculture across the region, as nearly 100% of alfalfa in California is irrigated, and approximately 82% is similarly flood-irrigated (Long et al. 2022). We have added text (L272-L273) comparing our site to general irrigation management and soil types, adding caveats regarding soil pH (L143-L145, L170-L172).
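For readers less familiar with coherence methods, the sketch below illustrates the underlying idea using ordinary spectral coherence from SciPy on synthetic series of our own making. This is a simpler, time-invariant stand-in for the wavelet coherence used in the manuscript, which additionally localizes coherence in time and tests significance against noise surrogates.

import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
t = np.arange(730)                                   # two years of daily data
x = np.sin(2 * np.pi * t / 90) + 0.5 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * t / 90 + 1.0) + 0.5 * rng.standard_normal(t.size)

# Magnitude-squared coherence by frequency (fs = 1 sample per day).
f, cxy = coherence(x, y, fs=1.0, nperseg=256)
peak = f[1:][cxy[1:].argmax()]
print(f"strongest coherence at a period of ~{1 / peak:.0f} days")

High coherence at a given scale indicates that two series covary at that scale even when they are uncorrelated at others, which is the property exploited in the driver analysis described above.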
In L121-L125 we also clarify why we believe intensive sampling is important. Capturing these hot moments (which represent <1% of observations but an average of 44% of emissions) yields more accurate estimates of short-term N2O emissions, whereas noncontinuous sampling methods can miss rare high-flux events.

2) The lack of clear explanation of what the 'novel sensor suite' is exactly, and how it provided integrated information that improved our understanding of GHG fluxes.

Several times reference is made to a 'novel sensor suite'. For example, in the Abstract: "Using a novel suite of continuous soil sensing, eddy covariance, and satellite imagery we found that N2O emissions offset the ecosystem greenhouse gas sink by up to 14% annually" and L. 23 "Data from this novel sensor suite show that N2O emissions significantly lower the carbon-sink potential of alfalfa agriculture." However, the main finding of a 14% emission offset is based on the eddy covariance measurements alone. Although EC measurements of N2O fluxes are not nearly as widespread as for CO2 and H2O, or even CH4, I would not classify the EC technique as novel. Similar high temporal resolution data as presented here can also be derived with automated chambers, as has been done for an adjacent site, but these do not have the advantage of fluxes being spatially integrated over large areas like EC has.

We agree clarification of the novel sensor suite is needed and believe this caused some of the misunderstanding described here. We have added text clarifying that this study used "a novel suite of automated flux chambers (to quantify continuous CO2, N2O and CH4 fluxes) combined with continuous soil sensing, eddy covariance (to quantify net ecosystem CO2 exchange), and satellite imagery" in the abstract (L18-19) and introduction (L81-L83). We do not suggest that EC, even with N2O, is novel; rather, the novelty is provided by the combination of long-term automated chamber, continuous soil sensing, satellite imagery, and eddy covariance observations.

The continuous soil sensing was used to provide some explanation for the flux drivers, as is often done in N2O studies. The satellite imagery was used to find "Significant coherence between satellite derived vegetation growth and N2O fluxes suggested that plant activity was an important driver of background emissions" (Abstract) and "Background fluxes varied with moisture, temperature, and NIRv, an index of GPP. Lagged relationships between NIRv, CO2, and N2O fluxes suggested that plant inputs were likely an important driver of soil CO2 fluxes and background N2O emissions." (L. 204). I consider the latter conclusion a bit tenuous since plant inputs (presumably substrate for nitrifiers and denitrifiers) were not specifically quantified and compared to soil substrate supply. In addition, this relationship was only found for background N2O emissions while, as described in this manuscript, hot moments are the main overall drivers of N2O emissions.

Here we used wavelet coherence to compare variables across time series and infer relationships among variables. We have added text in the introduction regarding substrate availability (L65-71) and clarified that our plant activity metric (NIRv) specifically represents photosynthetic activity (L24-25, L82-L83, L277-282). NIRv is an accurate method to determine canopy photosynthesis, and photosynthesis is the primary source of C inputs into terrestrial ecosystems. Root exudates are well-known labile soil C sources that can prime microbial activity (Panchal et al.
2022), with up to 20% of C fixed by photosynthesis released by root exudation (Guyonnet et al. 2018; Haichar et al. 2008). Thus, the relationships with plant photosynthetic activity are an index of plant C inputs and activity (L65-L70, L280-L282). We have clarified that the observed lagged relationships may represent delays between photosynthetic C uptake and root exudation processes (L242-244). We have also softened the language to suggest that these patterns are one possible driver of the background patterns observed. Hot moments of emissions appear to have been driven by O2 and moisture at daily scales, while lagged (weekly to monthly) relationships between NIRv and N2O fluxes suggest that plant inputs were likely an important driver of background N2O emissions (L176-L187).

Reviewer #1 (Remarks to the Author):

The submitted manuscript quantifies the contributions of soil nitrous oxide (N2O) fluxes, and particularly from hot moments, to the CO2e balance of alfalfa agroecosystems. Using a novel dataset of multi-year, continuous, high-resolution chamber flux measurements coupled with belowground sensors and periodic soil sampling, the study highlights novel findings that hot moments of N2O, although appearing in less than 2% of measurements, accounted for almost half of annual N2O emissions and substantially offset the ecosystem carbon sink previously estimated for alfalfa. Consistent with recent literature, most hot moments occurred during soil re-wetting events, when anaerobic microsites may be generated, and were additionally modulated by temperature and by crop activities. Data collection, processing, and analytical methods are described well and are consistent with common practices of soil and ecosystem scientists. After reading responses to reviewers and the revised manuscript, I am pleased with the changes the authors have made and I think revisions have made clear the importance and nuance of these results. I have a few comments regarding minor detail changes, but I think generally this manuscript is ready to move forward to publication. Great work!

Specific comments:

Abstract: I think somewhere in here you need to mention CO2 and CH4 as measured carbon fluxes since those are a big chunk of your results and you spend a lot of text describing them. The least invasive way to include them might be in Ln 18-19: "…offsetting the ecosystem carbon (CO2 and CH4) sink by …". Or something more elegant. I think it's important to specify somehow that you looked at CH4 in addition to CO2, since readers may assume only CO2 was measured here.

Ln 59-60: This citation does not have a matching reference in the References section and uses different annotation than the other citations.

Ln 163-165: I'm getting confused by the wording here; can you write this sentence more simply? Is it that lower GPP tended to match low background N2O?

Reviewer #2 (Remarks to the Author):

Thank you for addressing many of my comments and clarifying certain issues. I still have major concerns as to how the results are interpreted and conveyed to readers. The main take-away message is that 'alfalfa agriculture' emits significant amounts of N2O, enough to offset the C-sink potential. The high overall annual N2O emissions (≈4 kg N/ha/yr, with a range of 1.6 to 5.7) occur over somewhat frequent and short periods of time associated with irrigation and are driven by high N levels released by the high SOM content at the measurement site (hot moments).
I commented on the high SOM in my previous review and although the reply was that the site is on highly degraded peatland soil, the SOM content is still quite high compared to regular mineral soils. This means that the results obtained here cannot be easily generalized to 'alfalfa agriculture' as the authors have done in the Abstract and Conclusions. A much more nuanced interpretation of the measurements is needed. In fact, N2O emissions are notoriously variable with soil type, management and weather events, and generalizing results from one site is not possible. For example, Tenuta et al. (2019), using a micromet approach, show that including perennial crops such as alfalfa in crop rotations leads to much lower N2O emissions than annual crops, in contrast to this study.

In addition, the authors argue the large N2O emissions measured in this study are due to the measurement method used. For example in L. 89-91 "Annual mean N2O fluxes were 624.4 ± 26.8 mg …… and were significantly greater than other N2O flux estimates in alfalfa using less intensive periodic sampling (refs. 30-33).". While I strongly agree that more continuous measurements as provided by micromet methods are sorely needed and provide a much better flux time series, I think a more nuanced interpretation is also needed here. Firstly, the citations given here are not appropriate to support the argument made. Ref 30 reports on emissions that are of a similar order of magnitude (2.3 and 5.7 kg N/ha/yr) with measurements made using static chambers; Ref 32 is a micromet study with frequent observations; Ref 33 reports on canola and wheat N2O emissions following alfalfa termination for a semi-arid region. Secondly, as demonstrated by Ref 30, the high emission peaks could be captured with other methods in an irrigated alfalfa system since they are quite predictable and timed with the irrigation. In fact, there may be an issue with overestimation given that chamber studies will typically target high emission events and then interpolate between these weekly data points to obtain annual emissions.

Responses by the authors are in bold. Corresponding line numbers refer to line numbers with track changes on.

Reviewer #1 (Remarks to the Author):

Overview and general recommendations:

The submitted manuscript quantifies the contributions of soil nitrous oxide (N2O) fluxes, and particularly from hot moments, to the CO2e balance of alfalfa agroecosystems. Using a novel dataset of multi-year, continuous, high-resolution chamber flux measurements coupled with belowground sensors and periodic soil sampling, the study highlights novel findings that hot moments of N2O, although appearing in less than 2% of measurements, accounted for almost half of annual N2O emissions and substantially offset the ecosystem carbon sink previously estimated for alfalfa. Consistent with recent literature, most hot moments occurred during soil re-wetting events, when anaerobic microsites may be generated, and were additionally modulated by temperature and by crop activities. Data collection, processing, and analytical methods are described well and are consistent with common practices of soil and ecosystem scientists. After reading responses to reviewers and the revised manuscript, I am pleased with the changes the authors have made and I think revisions have made clear the importance and nuance of these results. I have a few comments regarding minor detail changes, but I think generally this manuscript is ready to move forward to publication. Great work!
We thank the reviewer for their comments.

Specific comments:

Abstract: I think somewhere in here you need to mention CO2 and CH4 as measured carbon fluxes since those are a big chunk of your results and you spend a lot of text describing them. The least invasive way to include them might be in Ln 18-19: "…offsetting the ecosystem carbon (CO2 and CH4) sink by …". Or something more elegant. I think it's important to specify somehow that you looked at CH4 in addition to CO2, since readers may assume only CO2 was measured here.

We removed this sentence as it is repetitive of the third sentence of the paragraph, which is cited.

Ln 59-60: This citation does not have a matching reference in the References section and uses different annotation than the other citations.

This has been corrected.

Ln 163-165: I'm getting confused by the wording here; can you write this sentence more simply? Is it that lower GPP tended to match low background N2O?

The text (L171-176) has been changed to: "Increases in background (low-level) N2O emissions were positively correlated with periods of high gross primary productivity (GPP), measured with satellite observations of near infrared reflectance of vegetation (NIRv) (ref. 1)."

Reviewer #2 (Remarks to the Author):

Thank you for addressing many of my comments and clarifying certain issues. I still have major concerns as to how the results are interpreted and conveyed to readers. The main take-away message is that 'alfalfa agriculture' emits significant amounts of N2O, enough to offset the C-sink potential. The high overall annual N2O emissions (≈4 kg N/ha/yr, with a range of 1.6 to 5.7) occur over somewhat frequent and short periods of time associated with irrigation and are driven by high N levels released by the high SOM content at the measurement site (hot moments). I commented on the high SOM in my previous review and although the reply was that the site is on highly degraded peatland soil, the SOM content is still quite high compared to regular mineral soils. This means that the results obtained here cannot be easily generalized to 'alfalfa agriculture' as the authors have done in the Abstract and Conclusions. A much more nuanced interpretation of the measurements is needed. In fact, N2O emissions are notoriously variable with soil type, management and weather events, and generalizing results from one site is not possible. For example, Tenuta et al. (2019), using a micromet approach, show that including perennial crops such as alfalfa in crop rotations leads to much lower N2O emissions than annual crops, in contrast to this study.

We thank the reviewer for their thorough review and careful reading of the paper. We now better understand the reviewer's concerns regarding SOM stocks described above and the need to include a more nuanced interpretation of our results. We have included additional text regarding this in L139-142. With regard to generalizing these results to continuous alfalfa agriculture, we note that apart from this study, combined multi-year continuous flux measurements of CO2, CH4, and N2O in continuous flood-irrigated alfalfa are essentially non-existent, even though this is the predominant practice for alfalfa in regions like the Western United States (ref. 2) (text added in L93-96). Understanding the drivers of interannual variability, and accurately quantifying differences in emissions with stand age (ref. 3), are needed to upscale emissions, particularly for N2O, which we show is inherently variable.
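For reference, the reviewer's figure of roughly 4 kg N/ha/yr corresponds directly to the annual mean of 624.4 mg N2O m-2 yr-1 reported in the manuscript; the only assumption in the conversion sketch below is the elemental composition of N2O (28.014 g of N per 44.013 g of N2O).

# Convert mg N2O m-2 yr-1 to kg N2O-N ha-1 yr-1.
N_FRACTION = 28.014 / 44.013        # mass fraction of N in N2O
for flux in (247.0, 624.4, 901.9):  # reported range and mean, mg N2O m-2 yr-1
    kg_n_ha = flux * N_FRACTION * 1e4 / 1e6   # m2 -> ha and mg -> kg
    print(f"{flux:6.1f} mg N2O m-2 yr-1 = {kg_n_ha:.1f} kg N ha-1 yr-1")

This reproduces the 1.6 to 5.7 kg N/ha/yr range quoted by the reviewer.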
To address the reviewer's concerns, we clarify that continuous measurements are needed to assess greenhouse gas emissions and the net C balance of continuous alfalfa ecosystems, as these are likely to differ from other agricultural activities, including those that incorporate alfalfa in short-term rotations (L37-40). The reviewer cites Tenuta et al. (2019) as an example of alfalfa agriculture leading to lower N2O emissions than annual cropping. As mentioned previously, our goal here was not to compare alfalfa with annual cropping but to quantify continuous fluxes from multi-year continuous alfalfa agriculture, an important crop globally. This is fundamentally different from the mixed cropping system described in Tenuta et al. 2019. Regardless, we have added some text to qualify our results stating, "This suggests net N2O emissions from irrigated alfalfa may not always be reduced relative to other agricultural ecosystems receiving inorganic N inputs, particularly on relatively C-rich soils" (L101-103).

In addition, the authors argue the large N2O emissions measured in this study are due to the measurement method used. For example in L. 89-91 "Annual mean N2O fluxes were 624.4 ± 26.8 mg …… and were significantly greater than other N2O flux estimates in alfalfa using less intensive periodic sampling (refs. 30-33).". While I strongly agree that more continuous measurements as provided by micromet methods are sorely needed and provide a much better flux time series, I think a more nuanced interpretation is also needed here. Firstly, the citations given here are not appropriate to support the argument made. Ref 30 reports on emissions that are of a similar order of magnitude (2.3 and 5.7 kg N/ha/yr) with measurements made using static chambers; Ref 32 is a micromet study with frequent observations; Ref 33 reports on canola and wheat N2O emissions following alfalfa termination for a semi-arid region.

We have changed and added nuance to the corresponding text (L92-96) and have added the following to the conclusions (L255-258): "This combination of automated chambers, eddy covariance, soil sensing, and satellite imagery is the most comprehensive dataset of multiyear annual budgets from continuous alfalfa agriculture to date, allowing us to determine the importance of both hot and non-hot moment emissions on total N2O budgets and explore scale-emergent drivers of N2O emissions." Long-term (> 2 years) continuous flux measurements, specifically of continuous alfalfa, are essentially non-existent. However, the above references, with the addition of Tenuta et al. 2019 (ref. 4) (although a grass/alfalfa mixture), and two additional references (ref. 5), were some of the only potentially comparable annual estimates. We have removed the Malhi 2010 (ref. 6) reference and have clarified that some of these estimates are not from continuous alfalfa ecosystems (L94-96).

Secondly, as demonstrated by Ref 30, the high emission peaks could be captured with other methods in an irrigated alfalfa system since they are quite predictable and timed with the irrigation. In fact, there may be an issue with overestimation given that chamber studies will typically target high emission events and then interpolate between these weekly data points to obtain annual emissions.

We agree that capturing high emission events is important for calculating total budgets, and that our data show that irrigated fields may have more predictable hot moment fluxes than other agroecosystems.
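To make this hot-moment accounting concrete, the sketch below shows how the contribution of rare high-flux observations to a cumulative budget can be computed. The series is synthetic, and the mean-plus-three-standard-deviations cutoff is purely illustrative; it is not necessarily the hot-moment definition used in the manuscript.

import numpy as np

rng = np.random.default_rng(1)
flux = rng.lognormal(0.0, 0.5, size=70_000)              # synthetic background fluxes
spikes = rng.choice(flux.size, size=350, replace=False)
flux[spikes] *= 120                                      # rare irrigation-driven events

threshold = flux.mean() + 3 * flux.std()                 # illustrative cutoff
hot = flux > threshold
print(f"hot moments: {hot.mean():.2%} of observations, "
      f"{flux[hot].sum() / flux.sum():.0%} of the cumulative budget")

With these synthetic numbers, roughly half a percent of observations carry a disproportionate share of the budget, which is why discontinuous sampling that misses even a few events can bias annual totals.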
In L115-119 we state that continuous measurements ensure all hot moments are captured, as well as non-hot moment emissions that accounted for ~50% of the flux. We do not think that the chamber-based approach here was likely to overestimate fluxes because it was continuous. Micromet approaches cover a wider land area, but the comparatively high detection limit of most tower-based micromet N2O measurement approaches (for example, see Tenuta et al. 2016 (ref. 7), 2019 (ref. 4)) could potentially lead to underestimated fluxes.

The final point of concern (which I missed in my first review) is the lack of consideration of C removal in harvest when determining if the field is a carbon sink. In fact, some information about forage yields and C content should be included.
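As context for this final point, the bookkeeping involved in netting harvested C and N2O against CO2 uptake can be sketched as follows. All input values below are hypothetical placeholders rather than manuscript results, and the 100-year global warming potential of 273 for N2O follows IPCC AR6; the manuscript may use a different factor.

# Net CO2-equivalent budget sketch: NEE + harvest C export + N2O (as CO2e).
# All input values are HYPOTHETICAL placeholders, not manuscript results.
GWP_N2O = 273                        # 100-yr GWP of N2O (IPCC AR6)
C_TO_CO2 = 44.009 / 12.011           # convert C mass to CO2 mass

nee_g_co2_m2 = -800.0                # net ecosystem exchange (negative = uptake)
harvest_g_c_m2 = 150.0               # hypothetical C exported in harvested forage
n2o_g_m2 = 0.6244                    # annual N2O flux (624.4 mg m-2 yr-1)

budget = nee_g_co2_m2 + harvest_g_c_m2 * C_TO_CO2 + n2o_g_m2 * GWP_N2O
print(f"net budget: {budget:+.0f} g CO2e m-2 yr-1 (negative = net sink)")

Under placeholder numbers like these, both the harvest and N2O terms are large relative to the apparent CO2 sink, which is why the reviewer's point about forage C export matters for the sink claim.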
Attitudes toward work and parenthood following family-building transitions in Sweden: Identifying differences by gender and education

OBJECTIVES This paper examines how family-building transitions (union formation and first birth) affect the attitudes of Swedes toward work and parenthood. The literature finds that these life course transitions have a traditionalizing effect on gender roles. Is this also the case in Sweden, one of the most gender-equal countries in the world?

METHODS Our study uses the longitudinal Young Adult Panel Study database. We run first-difference OLS regressions on the relationship between family-building transitions and work and parenthood attitudes, distinguishing men from women, and those with more education from those with less.

Introduction

The separate-spheres construction of men's and women's adult roles in industrialized countries has long defined men as providers (in the public sphere) and women as homemakers (in the private sphere). Recently, however, the separation appears to be breaking down; men are increasingly expecting to be involved fathers, and women are increasingly expecting to provide for their families and themselves (Goldscheider, Bernhardt, and Lappegård 2015). However, this process, sometimes called the gender revolution, is far from complete (Hochschild and Machung 1989) and appears to have stalled (England 2010) or to be only partial, progressing primarily among the more educated portions of the population (Cherlin 2016).

At the individual level, decisions about couples' division of tasks in the public and private spheres are likely to depend on their attitudes. How committed are men to their career progress - their good provider roles? And how committed are women to being mothers - a key element of their domestic roles? Women have certainly increased their investment in their own careers in the past half century (e.g., Goldin 2006), and more recently men have increased their engagement in fatherhood (Hofferth et al. 2013; Evertsson and Boye 2018). These changes in men's and women's attitudes toward work and parenthood are contributing to the breaking down of the separate spheres.

Nevertheless, in most countries, two pressures encourage couples to transition to or intensify attitudes supporting a traditional division of labor. One is the pressure that arises from the family-building transitions of union formation and parenthood (Dribe and Stanfors 2009). These transitions normally lead women to relinquish some of their work commitments as they spend more time with their children and on domestic chores, while men's work commitment grows as they respond to the increased pressure to provide. The second pressure on the progress of the gender revolution appears to be the level of economic inequality, given that those with low levels of education are normally more supportive of a traditional division of labor (Cherlin 2014).
We examine a recent cohort of young men and women in Sweden, a country with a high level of gender as well as socioeconomic equality, to discern the impact of family-building transitions on attitudes toward work and parenthood and how any impact is shaped by gender and education. Our study uses the longitudinal Young Adult Panel Study (YAPS) database, a three-wave survey of Swedish young adults. Running first-difference OLS regressions on the effects of family-building transitions on work and parenthood attitudes separately for men and women, as well as for those with more than a secondary education compared to those with a secondary education or less, reveals that even in Sweden, family transitions influence attitudes toward both parenthood and work, and that these effects do indeed differ by gender and socioeconomic status. However, gender matters only among the less educated, and overall, the education and gender differences are very small compared to those that appear in less egalitarian societies.

Background

The separate-spheres construct of men's and women's work and family roles appears to be breaking down (Goldscheider, Bernhardt, and Lappegård 2015). At the macro level, for 100 years or more, the construct of the separate spheres seemed deeply entrenched. Men were providers in the public sphere, and women were homemakers in the private sphere. This role structure was the ideal in much of the developed world from approximately the mid-19th century, when the Industrial Revolution was well underway, causing men's massive departure from agricultural occupations. The ideal began to erode in the mid-20th century, when many forces began to undermine the need for married women to be in the home full-time (Goldscheider and Sassler 2018; Stanfors and Goldscheider 2017). Other changes provided women with paid opportunities to better care for their families (Oppenheimer 1970). Thereafter, the growth in female labor force participation (Goldin 2006; Björklund 1992) and men's increasing involvement in their homes, most particularly with their children (Coltrane 1996; Hwang and Lamb 1997; Kaufman 2013), have been challenging the separate-spheres construct, initiating a major gender revolution.

It has been argued that the gender revolution is thoroughly structural (Goldscheider, Bernhardt, and Lappegård 2015). According to rational choice theories, people maximize their well-being, given structural constraints and opportunities, and people's attitudes generally reflect these opportunities and constraints and tend to change as conditions change. The movement of women into the labor force has been reinforcing egalitarian gender roles in both the public and the private spheres, as well as egalitarian gender attitudes. However, to understand the breakdown of the separate-spheres construct, it seems necessary to analyze men's and women's attitudes toward work and parenthood separately instead of examining a combined construct such as gender role attitudes. In this paper we analyze attitudes as a kind of proxy for how couples decide about the division of tasks in the private and public spheres.
Given this understanding of the gender revolution, when it is "completed," one would expect no, or very minor, gender differences vis-à-vis attitudes toward work and parenthood. Although progress varies, the gender revolution is by no means complete (Esping-Andersen 2009). It is often characterized as stalled (England 2010) or as proceeding unevenly (Sullivan, Gershuny, and Robinson 2018) - progressing only among the more educated portions of the population (Cherlin 2016; Gähler and Olàh 2020). More educated men tend to have more egalitarian gender role attitudes (Pampel 2011a) and are more involved with their children (Evertsson and Boye 2018; Hofferth et al. 2013).

Attitude research: Gender, work, and family

There is extensive research on attitudes toward work and family (e.g., Bernhardt and Goldscheider 2006; Kalleberg and Mastekaasa 2001). Work attitudes have frequently been analyzed using the concept of work commitment (Crompton and Harris 1998; Evertsson 2013). Family attitudes cover a wide spectrum, such as attitudes toward marriage and divorce, the importance of motherhood and fatherhood, and the costs and benefits of becoming a parent. Of particular relevance for the current paper is research on attitudes toward the importance of children (Nauck 2007; McQuillan et al. 2008).

There is also research linking attitudes about work and family together, particularly in a gender context, including studies of attitudes about the consequences of maternal employment for families and children (Bianchi 2000; Scott 1999) and even about paternal employment (Kaufman and Blair 2020). Considerable research has focused on attitudes regarding the appropriate roles, rights, and responsibilities of women and men, using the concept of gender ideology (e.g., Davis and Greenstein 2009). Both women and men can occupy roles such as partner, parent, and worker, but the relative importance (or value) attached to these respective roles may differ, depending on age, gender, education, and societal context.

A central question in attitude research is whether attitudes are formed in childhood and adolescence through the socialization process and then generally remain stable, shaping subsequent behavior, or whether adult life situations and transitions can also influence attitudes. This is usually conceptualized as the question of selection versus adaptation. If adaptation is important, it seems reasonable to assume that important life course transitions may result in resocialization of work and family attitudes (Moors 2002, 2003). Holland and Keizer (2015), using YAPS data, found that individuals holding positive views with regard to survey items such as "children give life meaning" and "having children is important in life" were the most likely to transition to parenthood in the ten years following the initial survey. Thus there was clear evidence of a selection effect. Whether there were gender differences in this regard was not tested. There was also some indication that being strongly committed to work had a negative effect on the transition to parenthood. Again, gender differences in this regard were not tested.
To study adaptation - i.e., to analyze change over time - it is necessary to have access to longitudinal data on attitudes, which are relatively rare. Thornton, Alwin, and Camburn (1983) analyzed a panel study of women and their children in the United States and thus were able to study sex role attitude change between 1960 and 1980, very early in the gender revolution. Female labor force participation was found to both influence and be influenced by attitudes toward the appropriate roles of men and women. That is, it had both a selection effect and an adaptation effect for women's career orientation. On the other hand, there was a selection effect but no adaptation effect in the relationship between parenthood and family attitudes.

Two studies of the effects of life course transitions on changing gender role attitudes (more recent than Thornton, Alwin, and Camburn [1983]) are Corrigall and Konrad (2007) and Katz-Wise, Priess, and Hyde (2010), both with data from the United States. Each study includes both women and men, and while Corrigall and Konrad examined the impact of marriage and children, Katz-Wise, Priess, and Hyde (2010) focused specifically on the birth of first and second children. The former study concluded that individuals act on their preferences but also accommodate themselves to situational constraints. That is, there is evidence of adaptation. Both studies found gender differences in the adaptation process: Katz-Wise, Priess, and Hyde found that while parents generally became more traditional in their gender role attitudes following the birth of a child, women changed more than men. Corrigall and Konrad, on the other hand, found that having children had a stronger negative effect on gender egalitarianism for men than for women. Hence life course position, gender, and parenthood have effects on attitudes toward the gender roles of men and women. The stability of gender attitudes related to the timing of family formation has been studied by Florian (2022), who found that respondents who reported frequent changes in attitudes entered marriage later than other respondents.

As mentioned above, a central aspect of attitude research is whether childhood socialization is more important than current life situations or life course transitions in shaping adult attitudes toward work and family roles (Hitlin and Piliavin 2004; Lesthaeghe and Moors 2002; Schwarz and Bohner 2001). Existing research seems to suggest that this is an empirical question, and the limited research on this issue has yielded different results, reinforcing the finding that the relationships might depend on context and type of attitude (Thornton, Alwin, and Camburn 1983). For example, using YAPS data, Evertsson (2013) found that Swedish women's work commitment decreased following the transition to motherhood, but this effect seemed to be transitory. Kaufman, Bernhardt, and Goldscheider (2016) found mostly enduring attitudes toward gender equality despite life course transitions among young adults in Sweden. Andersson's study of Swedish attitudes toward divorce (2015) suggested a relatively small influence of family life course events.
The challenge of gender and socioeconomic differences

The effects of individual life events and transitions often differ by gender. Therefore it is plausible to hypothesize that, to the extent that changes in attitudes are observed, the patterns may differ between men and women. To the best of our knowledge, no previous study of life transitions and changing attitudes toward work, as well as toward parenthood, has involved both men and women. In addition to gender differences, macro studies suggest that there are also likely to be socioeconomic differences in the effects of family-building transitions on work and parental attitudes. At one extreme are the results of studies of family patterns in the United States, which led McLanahan (2004) to describe the "diverging destinies" of children depending on their socioeconomic background. Cherlin (2014, 2018) also emphasizes the dramatic differences by education on patterns of marriage, cohabitation, and union dissolution, as well as on bearing and raising children out of wedlock. Sweden, in contrast to the United States, has not exhibited large status differences in recent years (Goldscheider, Bernhardt, and Lappegård 2015), which is consistent with findings by Pampel (2011a, b). He has demonstrated, both for European countries and the United States, that the growth in egalitarian gender role attitudes begins among the more educated, leading to a strong socioeconomic divide, but over time those with less education join this growth, reducing differences by socioeconomic status.

The Swedish context of our study

The specific context of our current study is Sweden early in the 21st century. Gender role-related behavior is certainly affected by country-level (or even global-level) factors that shape opportunities to choose work and family behaviors. Sweden, like only a few other countries, provides parents with inexpensive, high-quality child care and well-paid, job-protected parental leave, allowing parents to maintain their careers while engaging closely with their children.

Thus Sweden is a country in which combining employment and child raising is a social norm for both men and women, both expected and facilitated by state policies (Olàh and Bernhardt 2008). The explicit aim of these policies is to make it possible for both mothers and fathers to combine work and family. In addition, there has been an extensive expansion of the preschool system, guaranteeing a place to all children above the age of 1 year, at a highly subsidized cost. Hence, in this paper, we consider two particular types of attitudes, namely those relating to (1) work and (2) parenthood, and we examine whether and how they change across the life course in response to family transitions in a country with such a strong egalitarian context. Our study benefits from the existence of a longitudinal database (YAPS), the central aim of which is to enable studies of the mutual relationship between attitudes and demographic behavior in the early adult life phases in Sweden.
Based on the research reviewed above, we formulate the following five hypotheses:

1) Life course transitions such as union formation and childbearing influence individuals' attitudes regarding work and parenthood, with a negative effect for work commitment and a positive effect for parenthood attitudes.

2) There are gender differences in the effect of life course transitions on attitudes toward work, with women experiencing stronger negative effects than men, but the differences will be small in an egalitarian country such as Sweden.

3) There are gender differences in the effect of life course transitions on attitudes toward parenthood, with women experiencing stronger positive effects than men, but the differences will be small in an egalitarian country such as Sweden.

4) There are educational differences in how life course transitions influence work attitudes, with the more educated experiencing stronger negative effects than those with less education, but the differences will be small in a relatively redistributive society such as Sweden.

5) There are educational differences in how life course transitions influence parenthood attitudes, with the more educated experiencing stronger positive effects than those with less education, but the differences will be small in a relatively redistributive society such as Sweden.

Data

The main source of data for this analysis is the YAPS, a longitudinal survey that follows three cohorts of Swedish young adults (born in 1968, 1972, and 1976) over two successive periods, with a first interview in 1999 and follow-up interviews in 2003 and 2009. The YAPS was conducted as a nationally representative postal survey, obtaining a response rate of 65%. This response rate is in line with response rates attained for contemporaneous Swedish surveys (de Heer 1999) and is comparable with rates observed in other longitudinal surveys carried out in developed countries, such as the United States (Abraham, Maitland, and Bianchi 2006). Nevertheless, YAPS, like other longitudinal surveys, shares the problem of bias due to non-response or attrition. Out of the 2,820 individuals first interviewed in 1999, 1,575 were successfully reinterviewed in 2009, an attrition rate of 44%, raising concern about attrition bias. Wanders (2012) conducted a study of attrition in the YAPS and concluded that the highest risk of dropping out was found for men and those with low education. Switek (2016), using YAPS data, conducted an extensive analysis of attrition and the sociodemographic characteristics of those lost to attrition and concluded that the possible attrition bias in the survey would not have a strong effect on the main results of her study.
We found the same results. Following the method described by Wooldridge (2010), we tested for selective attrition associated with our two main dependent variables. This test consists of adding a lead attrition indicator (which turns to 1 in the period before attrition) to a regression of the dependent variables on the explanatory variables. According to Wooldridge, if the attrition indicator turns out to be not significant, attrition does not present a source of bias. We use data on changes in attitudes toward parenthood and work in the period 1999-2003 to construct the model, with a lead attrition indicator as an explanatory variable (see the Appendix). The attrition indicator was not significant in either case. In addition to the problem of attrition bias, unfortunately there is also the risk of reverse causality due to the relatively long periods between the three surveys (four and six years, respectively), which we discuss in the methods section below.

As we are interested in changing attitudes over time, we restrict our analysis to respondents who are included in all three waves of the survey. This leaves us with 1,385 respondents, of whom 802 (approximately 58%) are women and 583 are men. To complement information obtained from the respondents, Statistics Sweden linked the YAPS data with Swedish Register records, providing access to additional sociodemographic information (such as the highest attained education level). The final dataset includes a comprehensive set of variables related to a person's family life, attitudes, and various demographic characteristics.

Measures

The outcome variables are two attitude measures. One, attitudes toward parenthood, was constructed by combining two questions from the YAPS that capture respondents' attitudes toward parenthood as expressed at each interview. Our measure of respondents' attitudes toward the importance of work is based on a single item. The parenthood index is constructed as the sum of (1) the extent to which a person believes that children give life meaning, and (2) the importance to a person of having children, both indicators of the importance of parenthood. These two components correlate very closely, with a reliability coefficient (Cronbach's alpha) of 0.827, suggesting that the second component contributes considerable stability. The exact wording of the question on attitudes toward work is "How important is work in your life?" Responses range from 1 (one of the least important things) to 5 (one of the most important things).

The parenthood index results in a total score that ranges from 2 (if the respondent has a weak general attitude toward parenthood on both questions) to 10 (if the respondent has a strong general attitude toward parenthood on both questions). The answers to the work importance measure ranged from 1 to 5. Given our interest in the effect of life course transitions on attitudes, we also construct two variables that capture the common life course transitions undergone by young adults: union formation and becoming a parent. Union formation and dissolution, which we include as a control, are constructed by following respondents' changes in relationship status between surveys. Respondents are identified as having undergone union formation if they have become a part of an intimate coresidential relationship since the previous survey.
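A reliability coefficient of the kind reported for the parenthood index can be computed in a few lines. The sketch below, written by us with simulated responses rather than YAPS data, implements the standard Cronbach's alpha formula for a two-item index on a 1-5 scale.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    # items: respondents x items matrix of scores.
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

rng = np.random.default_rng(3)
latent = rng.integers(1, 6, size=500)                     # latent 1-5 orientation
item1 = np.clip(latent + rng.integers(-1, 2, 500), 1, 5)  # "children give life meaning"
item2 = np.clip(latent + rng.integers(-1, 2, 500), 1, 5)  # "having children is important"
print(f"alpha = {cronbach_alpha(np.column_stack([item1, item2])):.3f}")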
Methods

To analyze changes in attitudes toward career and parenthood over different ages, we construct five age groups: ages 22, 26, 30/32, 34/36, and 40, respectively, at the time of a given interview. The 22-year-old age group includes respondents born in 1976 (interviewed in 1999); the 26-year-old age group includes respondents born in 1976 and 1972 (interviewed in 2003 and 1999, respectively); the 30/32-year-old age group includes respondents born in 1976, 1972, and 1968 (interviewed in 2009, 2003, and 1999, respectively); the 34/36-year-old age group includes respondents born in 1972 and 1968 (interviewed in 2009 and 2003, respectively); and the 40-year-old age group includes respondents born in 1968 (interviewed in 2009). Each respondent therefore contributes two observations (1999-2003 and 2003-2009), so there are twice as many observations as there are respondents. We then analyze changes in attitudes toward work and parenthood between these ages (22 to 26, 26 to 30/32, 30/32 to 34/36, and 34/36 to 40).

We exploit the panel structure of our data by using first-difference OLS regressions to estimate the partial effect of a life transition on changes in attitudes toward career and parenthood. The use of a first-difference model allows us to control for all observable and unobservable individual- and cohort-level fixed effects that may affect a person's attitude. More explicitly, it allows us to eliminate all time-invariant heterogeneity among the survey respondents, which greatly reduces the risk of an omitted variable bias in our analysis. Our first-difference model is a close relative of the fixed effects model commonly used in the literature to control for individual time-invariant characteristics. Unlike the fixed effects model, however, the first-difference model does not assume strict exogeneity, since instead of demeaning the data (subtracting the mean, as in the case of fixed effects), the first-difference model eliminates time-invariant heterogeneity by means of subtraction. (For a detailed discussion of the exogeneity assumptions in these models, see Leszczensky and Wolbring 2022.)
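The claim that differencing removes the individual fixed effect can be verified numerically. The following toy example, ours rather than the paper's, builds a two-period panel with a known fixed effect and shows that a regression on first differences recovers the true coefficient even though the fixed effect is never observed.

import numpy as np

rng = np.random.default_rng(4)
n = 1_000
delta = rng.normal(0, 2, n)                # unobserved individual fixed effect
x0, x1 = rng.normal(0, 1, n), rng.normal(0, 1, n)
beta = 0.5                                 # true effect of the time-varying covariate
y0 = 1.0 + beta * x0 + delta + rng.normal(0, 1, n)
y1 = 2.0 + beta * x1 + delta + rng.normal(0, 1, n)

# First difference: delta drops out by subtraction; the period effect (2.0 - 1.0)
# becomes the intercept of the differenced regression.
dy, dx = y1 - y0, x1 - x0
slope, intercept = np.polyfit(dx, dy, 1)
print(f"true beta = {beta}, first-difference estimate = {slope:.3f}")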
Formally, we represent a person's attitude toward parenthood or career at age a as a function of their age, sociodemographic characteristics (such as marital status or whether the person is currently parenting a child), and any fixed effects (or traits, such as personality or characteristics related to the person's birth cohort) that are constant and affect a person's disposition. In terms of equations, we can represent a person's attitude index at age a as:

AttIndexia = αa + µXia + δi + εia (1)

where αa is the person's age, Xia is a vector of sociodemographic characteristics that are allowed to change over the life course, δi captures all individual-level fixed effects that are constant and affect a person's attitude, and εia is the error term. Since we observe each survey respondent at three points in time, we collect information on their attitudes and other characteristics at ages a0, a1, and a2. We can therefore calculate two first differences in their attitude indexes as a function of changes in time-variant characteristics, such as a change in partnership (union) or parenting status (i.e., life course transitions):

∆AttIndexi,a1-a0 = (AttIndexi,a1 - AttIndexi,a0) = (αa1 - αa0) + µ(Xia1 - Xia0) + (δi - δi) + (εia1 - εia0) (2)

In Equation (2), ∆AttIndexi,a1-a0 is the change (first difference) in a person's attitude between ages a0 and a1; (αa1 - αa0) is the change in a person's age (in other words, their progress over the life course); (Xia1 - Xia0) are the changes in a person's sociodemographic characteristics (including life course transitions), whose effect is captured by µ; (δi - δi) are time-invariant characteristics that drop out of the model as a result of the first-order subtraction; and (εia1 - εia0) is the new error term. The analogous reasoning can be followed for Equation (3), the first difference between ages a1 and a2.

As mentioned above, since δi disappears from the first-difference equation, this model does not require an exogeneity assumption about the relationship of time-invariant heterogeneity and the error term. However, our model does rely on two important assumptions: (1) absence of time-varying heterogeneity and (2) absence of reverse causality. With respect to the first, we argue that, given the relatively short time between our surveys and that major personality changes are unlikely to occur unexpectedly, our focus on a relatively short period of a person's lifetime should mitigate any bias from this assumption (should any such bias exist).

The second assumption, of absence of reverse causality, merits more serious consideration. Absence of reverse causality requires that a change in a person's attitude toward work or parenthood does not result in this person forming a union or becoming a parent. Because long-lasting behavioral changes in attitude toward important life conditions such as work or parenthood are unlikely to occur spontaneously, and because decisions about life-changing events such as forming a long-term union or having a child are unlikely to be affected by transitory changes in attitudes, we consider it more likely that, in the context of our question, causality runs from the life course transition to change in attitude rather than the other way around.

Despite the above reasoning, we must acknowledge the possibility that an external cause that is not controlled for in our study might result in a long-lasting change in attitude that leads to a life course transition, resulting in reverse causality and bias in our analysis. Nevertheless, given our context and data limitations, we consider the first-difference model to be the best specification for our analysis.
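To illustrate how first differences such as (2) and (3) translate into an estimable regression, the sketch below simulates differenced data and fits them with statsmodels. This is our own simplified illustration: it keeps only two transition indicators and a "no transition" indicator, whereas the full specification reported next also includes age-period and union-dissolution terms.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 1_200
became_parent = rng.integers(0, 2, n)          # transition between two waves
formed_union = rng.integers(0, 2, n)
no_trans = ((became_parent == 0) & (formed_union == 0)).astype(int)

# Simulated change in an attitude index between waves:
d_attitude = 1.0 * became_parent + 0.4 * formed_union + rng.normal(0, 1, n)

# No intercept: the "no transition" indicator captures the baseline change,
# so each coefficient is read as the change associated with that state.
X = np.column_stack([became_parent, formed_union, no_trans])
fit = sm.OLS(d_attitude, X).fit(cov_type="HC1")    # robust standard errors
print(fit.params.round(3), fit.bse.round(3))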
Keeping in mind its limitations, the precise specification used in our econometric analyses is of the form:

∆AttIndexi = Σp λp ageperiodpi + µ1 unionformationi + µ2 uniondissolutioni + µ3 recentparenthoodi + µ4 olderchildreni + µ5 postsecondaryi + µ6 notransi + ∆εi (4)

By controlling for the age period during which a person undergoes a transition, this specification has the benefit of not confounding the effect of the transition with the pure effect of aging. Given the inclusion of notrans (a binary variable that captures the change in attitude that takes place irrespective of any of the transitions), the econometric model is run without an intercept. This is methodologically equivalent to an OLS first-difference regression that includes an intercept but omits notrans (or one of the other transition variables) from the regression. The specification with no intercept is preferred because it avoids the use of a reference group in measuring the effects of each transition on personal attitudes and instead captures the "pure" partial effect of the transition.

Model (4) is used to run separate regressions where the dependent variable AttIndexi represents either the work and career or the parenthood indices, respectively. The models are estimated separately by (a) gender, (b) education, and (c) gender and education.

Results

Women hold somewhat more positive attitudes toward parenthood than do men at all ages, as the separate-spheres construct predicts, suggesting that parenthood in Sweden is still more central to women's identity than it is to men's. However, the differences are not great (half a point at most ages), and their patterns are otherwise very similar (Table 1, Columns 1 and 2). Rather than showing diverging trends, as would be expected if women embraced their domestic roles while men ceded some parental involvement to women in order to invest in their good provider responsibilities, both men and women attach greater importance to parenthood as they age from the early 20s to the late 30s, with increases in the index of close to a point and even some convergence between men and women. Men and women are even more similar when it comes to attitudes toward work importance, with almost identical levels at most ages (Table 1, Columns 3 and 4). For both sexes and over age, Swedes are generally more positive toward parenthood than toward work, although differences in enthusiasm are small at the youngest ages but increase with age, leading to substantial divergence. What is very clear, however, is that gender similarity dominates patterns for both scales.

Our paper aims to explain to what extent these trends are related to family-building transitions (union formation and parenthood) over these ages and whether there are differences by gender and/or education in these relationships, or whether they are simply a result of the aging process. Tables 2-5 present the results of regression analyses of changing attitudes related to family-building transitions, focusing on differences by gender and education. Table 2 compares individuals by gender, while Table 3 compares individuals by education. Finally, Tables 4 and 5 compare individuals with the same education levels but different genders.
Parenthood attitudes. Attitudes toward parenthood are positively and significantly related to family-building transitions: union formation and parenthood (Table 2, Columns 1 and 2). The relationship is most marked for those who have recently become parents, with an increase in the parental orientation scale of about a point (1.05 for women, 0.92 for men); the relationship is somewhat weaker for parents for whom two or more years have passed since the birth of the child (0.67 for men, 0.61 for women). The positive relationship with union formation is weaker than the effect of parenthood (less than a half-point increase) but is also significant for both genders. Once they have entered a coresidential relationship, men and women seem to think parenthood more important than they had before, indicating that most individuals see childbearing as an integral part of a partnership. (Table 2 notes: Robust standard errors in parentheses. Plus sign indicates significant differences between men and women at p < 0.1.)

The parenthood attitudes of men and women seem to relate in very similar ways to these family-building transitions. There are no significant gender differences in the relationship between parenthood attitudes and family-building transitions. Clearly, young Swedish women and men are surprisingly similar in their reactions to family building, at least when it comes to attitudes toward parenthood.

Nevertheless, the simple fact of aging (holding family transitions constant) is related to having less favorable attitudes toward parenthood, particularly among men, with declines between each age group - declines that are significant over the four years between ages 22 and 26 and between ages 30/32 and 34/36 for men. This trend is much less marked among women. Having a post-secondary education was not significantly related to either men's or women's parental attitudes.

Work attitudes. Comparing men's and women's changes in attitudes toward the importance of work following family-building transitions (Table 2, Columns 3 and 4) also shows similar patterns for young men and women, but with somewhat less dramatic effects. The work attitudes of both sexes are negatively related to recent parenthood; once they become parents, they view work as less important, although the coefficient is much larger for women and only significant for them (-0.432 versus -0.185 for men). Nevertheless, the relationship seems to be transitory, given that neither men nor women who have only older children have significantly lower levels of work attitudes than they had before they became parents.

Moreover, when it comes to the relationship between union formation and work attitudes, both women and men find work and career significantly less important once they have entered a coresidential relationship, which would seem to testify to how far the gender revolution has advanced in Sweden. Aging contributed to the reduced relationship with work-related attitudes, a process that began earlier for women (between ages 22 and 26) than for men (between ages 34/36 and 40). Union dissolution had a significant, negative relationship with men's work attitudes, but this was not the case for women. Having attained a post-secondary education had no impact on changes in work attitudes.
Differences by education. As we noted earlier, attaining a post-secondary education had no significant additive relationship with attitudes toward either work or parenthood, for men or for women (cf. Table 2). Nevertheless, it is possible that the relationship between family-building transitions and attitudes toward work and parenthood might differ between those with different levels of education, as Cherlin (2014, 2018) found so dramatically for family issues in the United States. We address this question in Table 3, which subdivides the regressions on changes in parenthood and career attitudes by levels of education (but not by sex). There were no significant differences in the relationships between family-building transitions and work importance between our two educational levels, and just one for attitudes toward parenthood. The only important educational difference in the relationship between parental attitudes and family building involves having older children. Those with older children (none younger than 2 years old) maintained their strengthened positive attitudes toward parenthood if they had not attained any post-secondary education, while those who did have a post-secondary education seem to have lost some (but not all) of their enthusiasm for parenthood by the time their children had left babyhood.

Differences in parental attitudes by gender and education. When we reintroduce gender into our analysis of educational differences, we find more interesting differences between men and women (Table 4). Union formation has a far stronger relationship with attitudes toward parenthood for women than for men among those with only a secondary education (0.707 versus 0.052), while the opposite pattern prevails among those with some post-secondary education (0.211 for women versus 0.722 for men), although only the latter gender difference is significant. Having children older than 2 also shows a significant difference by education in women's attitude toward parenthood (1.02 for those with only a secondary education and 0.45 for those with a post-secondary education). The relationship for women with only a secondary education is significantly stronger (results not presented). There were fewer significant differences in parental attitudes by education for men; the only significant interaction was for union formation. Men with a post-secondary education exhibit a stronger orientation toward parenthood after union formation, which was not the case for men with only a secondary education. These results seem to suggest that changes in attitudes toward parenthood with family-building transitions lead to a more traditional orientation among those with lower levels of education in Sweden, with stronger effects on women than on men, as is commonly found elsewhere. What is dramatic is how much these transitions appear to increase more educated Swedish men's attitudes toward parenthood.
Differences in work attitudes by gender and education. Turning to gender and educational differences in the relationships between family-building experiences and work attitudes (Table 5), the patterns are somewhat similar to those for parental attitudes, generally reinforcing the overall picture of less educated young adults holding more gender-traditional attitudes about being good providers. Among those with no more than a secondary education, family-building transitions, including union formation, recent parenthood, and less recent parenthood, are related to a lower level of work-importance orientation for women than for men, although only the gender difference for recent parenthood is significant (-0.652 versus 0.164). There is no evidence of such gender differences in the relationship between family formation and work attitudes among men and women with a post-secondary education; both experience reduced feelings about the importance of work after family-building transitions.

Summary and discussion

In this paper we investigate whether, and if so how, family-building transitions influence attitudes toward parenthood and work in Sweden. Our study of attitudinal responses to family-building events over the early adult life course demonstrates that attitudes toward work and parenthood change with the experience of family-building transitions (union formation and childbearing), supporting Hypothesis 1. In fact, the influence of family-building transitions on changes in attitudes (positive for attitudes toward parenthood and negative for attitudes toward work) aligns well with the observed overall increase in positive attitudes toward parenthood and the decrease in positive attitudes toward work over the life course. This suggests that family-building transitions could be largely responsible for the attitudinal changes regarding work and parenthood that people experience as they age through their early adult years.

Unlike most other research on this subject, we study attitudes toward parenthood and career separately, for both young men and young women, allowing us to observe more clearly how family building affects each of the separate spheres. Research in countries with less support for families has found strongly traditionalizing impacts of family transitions, far more so than in Sweden, with women becoming more domestic and men reinforcing their good-provider roles. In contrast, our analysis of Swedish young adults shows that both sexes now generally follow a "female" pattern: with age and family building, both men and women become more positively inclined toward parenthood and attach less importance to career. Interestingly, while both gender and educational level matter for the effects of family transitions on attitudes toward parenthood and career, the differences are small (supporting Hypotheses 2-5). To reveal gender differences of any magnitude, it was necessary to separate parental attitudes from career attitudes, as a combined gender-role measure would likely have missed them. (Kaufman, Bernhardt, and Goldscheider [2016], using the same dataset, found enduring egalitarianism following family-building transitions.) And to reveal differences by socioeconomic status it was necessary to separate men from women, because education makes little difference for men but does make a difference for women. What is quite remarkable is the finding that both men and women with post-secondary education saw work as less important after family-building transitions.
The relationship between family-building transitions and attitudes toward parenthood and work seems to indicate that young Swedes adapt their attitudes to their new circumstances, suggesting that selection is often reshaped by adaptation. Holland and Keizer (2015), using the same dataset, have shown that there are distinct selection effects of family and work attitudes on the transition to parenthood. However, by using a first-difference regression, we analyze change and thus control for the attitude at the beginning of the observation interval. Nonetheless, we cannot rule out the existence of reverse causation due to the relatively long time intervals between observation points. Leszczensky and Wolbring (2022) have pointed out that while the first-difference regression model provides protection from endogeneity arising from unobserved heterogeneity, it can also yield biased estimates in cases of reverse causality. We therefore need to interpret our results with some caution.

The results are of course limited to Sweden and need to be tested more generally, preferably with panel data with shorter time intervals. The relatively weak importance of education for the effect of transitions on attitudes may be specific to the relatively redistributive society of Sweden. The availability of strongly subsidized child care and job protections for those taking family leave makes the trade-off between commitments to work versus family much easier to manage than in countries with neither policy in place. More research on countries with less progressive policy regimes could shed further light on these issues. Larger samples would also be useful, given the challenges to significance in this analysis.

To summarize, the overall pattern is one of similarity between men and women, seemingly testifying to the fact that the gender revolution is fairly well advanced in Sweden (Goldscheider, Bernhardt, and Lappegård 2015), even if traditional gender structures remain to some extent, particularly when it comes to attitudes toward work and career among those with less education. Evertsson (2013) has already shown that women's predispositions toward work and career are weakened after exposure to the experience of parenthood, although the effect seems to be transitory. This study demonstrates that this is also the case for men, although the effect is less pronounced than for women. Studying gender differences in attitudes toward career and parenthood following family-building transitions appears to be a good way to monitor the progress of the gender revolution in different societies. Our results also reinforce the value of studying attitude differences in the context of the changing social structures that underlie them. Men's and women's changing roles, and the changing structures of inequality within a given society, clearly shape patterns of attitudes by gender and socioeconomic status.

Table 3: OLS regressions: changes in attitudes toward parenthood and career explained by life transitions, by final educational level (secondary/post-secondary). Notes: Robust standard errors in parentheses. Plus signs indicate significant differences between educational levels at p < 0.1.

Table 5: OLS regressions: changes in attitudes toward the importance of work explained by life transitions, by gender and final educational level (secondary/post-secondary). Notes: Robust standard errors in parentheses. Plus signs indicate significant differences between educational levels at p < 0.1.
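For readers unfamiliar with the estimator, a minimal sketch of the first-difference regression described above may be useful. It uses Python with pandas and statsmodels; the variable names and values are hypothetical placeholders rather than the survey's actual fields, and the specification is simplified to two waves and two transition indicators.

import pandas as pd
import statsmodels.formula.api as smf

# Two-wave panel: one row per respondent per wave (toy data).
ids = [i for i in range(1, 9) for _ in (1, 2)]
panel = pd.DataFrame({
    "id": ids,
    "wave": [1, 2] * 8,
    "attitude": [3.2, 4.1, 2.8, 2.9, 3.5, 4.4, 3.0, 3.1,
                 2.5, 3.6, 3.9, 4.0, 3.3, 4.2, 2.9, 3.4],  # parenthood-attitude scale
    "union": [0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1],      # coresidential union
    "new_child": [0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0],  # birth between waves
})

# First-difference: within-person change between waves, which sweeps out
# time-constant individual heterogeneity (stable selection on attitudes).
diff = (panel.sort_values(["id", "wave"])
             .groupby("id")[["attitude", "union", "new_child"]]
             .diff()
             .dropna())

# Regress attitude change on changes in family-building status, with
# heteroskedasticity-robust standard errors, mirroring the "robust
# standard errors" noted under the tables.
model = smf.ols("attitude ~ union + new_child", data=diff).fit(cov_type="HC1")
print(model.params)

Because only within-person changes enter the regression, any stable, person-specific attitude level drops out of the estimation; this is what "controlling for the attitude at the beginning of the observation interval" amounts to.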
2023-11-10T16:10:32.608Z
2023-11-09T00:00:00.000
{ "year": 2023, "sha1": "1a644600b32b326dc8168505f3fdebb17be8e392", "oa_license": "CCBYNC", "oa_url": "https://www.demographic-research.org/volumes/vol49/30/49-30.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "e25104f2ac949099da4fec7d3b9b7b3fea1aabf9", "s2fieldsofstudy": [ "Education", "Sociology" ], "extfieldsofstudy": [] }
238786844
pes2o/s2orc
v3-fos-license
Endoscopic Management of Lumbar Disc Prolapse Background: Endoscopic management of lumbar disc herniation as a minimally invasive procedure has become more popular around the world. Despite the accepted surgical outcomes of the endoscopic approach to managing lumbar disc herniation [LDH], the procedure remains relatively challenging and requires a long learning curve, so operative failures and complications may occur. The Aim of the Work: To assess the use of the endoscope in the management of lumbar disc prolapse through the interlaminar approach using the Easy Go and Destandau systems. Patients and Methods: This study included twenty patients with lumbar disc herniation who were operated on using the Easy Go and Destandau endoscopic systems after the failure of conservative treatment. They were included between March 2016 and April 2020 and were followed up for at least three months postoperatively. All were selected from the Neurosurgery Department, Al-Azhar University Hospitals, Egypt. Results: Low back pain was the main complaint, reported by all patients. The radicular side was mainly the left side [70.0%], L4/L5 was the most commonly affected level [65.0%], and the disc protrusion was mainly paracentral [80.0%]. There was a significant pain reduction after surgery compared to before surgery. The outcome was excellent for 55.0%, good for 25%, fair for 15%, and poor for 5%. Complications were in the form of unintended durotomy in 10.0%, nerve injury in 10.0%, and infection in 5.0%. Conclusion: Endoscopic lumbar discectomy through the interlaminar approach using the Destandau and Easy Go systems has become a valuable procedure for managing lumbar disc prolapse at any level, especially L5-S1, as a minimally invasive technique with acceptable complications that can be managed more easily than those of classic open techniques.

INTRODUCTION

Currently, the treatment modalities for lumbar disc herniation [LDH] comprise conventional discectomy [CD] and percutaneous endoscopic lumbar discectomy [PELD]. Because of its high success rate of approximately 90% and good results, CD is considered the standard surgical method for the management of LDH unresponsive to conservative therapy. However, CD is associated with complications, including epidural scarring, destabilization of spinal canal structures, and tissue traumatization [1].

The technical advancement in endoscopes and instruments has led to the development of multiple approaches, including the transforaminal, the extraforaminal, and the interlaminar approach. The interlaminar approach is used in lumbar spinal stenosis and disc herniation mainly located inside the spinal canal, which is technically difficult to manage through the transforaminal technique, especially at L5-S1, due to the large transverse processes, facets, the narrow disc space, and the iliac crest [2][3].

Ruetten et al. performed the first full-endoscopic discectomy by the transforaminal [4] and interlaminar [5] approaches. Since then, full-endoscopic discectomy has become the most common minimally invasive approach for the management of LDH. Owing to its high rate of success, cost-effectiveness, and minimally invasive nature, fully endoscopic interlaminar discectomy [FILD] has become more familiar to both surgeons and their patients for the management of LDH. This technique for treating LDH, especially at L5-S1, has obtained broad validation and has also produced satisfactory results in lumbar spinal stenosis [6].
Spine surgeons are accustomed to interlaminar [IL]-PELD, as the anatomic orientation is similar to open surgery, although there is a learning curve. The systems for the endoscopic interlaminar approach are either a conic "freehand" working channel [the Endospine by J. Destandau] or a tubular retractor, introduced by Foley and Smith. Despite the remarkable development of endoscopic procedures and instrumentation, leading to results comparable to open surgery, surgeons still face some challenges in PELD [7][8][9].

AIM OF THE WORK

The current study aimed to assess the use of the endoscope in the management of lumbar disc herniation through the interlaminar approach using the Easy Go and Destandau systems.

PATIENTS AND METHODS

This study included twenty patients with lumbar disc herniation operated on using the Easy Go and Destandau endoscopic systems after the failure of conservative treatment, between March 2016 and April 2020, at the Neurosurgery Department, Al-Azhar University Hospitals. All patients had lumbar disc herniation with the following criteria: unilateral sciatica, no response to nonsurgical management for at least 1.5 months, and a single level of lumbar disc herniation. The following patients were excluded from this study: cases with bilateral sciatica, multiple lumbar disc herniations, ossified discs, any degree of spinal instability, recurrent lumbar disc herniation, or lumbar canal stenosis.

All patients in this study were subjected to the following: clinical assessment [history and examination], radiological assessment by MRI of the lumbosacral spine and plain X-ray of the lumbosacral spine [A-P and lateral views], and surgery with the Easy Go and Destandau endoscopic spine systems. Duration of postoperative stay, postoperative clinical outcome, and sequelae were recorded. Patients were followed up for at least three months postoperatively, and clinical outcomes were assessed using the Visual Analogue Scale [VAS] [for mean pre- and postoperative pain score measurement]. Patient satisfaction was measured by the Modified Macnab Criteria at three months postoperatively.

Ethical considerations: The study protocol was revised and approved by the local research and ethics committee of Al-Azhar Faculty of Medicine. In addition, each patient signed an informed consent after a full explanation of the study protocol. The study was completed in line with the research ethics code of the Helsinki Declaration. The data are available on request.

Recorded data were coded and fed to the Statistical Package for the Social Sciences software, version 20.0 [SPSS Inc., Chicago, Illinois, USA], for analysis. Frequencies and percentages were used to report qualitative data, while means ± standard deviations [SD] were used to represent quantitative variables. A one-way analysis of variance [ANOVA] was used to compare multiple means. A paired-sample t-test was used to compare different points of time for the same variable. Chi-square [χ2] was used to test associations between categorical parameters. The p-value was considered significant if < 0.05.

RESULTS

This study included 20 patients with ages ranging from 25

DISCUSSION

Open surgery is still the reference technique for treating lumbar disc herniation. However, the disadvantages of this surgery are the massive retraction and dissection of the back muscles, longer operative time, larger scars, and more bone removal [10]. The current study aimed to assess the results of endoscopic management of lumbar disc herniation.
Overall, the results were excellent for 55.0%, good for 25%, fair for 15%, and poor for 5%, with a statistically significant pain reduction after surgery that continued until the end of the third month after surgery. The complications were in the form of unintended durotomy in 10.0%, nerve injury in 10.0%, and infection in 5.0%. There was mild intraoperative blood loss with a reasonable duration of postoperative hospital stay. These data reflect the efficacy and relative safety of the procedure.

Choi et al. [11] reported that, for full endoscopic interlaminar discectomy, the complication rate was 18.5% [compared to 25.0% in the current study]. Epstein [12] reported that surgeries under direct vision can better distinguish between the nerve root and other tissues. However, nerve root injury remains one of the common complications of full endoscopic lumbar discectomy. In our study, we had two cases [10%] of unintended accidental nerve injury. One was a transient impairment of nerve function causing partial foot drop that improved with physiotherapy, and this patient returned to work and daily activities after three months; the other was a complete nerve injury with foot drop that had not improved after two years of follow-up. In the study reported by Zhou et al. [13], nerve root injury occurred in 1.2% of cases. Choi et al. [14] noted that the working sheath could crush the exiting nerve root during the operation, and thus a prolonged operative time could lead to nerve irritation. Furthermore, motor weakness and temporary dysesthesia have been reported as common complications in percutaneous endoscopic interlaminar discectomy [PELD]. The incidence of complications was 2.00-6.53% according to a previous study by Lee et al. [15].

Other common complications reported in the literature include dural injuries, which are a very serious complication of FILD [16][17]. Patients with small tears may be asymptomatic and may only need bed rest with a pressure dressing. However, patients with larger tears, which can cause sciatica, uncontrolled CSF leakage, and development of a nerve root herniation, will always require secondary open repair surgery [18]. In the current research, we had two cases [10%] of unintended dural injury. One was merely an arachnoid bleb without intraoperative or postoperative CSF leakage; the other was an open dural injury that required open repair in the same session. Ahn et al. [18] reported that nine patients [1.1%] experienced symptomatic dural tears. Lee et al. [19] and Xia et al. [20] reported no intraoperative incidental durotomy or leakage of cerebrospinal fluid [CSF] after surgery. In the series reported by Zhou et al. [13], dural tears occurred in 0.9%. In the series reported by Chen et al. [21], dural tears and CSF leakage were detected in three patients due to adhesions between the calcification of the disc and the nerve root. However, their symptoms improved, and they were discharged after just one week of bed rest.

Recurrent lumbar disc herniation [RLDH] has been reported after different surgeries for lumbar discectomy. Phillips [22] defined RLDH as "disc herniation at the same level with a pain-free interval longer than six months after surgery regardless of whether the herniation is ipsilateral or contralateral". The risk factors include smoking, gender, obesity, and diabetes [23].
In this study, we had a single case [5%] of recurrent lumbar disc prolapse after six months, which was reoperated by an open technique. Kaushal and Sen [24] reported RLDH rates of 5.5%, 5.7%, and 3%. In addition, Joswig et al. [25] reported that recurrent lumbar disc herniation occurred in 28%. Recurrence rates after discectomy vary between 5 and 20%, independent of the technique employed.

Patient satisfaction was evaluated by the Modified Macnab Criteria [MMC] three months after the operation and was excellent in 55%, good in 25%, fair in 15%, and poor in 5%. In the series reported by Oertel et al. [26], patients went back to work within 1.5 months postoperatively, with a range of one up to 20 weeks. Of the patients evaluated by MMC, 83% [45/54] considered their postoperative status excellent, 13% [7/54] good, and 4% [2/54] were not satisfied.

In this study, infection occurred in one case [5%]; the patient had multiple risk factors and was cured by antibiotics with medical improvement. In the series reported by Cao et al. [27], no patient had infections after PELD. In the series reported by Zhou et al. [13], there were no instances of surgical site infection.

In the current trial, there was a significant decrease in pain, both immediately and after three months of follow-up, compared to values before surgery. After three months, 16 patients [80%] needed no medication, 2 cases [10%] used interrupted medication for occasional radicular pain, one case [5%] needed a local steroid injection, and one patient [5%] with RLDH was reoperated after six months. In the series reported by Oertel et al. [26], a significant radicular pain reduction permitted the normal continuation of the patients' daily activities. No pain medication was reported in 89%. However, 6% reported recurrent pain without evidence of recurrent disc herniation or re-stenosis. Another 5% had a recurrent disc herniation during the follow-up period and were subsequently submitted to a second surgical intervention.

Despite the significant advancement of endoscopic methods and instruments leading to successful outcomes comparable to open surgery, surgeons still have some difficulty with PELD. Most concerns relate to inadequate elimination of a disc fragment, the learning curve, the recurrence rate, and radiation exposure. The risk of failure may be a major obstacle to performing PELD. In addition, the PELD procedure and experience can affect the success of the technique. During the steep phase of the learning curve, longer operative times are needed and the incidence of complications may be higher than those reported for more expert surgeons [14].

One of the driving forces behind minimally invasive spine surgery is economics: shorter hospital stays, reduced postoperative morbidity, and quicker recovery times. Depth perception in these techniques comes from experience rather than observation. Hence, surgeons keen to learn these techniques must combine them, during the early phase of learning, with standard procedures in clinical practice [24].

In conclusion, the current study revealed the effectiveness of endoscopic management of lumbar disc herniation. In addition, it is a relatively safe procedure with a low complication rate. Thus, we recommend this technique to replace traditional open surgery, unless there is an absolute contraindication.

Financial and Non-financial Relationships and Activities of Interest

None
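As an illustration of the paired-sample comparison described in the methods (pre- versus postoperative VAS pain scores), a minimal Python sketch follows. The scores below are invented for demonstration and are not the study's data.

import numpy as np
from scipy import stats

vas_pre = np.array([8, 7, 9, 8, 6, 7, 8, 9, 7, 8])    # VAS before surgery
vas_post = np.array([2, 3, 2, 1, 2, 3, 2, 2, 1, 3])   # VAS at 3 months

# Paired t-test: each patient serves as their own control.
t_stat, p_value = stats.ttest_rel(vas_pre, vas_post)
print(f"mean pre  = {vas_pre.mean():.1f} +/- {vas_pre.std(ddof=1):.1f}")
print(f"mean post = {vas_post.mean():.1f} +/- {vas_post.std(ddof=1):.1f}")
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")   # p < 0.05 -> significant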
2021-10-15T00:09:35.918Z
2021-07-01T00:00:00.000
{ "year": 2021, "sha1": "ecc5c1a452454cea1fabead0f0085c87c76e8502", "oa_license": "CCBYSA", "oa_url": "https://ijma.journals.ekb.eg/article_183894_1a2522b02482e3afcdcf7642fb7a39e9.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "fd7c5ef02596f343404fbf833a6b1fc31d701d4f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
55759511
pes2o/s2orc
v3-fos-license
A Preliminary Study: Esterification of Free Fatty Acids (FFA) in Artificially Modified Feedstock Using Ionic Liquids as Catalysts The exploration of non-edible oils as a feedstock has positively affected the economic viability of biodiesel production. Due to the high level of free fatty acid (FFA) in non-edible oils, esterification is needed to reduce the acidity to a minimum level before base-catalyzed transesterification. In this study, 1-hexyl-3-methylimidazolium hydrogen sulphate (HMIMHSO4) was self-synthesized and compared with the commercialized ionic liquid, 1-butyl-3-methylimidazolium hydrogen sulphate (BMIMHSO4). HMIMHSO4 and BMIMHSO4 were characterized by 1H NMR prior to use in the esterification reaction. The reaction was carried out in a batch reactor, and variables such as type of alcohol, oil-to-alcohol molar ratio, temperature, and type of stirring were investigated. The highest conversion for each catalyst was achieved using ethanol as the solvent at a reaction temperature of 343 K and a 12:1 alcohol-to-oil ratio in 8 h of reaction time. BMIMHSO4 showed higher conversion (98%) than HMIMHSO4, which achieved only 82% conversion. Clearly, BMIMHSO4 shows considerable potential to reduce the FFA in the feedstock, as it exhibits excellent catalytic activity owing to its shorter alkyl chain compared to HMIMHSO4.

Introduction

With the development of the global economy and the increase in environmental pollution problems, the energy crisis has become steadily more serious. The environmental problems caused by the use of fossil fuels have also raised great concern, as the carbon dioxide produced from fossil fuels contributes to the greenhouse effect. This has prompted many researchers to search for efficient, safe, and renewable energy sources. Biodiesel, a monoalkyl ester of fatty acids, has recently gained considerable attention as an alternative energy source [1]. It was reported that the use of 100% pure biodiesel could reduce carbon dioxide by 78.5% compared to petroleum-based diesel [2].
Biodiesel can be produced from transesterification of triglycerides or esterification of free fatty acids (FFA). Alcohols, such as methanol and ethanol, are usually used as the acyl acceptor due to their wide availability and low price [3]. Non-edible or low-cost feedstocks usually contain high FFA, which needs to be reduced to less than 1% to prevent saponification of the FFA, especially when alkali catalysts are employed [4]. Thus, esterification has become one of the important pre-treatment processes in biodiesel synthesis. The conventional esterification method is conducted in the presence of a homogeneous acid catalyst [5][6][7]. However, the utilization of these catalysts raises a few drawbacks (i.e., equipment corrosion problems and generation of acidic wastewater from the neutralization process [8]). Since then, different types of catalysts have been developed and investigated in order to obtain higher biodiesel yield. Different types of heterogeneous catalysts have been investigated for the esterification reaction, such as sulfated zirconia [9,10], heteropolyacids [11], and ion-exchange resins [12]. These catalysts successfully solved the problems of equipment corrosion and environmental pollution. However, the preparation of these catalysts is relatively complicated, and they are difficult to recycle, hence contributing to a higher production cost. Therefore, it is necessary to develop environmentally friendly, efficient, and recyclable catalysts able to make the esterification process economical.

In recent years, there has been growing interest in the use of ionic liquids (ILs) as catalysts in biodiesel synthesis. ILs are salts consisting of organic cations and inorganic or organic anions, which are present in liquid form at room temperature or at relatively low temperature (<100 °C). Attractive characteristics offered by ILs include high thermal stability, negligible vapor pressure, and excellent solubility and miscibility with reactants. The uniqueness of this class of catalyst lies in the flexibility of its acidity and basicity, which can be tailored using different types of cations and anions [13]. The acidic or alkaline behavior depends on the type of anion attached to the bulky cation, which can be from Brønsted acid, Lewis acid, or alkali groups. The utilization of ILs as catalysts in biodiesel synthesis has been studied recently [1,8,13]. Fang et al. [14] conducted esterification of FFA using dicationic ILs as catalysts and found that this type of catalyst performed better in terms of catalytic activity compared to monocationic ILs. Guo et al. [15] studied the performance of ILs in the transesterification of Jatropha oil for biodiesel. They found that the addition of metal chlorides to the ILs increases the catalyst's acidic sites, which simultaneously enhances the transesterification reaction.

Recently, Brønsted acidic ILs have become one of the potential catalysts for green biodiesel synthesis. Their catalytic performance has proven to be comparable to, and in some cases better than, that of conventional catalysts. 1-Butyl-3-methylimidazolium hydrogen sulfate (BMIMHSO4) and 1-hexyl-3-methylimidazolium hydrogen sulfate (HMIMHSO4) are Brønsted acidic ILs with an acidic counterion, which influences their catalytic performance in reactions. Both ILs serve as catalysts with good catalytic activity in esterification [8,16]. Elsheikh et al.
[16] used different types of Brønsted imidazolium ILs as catalysts in biodiesel production. They found that the higher acidity of BMIMHSO4 resulted in the highest conversion of crude palm oil (CPO) in a two-stage biodiesel process: 91.2% of the FFA in the CPO was converted in the pre-treatment step prior to the transesterification process.

In this paper, HMIMHSO4 and BMIMHSO4 were used as catalysts in the esterification of FFA using simulated used cooking oil (SUCO) as the feedstock. The reaction was carried out in a batch reactor, and variables such as type of alcohol, oil-to-alcohol molar ratio, temperature, and type of stirring were investigated.

Preparation of simulated used cooking oil (SUCO)

The simulated used cooking oil (SUCO) was prepared by mixing oleic acid and virgin palm oil (refined palm olein) to produce a feedstock with 6% FFA content. The solutions were mixed in a conical flask and stirred with a magnetic stirrer for 10 minutes. The SUCO was then tested for its acid value using the titration method of ASTM D974; the acid value was approximately 12.63 mg KOH/g. Typically, the average FFA content of used cooking oil from industrial or household sources is between 5% and 15% [17].

Ionic liquid preparation

1-Hexyl-3-methylimidazolium chloride (HMIMCl) was dissolved in anhydrous acetonitrile in a 250 mL round-bottom flask fitted with a reflux condenser and magnetic stirrer, under nitrogen purge. The reaction was kept under cold conditions with vigorous stirring. Then, concentrated sulphuric acid (H2SO4) was slowly added to the mixture. The mixture was stirred continuously to ensure that all the components reacted completely. For immediate removal of HCl from the reaction phase, N2 gas was used to flush it. Then, the Brønsted ionic liquid, 1-hexyl-3-methylimidazolium hydrogen sulphate (HMIMHSO4), was washed several times with trimethyl-1-pentene and dried in vacuum for 6 h. It was sent for 1H NMR analysis to determine the purity of the synthesized IL, while BMIMHSO4 was used as received.

Esterification of SUCO

The esterification reactions were carried out in a batch reactor consisting of a 250 mL three-neck flask attached to a reflux condenser. A programmable temperature controller was used to monitor the reaction temperature. Specified amounts of SUCO and alcohol were added to the reactor, and stirring and heating of the reaction mixture were started. When the reaction mixture reached the desired temperature, a known amount of catalyst was added, and this time was taken as the zero time of the reaction. The experiments were carried out for 8 h at a constant stirring rate. A sample tube fitted with a syringe was used to withdraw samples, which were taken periodically from the reactor for FFA analysis.

Determination of acid number

The acid numbers of the samples were determined according to the ASTM D974 standard method, as follows. Firstly, approximately 2 g of sample was weighed and dissolved in 100 mL of a titration solution of toluene, 2-propanol, and water with a volume ratio of 100:99:1. The mixture was then titrated potentiometrically with alcoholic KOH solution. The acid number is the quantity (in mg) of KOH per 1 g of sample required to titrate the sample to its neutral point. The equation used for the acid number determination is presented in Equation (1).
The acid number (AN) was calculated as:

AN = 56.1 × M × (A − B) / W   (1)

where M is the concentration of the KOH solution, A is the volume of KOH used to reach the neutral point, B is the volume corresponding to the blank titration, W is the mass of the sample, and 56.1 is the molar mass of KOH (g/mol).

Determination of % conversion of FFA

The conversion of the FFA was defined as the fraction of FFA that reacted with the alcohol during the esterification process; it was determined from the ratio of acid numbers before and after the reaction, with the FFA content (% FFA) at each point obtained from Equation (2):

% FFA = (A − B) × N × 28.2 / w   (2)

where A is the volume (in mL) of titration solution, B is the volume (in mL) of the blank, N is the normality of the titration solution, and w is the weight of the oil sample in grams.

Effect of different IL catalysts on the esterification of SUCO with ethanol

The esterification reaction starts when the fatty acid accepts a proton (H+) from a proton donor (the catalyst). Then, an alcohol molecule attacks the protonated carbonyl group to give a tetrahedral intermediate. A proton is lost at one oxygen atom and gained at another to form another intermediate, which further loses a molecule of water to give a protonated ester. Finally, a proton is transferred to a water molecule to give the ester [18]. A catalyst is required to accelerate the esterification by promoting the protonation of the carbonyl group of the fatty acid. Two ILs were chosen as catalysts and tested in this study to reduce the FFA content in the SUCO. The activity of an IL catalyst is closely related to its acidity and its solubility toward the substrate. During the initial stage of the esterification reaction, the acidity of the IL plays an important role and depends significantly on the anion characteristics of the IL. In addition, the cation of the IL also plays a crucial role, because the hydrophilicity of the IL can be tuned mainly through the cation. This affects the miscibility of the IL with the ester product, the degree of phase separation, and the reaction efficiency. HMIMHSO4 and BMIMHSO4 were tested in this study and gave the results depicted in Figure 1.

As can be seen in Figure 1, both catalysts showed good conversion of FFA in SUCO under the optimum reaction conditions. However, BMIMHSO4 showed the higher catalytic activity, giving the highest conversion of 97% after 7 h of reaction, whereas HMIMHSO4 was able to reduce only 82% of the FFA in the first 7 h; the rate of reaction with HMIMHSO4 was thus much lower than with BMIMHSO4. Similar observations have been reported in the literature [19,20,21]. The electron-donating ability of the alkyl group increases with increasing alkyl chain length, which lowers hydroxylation and limits the electrophilic attack by the acid. An increasing number of carbon atoms also decreases the polarity, consequently lowering the miscibility of the IL with the ester. Hence, a shorter alkyl chain increases the reaction efficiency. Since BMIMHSO4 showed the best catalytic performance compared to HMIMHSO4, it was used for further experimental work.

The effect of different types of alcohol

Three types of alcohol, i.e.,
ethanol, propanol, and n-butanol, were evaluated to study the influence of the alcohol's alkyl chain on the esterification reaction. The experiment was conducted at 70 °C for 8 h using 5 wt% BMIMHSO4 as the catalyst. As shown in Figure 2, the conversion decreases from 98% to 84% as the alcohol carbon number increases. This could be due to the better miscibility of the fatty acid with shorter-chain alcohols in a homogeneous system. Probably the presence of a double bond in the alkyl chain increases the miscibility of the fatty acid, and as a result the alcohol has more chance to attack the carbonyl group. The electron-donating ability of the alkyl group toward the hydroxyl group increases with increasing alkyl chain length of the alcohol, which lowers hydroxylation and limits the electrophilic attack by the acid. An increasing number of carbon atoms in linear alcohols also decreases the alcohol's polarity, consequently lowering the miscibility of the alcohol and SUCO. Besides that, the conversion of FFA using ethanol was higher than with propanol and n-butanol because mass transfer problems were reduced, owing to the higher solubility of triglyceride molecules in ethanol. This is in agreement with the literature reported earlier by Aghabarari et al. [22] and Ghiaci et al. [23]. Lastly, a further increase in the number of carbon atoms in the alkyl chain resulted in a decrease in the conversion, probably due to steric hindrance restricting the attack of propanol and n-butanol at the carbonyl groups of the triglyceride. The steric component affecting the reactivity is perhaps the decisive factor in acid-catalyzed esterification. Steric hindrance increases with molecular size, inducing electronic repulsion between non-bonded atoms of the reacting molecules. This repulsive hindrance lowers the electron density in the intermolecular region and disturbs bonding interactions. Thus, as the alkyl chain lengthens, its steric effect increases as well, and an increasing steric effect decreases the conversion of FFA. Therefore, ethanol was used as the alcohol for further research.

Effect of type of stirrer on the esterification of SUCO

Mixing is one of the important factors affecting the performance of the esterification reaction, owing to the partial miscibility of oils and alcohol resulting from the polar and non-polar nature of the two reactants, respectively. The immiscibility of ethanol and SUCO leads to a mass transfer resistance in the esterification reaction. The triglyceride mass transfer limitation is due to the small available active specific catalyst surface, which is mainly covered by adsorbed molecules of ethanol. The influence of mass transfer on the esterification reaction may be observed through variation of temperature, molar ratio, and mixing, as the use of different mixing methods results in different conversions. For commercialization purposes, a mechanical stirrer is used because the production scale is high, but for a small-scale study a magnetic stirrer is more convenient, as it is easy to handle. For this preliminary stage, two types of stirrer, i.e., magnetic and mechanical, were evaluated to investigate the difference in terms of FFA conversion. The reactions were run at the same speed, 450 rpm, to isolate the influence of mass transfer resistance; 450 rpm was chosen in this work as it is high enough to mix approximately 200 mL of solution and reduce the mass transfer resistance.
Figure 3 shows the effects of the type of stirrer on the conversion of SUCO into biodiesel. The use of a magnetic stirrer versus a mechanical stirrer did not significantly affect the conversion of FFA: under the same experimental conditions, the maximum conversion achieved with both types of stirrer was 98%. The stirring speeds used were high enough to offset the mass transfer resistance, so it can be concluded that the type of stirrer did not affect the rate of reaction. As both stirrers showed the same performance, the magnetic stirrer was chosen for further experimental work, as it is suitable for small-scale reactions and can help to reduce the production cost [1,8,24,25].

The effect of reaction temperature on the esterification of SUCO with ethanol

Reaction temperature is an important parameter for the esterification of SUCO with ethanol, with higher temperatures always leading to faster rates, together with a shift in the esterification reaction equilibrium toward the product. In order to explore the effect of temperature on the reaction and to find the optimum reaction temperature, the BMIMHSO4-catalyzed esterification of SUCO with ethanol was carried out at different temperatures (338, 343, and 348 K). The results obtained are illustrated in Figure 4.

As is evident from the data depicted in Figure 4, within a certain time range of the esterification reaction, the reaction rate clearly increases with an increase in reaction temperature. However, there is usually no substantial improvement in the product conversion when the reaction temperature is increased further. This result is in agreement with Aghabarari et al. [22] and Ghiaci et al. [23]. At reaction temperatures below 348 K, the conversion increased significantly with increasing reaction temperature: an FFA conversion of 98% was achieved at a reaction temperature of 343 K in 7 h of reaction time. A further increase of the reaction temperature (348 K) did not lead to a significant improvement in the product conversion, indicating that the reaction was close to equilibrium [8,22]. However, the time taken to achieve equilibrium when operating at 348 K was shorter than at 343 K, namely 6 h. Increasing the temperature increases the reaction rate because of the disproportionately large increase in the number of high-energy collisions. The rate of reaction depends strongly on the activation energy: the lower it is, the faster the reaction proceeds. Increasing the temperature does not change the activation energy itself; rather, it increases the fraction of molecules with enough energy to overcome it, so the rate of reaction increases, as illustrated in Figure 4. Taking the energy consumption and the product conversion into account, 343 K was selected as the optimum reaction temperature for the esterification of SUCO with ethanol.
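The temperature effect discussed above follows the Arrhenius relation, k = A exp(-Ea/RT). A minimal Python sketch is given below; the activation energy and pre-exponential factor are arbitrary illustrative values, not parameters fitted to this study.

import numpy as np

R = 8.314        # J/(mol K), gas constant
Ea = 50_000.0    # J/mol, assumed activation energy (illustrative only)
A = 1.0e6        # 1/s, assumed pre-exponential factor (illustrative only)

for T in (338.0, 343.0, 348.0):   # the reaction temperatures tested
    k = A * np.exp(-Ea / (R * T))
    print(f"T = {T:.0f} K -> k = {k:.3e} 1/s")

# The rate constant rises with temperature because a larger fraction of
# collisions exceeds Ea; Ea itself is a property of the reaction, not of T.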
Effect of molar ratio of ethanol to SUCO on the esterification of SUCO

An excess of the reactant ethanol is necessary for the esterification of FFA because it can increase the rate of ethanolysis. In principle, the esterification reaction requires one mole of alcohol for each mole of FFA; in practice, however, a ratio higher than the stoichiometric one is needed to drive the reaction forward. In order to study the effect of the molar ratio on FFA esterification, experiments were conducted at two different molar ratios, 6:1 and 12:1. The resulting FFA conversion versus the molar ratio of ethanol to oil is shown in Figure 5. During the reaction, the concentration of BMIMHSO4 was fixed at 5 wt%, with a 70 °C reaction temperature and 450 rpm agitation speed. The reaction time was set at 480 min.

The results indicated that the conversion increased from 94 to 98% with an increase in the ethanol-to-oil molar ratio, reaching its maximum value at 12:1. Both molar ratios gave good conversion, but the higher amount of ethanol produced a better FFA conversion [8,16,26]. Excess ethanol is required not only to shift the equilibrium in the forward direction but also to wash the active sites. The high amount of ethanol promoted the formation of ethoxy species on the catalyst surface, leading to a shift of the equilibrium in the forward direction in the reaction mixture and thus increasing the conversion of FFA. This is in agreement with the literature, since the esterification reaction is reversible and an excess of ethanol contributes to the esterification of SUCO [23]. In this work, the molar ratio of 12:1 was taken as the optimal ratio, to avoid needless increases in operational expenses from a larger reactor size and additional purification steps.

The effect of reaction time on the esterification of SUCO with ethanol

Reaction time is also an important factor influencing the esterification reaction. Generally, with an increase in reaction time, the reaction equilibrium shifts gradually toward the products, and the conversion is enhanced. The required reaction time depends mostly on the amount of catalyst, the amount of alcohol introduced to the system, and the operating temperature. A reaction that reaches equilibrium conversion in a shorter time is preferable to one that takes much longer [5,7]. To find the optimal reaction time for the esterification, the time course of the reaction was plotted, as depicted in Figure 6.

Figure 6 shows a plot of FFA conversion versus reaction time for the following reaction conditions: 5 wt% BMIMHSO4, 12:1 molar ratio of ethanol to SUCO, 70 °C reaction temperature, and an agitation speed of 450 rpm. As can be seen in Figure 6, the esterification process could be divided into three phases. In the first phase, the substrate SUCO reacted rapidly with the excess ethanol, and more than 67% of the SUCO was converted within 3 h. In the second phase, the conversion gradually increased in the period from 4 to 7 h, and a relatively high conversion of FFA (98%) was obtained at a reaction time of 7 h. In the third phase, the esterification reaction moved to the equilibrium stage, and the conversion showed no improvement at extended reaction times. It was found that when the reaction time exceeds the time required to attain equilibrium, the conversion does not increase significantly with increasing reaction time, in agreement with the results reported by Aghabarari et al. [22] and Ullah et al.
[26]. Therefore, the optimum time needed to produce the highest conversion at the optimum temperature and molar ratio is 7 h.

Conclusions

Two types of ionic liquid were compared for the esterification of SUCO to prepare a feedstock for the transesterification reaction. Three main stages were involved in determining the optimum conditions for the best catalyst. Firstly, two ILs, HMIMHSO4 and BMIMHSO4, were compared. The results show that both catalysts have potential for use in the esterification reaction. The conversion with BMIMHSO4 was 98%, which was higher than that with HMIMHSO4 (approximately 82%); thus, BMIMHSO4 was selected for further experimental work. The experimental work continued with the second stage, comparing the performance of different types of alcohol. It showed that the esterification of fatty acids using ethanol gives better performance than propanol and n-butanol. This is because the miscibility of ethanol with the SUCO is higher, as ethanol has the shortest alkyl chain of the three. Using BMIMHSO4 and ethanol, a few variables were varied in the final stage (i.e., type of stirrer, temperature, molar ratio, and reaction time) to find the optimum conditions. Therefore, using BMIMHSO4, ethanol, and a magnetic stirrer, the optimized reaction conditions for the process are a 12:1 ethanol-to-SUCO mole ratio, 7 h reaction time, and 343 K. From the results, it was concluded that BMIMHSO4 shows excellent catalytic performance as a catalyst in the esterification of highly acidified oil, owing to its shorter alkyl chain and better miscibility with the alcohol. BMIMHSO4 also has the potential to produce low-cost biodiesel from low-cost feedstocks, in addition to being environmentally friendly.
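To make the titration calculations concrete, the following Python sketch implements the acid-number formula of Equation (1) and the acid-number-ratio definition of FFA conversion. The sample mass and titrant volumes are illustrative values only, chosen so that the initial acid number is close to the measured 12.63 mg KOH/g.

KOH_MW = 56.1   # g/mol, molar mass of KOH

def acid_number(M, A, B, W):
    """Acid number (mg KOH per g of sample), per Equation (1):
    M = KOH concentration (mol/L), A = titrant volume for the sample (mL),
    B = blank titrant volume (mL), W = sample mass (g)."""
    return KOH_MW * M * (A - B) / W

def ffa_conversion(an_initial, an_t):
    """FFA conversion (%) from the ratio of acid numbers before and
    after esterification (the 'acid number ratio' in the text)."""
    return 100.0 * (an_initial - an_t) / an_initial

an0 = acid_number(M=0.1, A=4.5, B=0.1, W=2.0)   # ~12.3 mg KOH/g (raw SUCO)
an8 = acid_number(M=0.1, A=0.2, B=0.1, W=2.0)   # after esterification
print(f"initial AN = {an0:.2f} mg KOH/g, final AN = {an8:.2f} mg KOH/g")
print(f"FFA conversion = {ffa_conversion(an0, an8):.1f}%")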
2019-04-07T13:06:30.220Z
2016-08-20T00:00:00.000
{ "year": 2016, "sha1": "3e0fb94f251c0188d31ddee1f1ba50383081976f", "oa_license": "CCBYSA", "oa_url": "https://ejournal2.undip.ac.id/index.php/bcrec/article/download/549/425", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "3e0fb94f251c0188d31ddee1f1ba50383081976f", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry" ] }
219730331
pes2o/s2orc
v3-fos-license
Gender and Trait Preferences for Banana Cultivation and Use in Sub-Saharan Africa: A Literature Review Understanding trait preferences of different actors in the banana value chain may facilitate the selection and adoption of new cultivars. We systematically reviewed the scholarly and gray literature on banana trait preferences, with specific attention to studies that document gender-differentiated traits. Of 44 publications reviewed, only four reported gender-specific trait preferences, indicating a significant gap in the literature. The review found that banana farmers, irrespective of gender, value similar characteristics that are related to production constraints, income enhancement, consumption, and cultural or ritual uses. Farmers (as producers, processors, and consumers) often prefer traditional cultivars because of their superior consumption attributes, even if new cultivars have better agronomic and host plant resistance characteristics. Potential differences between trait preferences of farmers and other actors in the value chain should be accounted for to enhance marketing potential. Gender-specific research along the banana value chain and engaging users at the initial stages of breeding can ensure that new cultivars are acceptable to users and may improve adoption. Interdisciplinary teamwork is essential for an efficient and effective breeding program.

Background

In 2018, around 155 million metric tons of banana were produced around the world, of which 27% came from sub-Saharan Africa (SSA) (FAOSTAT 2020). The majority of this production comes from small plots and backyard gardens. The highest per capita consumption of banana in the world is in the East African highlands, where one-third of the people depend on this crop as a staple food; the crop occupies between 20 and 30% of the acreage under cultivation. In Uganda, millions of people rely on banana for income and daily food, with approximately 75% of farmers cultivating banana (Ochola et al. 2013). Over the past decades, new banana cultivars have been introduced across SSA to alleviate declining yields, contribute to household food security, and improve livelihoods (AATF 2009; Aïtchédji et al. 2010; Dzomeku et al. 2007; Gaidashova et al. 2008; Lemchi et al. 2005a, b; Nowakunda et al. 2015; Ortiz et al. 1997; Pedersen 2012; Swennen et al. 2000; Uazire et al. 2008). Adoption rates of introduced banana cultivars are often low compared to their economic importance, and rates are lower than those of other staple crops (ISPC, SPIA 2014; Ortiz 2011; Walker and Alwang 2015). Studies that report adoption rates for new banana cultivars in SSA are scarce (De Weerdt 2003; Faturoti et al. 2006, 2009; Kagezi et al. 2012; Nkuba 2007). Reasons given by farmers for low uptake include inferior taste, poor marketability compared to local cultivars, and risks associated with growing new cultivars (Kagezi et al. 2012). Farmers indicate preference for local cultivars because of their superior consumption attributes (good taste, soft food texture, good aroma, and good/yellow color), even if new cultivars have better agronomic traits and better response to biotic and abiotic stresses (Akankwasa et al. 2013b; Barekye et al. 2013; Nwachukwu and Egwu 2008). Understanding trait preferences of farmers, consumers, and other value chain actors is a first step for developing a demand-driven breeding program.
Developing new cultivars and their subsequent dissemination and adoption is a complex process that starts with setting breeding objectives and developing a selection strategy for priority traits. Such a consultative process requires open dialog and collaboration between plant breeders, other researchers including social scientists, farmers, and other users such as traders and consumers, to understand the needs and preferences of different users, the traits concerned, and their importance (Christinck et al. 2005). Collecting trait information according to the role and position that an actor occupies in the value chain, as well as gender-specific information, yields wide-ranging and relevant knowledge about cultivars, their traits, and specific uses. The needs and preferences of men and women end-users intersect with various socio-economic and cultural factors at the individual (e.g., age, marital status), household (e.g., wealth), and community (e.g., culture, ethnicity) levels. These factors affect the adoption of new banana cultivars. Farmer preferences may not be the same as those of market traders and consumers (Ferris et al. 1997). Knowledge of the traits that various end-users prefer will enable researchers and farmers to produce marketable cultivars with acceptable attributes (Mugisha et al. 2008). The objective of this review article is to identify trait preferences reported by farmers and other actors in the banana value chain in SSA. We privilege gender-specific differences in trait preferences and the extent to which preferences can set breeding priorities, in order to focus on the importance of gendered knowledge in improving food security and banana-based livelihoods. Results will be discussed with the objective of informing future banana breeding research on trait preferences that consider gender-specific needs while developing product profiles for new cultivars.

Methods

We accessed English-language publications in both the scholarly and gray literature from the Musalit (www.musalit.org, a repository of references on banana) and CG Space databases, using search terms that include: banana, attribute, trait, gender, preference, choice, priority(ies), end use, desirable, improved variety(ies); refer to Electronic Supplementary Material (ESM) File 1: Table S1 for a full list of the search terms. The main inclusion criterion was that the publication identified and documented banana trait preferences or cultivar preferences by end-users. The initial screening filtered articles based on a review of their titles and abstracts using the inclusion criteria and generated 3489 articles (including duplicates). After the first round of screening, irrelevant articles were excluded. The remaining 86 research articles were screened again with a full-text reading. We then used reference snowballing to identify additional articles that the original search had missed. We excluded articles that did not meet the criteria at the full-document reading. We present end-users' trait preferences according to the specified "trait" as well as the "trait state." Trait refers to a feature, attribute, or quantifiable measurement that can be described (e.g., taste, bunch size), while trait state refers to the observed or experienced state of the trait (e.g., sweet taste, big bunch). For breeders, "trait refers to a genetically determined characteristic that is associated to a specific phenotype." The phenotype is controlled by its genotype (G), the environment (E) where the plant grows, and the G × E interaction (Bechoff et al. 2018, 8-9).
Results

Results presented below are based on a full, analytical review of 44 articles published between 1994 and 2018. The reviewed articles represent ten country experiences (Table 1), the majority from Eastern Africa (70%). Overall, 45% of the articles were from Uganda. Results were differentiated according to the four banana uses common in SSA: cooking, beverage/beer, dessert, and plantain. Karamura et al. (2012) and Swennen and Vuylsteke (1991) provide detailed descriptions of these types. Key information vis-à-vis the geographic location of the study, the data collection method, the type of banana being studied, and the end-users' banana trait preferences specified in the studies is presented in ESM (Electronic Supplementary Material) File 1: Table S2.

End-users are likely to prioritize different traits depending on factors that may include: role in the value chain (which may be gendered), end use of the crop (determined by cultivar characteristics), environmental constraints, geographic location, individual and household characteristics, and cultural factors. The list of traits is long, making prioritization for breeders challenging. Using a summary of preferred attributes, we grouped banana traits into the five abovementioned categories (Table 2). When available, we provide country-specific details or nuance to the specified traits in the corresponding table narrative. Several of the studies document end-users' preferences in order of importance or highlight priority traits (Table 3, General Ranking of Banana Cultivars Irrespective of Type section). A discussion of the rankings and classification of the traits' importance is provided for each banana type (if a study exists), providing breeders with additional information on the mentioned traits (Cooking Bananas, Beverage/Beer Bananas, Dessert Bananas, Plantains sections). For all banana types, end-users mention common preferred traits linked to production constraints, particularly host plant resistance to pests and diseases, high yield to ensure food security and surplus production, high market demand, and price.

FARMERS' TRAIT PREFERENCES FOR BANANA

In their roles as producers, processors, marketers, and consumers of banana, farmers and farming households prefer a large range of traits.

Cooking Bananas

There are regional differences in preferred texture for cooking bananas; for example, farmers in Uganda prefer soft matooke cultivars (Akankwasa et al. 2013b, 2016; Barekye et al. 2013; Nowakunda et al. 2000; Nowakunda and Tushemereirwe 2004; Rutherford and Gowen 2003). In some parts of Tanzania, cultivars with a hard texture are preferred (Kibura et al. 2010). Characteristics include post-harvest attributes related to processing and value addition. Farmers prefer multi-purpose cooking cultivars that also produce juice and beer (Gaidashova et al. 2005; Nkuba 2007; Rutherford and Gowen 2003). Women value the cultural importance of banana in birthing ceremonies and food preparation, while men emphasize its use at funerals (Musimbi 2007). In one Ugandan study, women indicated that they preferred their traditional cooking cultivar "Katetema" because of its cultural values (Musimbi 2007). Farmers mention a preference for cultivars that ensure normal sugar levels after eating them (Dzomeku et al. 2008). Consumption traits, such as good food quality, good taste, soft food, and good flavor, ranked high in Uganda (Akankwasa et al. 2013a, b; Barekye et al. 2013; Nasirumbi et al. 2018; Ssali et al. 2010).
Beverage/Beer Bananas

Beverage bananas are used in the production of juice, local beers, and local gin. Trait preferences are related more to the products than to the plant itself or its fruits. Astringency, a characteristic of East African highland banana (EAHB) beer cultivars, is preferred for beverage and medicine production (Karamura et al. 2004). Farmers prefer cultivars that can be continuously de-leafed to provide leaves for steaming food, wrapping, and sale without damaging the plant (Rubaihayo 1991), as well as cultivars that produce palatable food in times of food shortage (Musimbi 2007; Rubaihayo 1991).

[Table 3 excerpt (Dowiya et al. 2009): criteria for selecting banana planting material included early maturity of plants and the ability of mats to perpetuate for a long period. Farmers also ranked the best cultivars for beer production, the most productive cultivars in terms of bunch size and land allocation, and the cultivars with the best taste/flavor; rankings differed between North and South Kivu, DRC. Across all regions, the top criteria were: 1. flavor/taste; 2. juice quality; 3. resistance to disease; 4. bunch size. By region (ranks in North Kivu and South Kivu, respectively): resistance to pests (1, 9); bunch size (2, 4); flavor, taste, and juice production (3, 1); adaptation to poor soil fertility (4, 6); short production cycle (5, 8); sustainable production (6, 7); availability of planting material (7, 2); market demand/prices (8, 3); tolerance to drought (ranks not recovered).]

Dessert Bananas

Organoleptic and market-related attributes are key, since dessert bananas are eaten raw and often sold (Ayinde et al. 2010; Kibura et al. 2010; Kwach et al. 2000; Mugisha et al. 2008; Ocimati et al. 2014; Uazire et al. 2008). Dessert bananas are often used for producing beverages (juice and wine) and snacks, hence characteristics related to the quality of the processed products are mentioned.

Plantains

Plantains are typically processed through boiling, roasting, deep frying, and pounding to make food, chips, flour, and biscuits, among others (Ekesa et al. 2012; Ubi et al. 2016). Traits related to the appearance of the fruit before processing and product attributes are mentioned. The pulp of "Apem" (small-fruit French plantain) is favored for a dish called Ampesi (in which pulp segments are boiled until soft), as it is crispier, firmer, tastes better, and gives the best mouth feel compared to other cultivars in Ghana (Dadzie and Wainwright 1995). In Cameroon, farmers ranked attributes in order of importance as follows: bunch size, fruit length/weight, taste/softness of pulp, and early maturity (Mengue Efanden et al. 2003). Long banana mat perpetuation is preferred (Mengue Efanden et al. 2003). Rwandese farmers reported early maturity as an important criterion (Ocimati et al. 2014, 2016).

General Ranking of Banana Cultivars Irrespective of Type

Differences in trait rankings based on geographical location and the type of banana grown exist (Table 3). Gold et al. (2002) showed regional differences in the relative importance of banana cultivar selection criteria in Uganda. Principal component analysis (PCA) revealed that farmers preferred drought tolerance, marginal soil tolerance, and longevity (as determined by the first principal component, PC1), which together describe a robust cultivar that grows as a perennial with fewer inputs (i.e., a labor-saving cultivar). In the second principal component (PC2), ripening and post-harvest characteristics were preferred (bunch size, taste, maturation, marketability). Dowiya et al.
Dowiya et al. (2009) found regional differences in the selection of banana planting material. Farmers' selection criteria also reflect the major challenges faced in banana production. In areas where soil fertility is low, or where incidences of pests and diseases are high, adaptability to low soil fertility and resistance to pests and diseases would be critical selection criteria (Ocimati et al. 2016).

TRAIT PREFERENCES OF OTHER ACTORS IN THE BANANA VALUE CHAIN (OUTSIDE THE FARMING HOUSEHOLD)

Consumers and traders have their own trait preferences. For cooking bananas and plantains, consumer trait preferences are determined by the product type and processing method (Dadzie and Wainwright 1995; Dury et al. 2002; Dzomeku et al. 2006, 2008). Consumers indicate a preference for cultivars whose fruit are firm and crunchy when boiled and soft for fufu preparation. At the time of purchase, consumers prefer cooking bananas with big bunches and big, fresh fruits. Price and the type of biotechnology used to produce the planting material are also taken into consideration (Kikulwe et al. 2011; Nalunga et al. 2015; Pillay and Tenkouano 2011). With respect to dessert bananas, consumers prefer yellow skin color, light yellow pulp color, no spots on the peel, and soft, sweet, firm fruits with an easily separable skin and easily detachable fruit. Urban consumers in Nigeria purchase dessert banana fruits according to taste, fruit size, number of fruit per hand (cluster of fruit), texture, aroma, and a shelf life of 9 to 12 days (Ayinde et al. 2010). With respect to plantain, fruit shape, fruit size, aspect of the fruit, ripening/maturity stage, bunch size, and good textural qualities after cooking (depending on product and processing method) are desired attributes (Dadzie and Wainwright 1995; Dury et al. 2002; Dzomeku et al. 2006, 2008; Kouamé et al. 2015). Kouamé et al. (2015) found that for urban consumers in Côte d'Ivoire, the plantain ripening/maturity stage used to prepare different foods was more important than other physical attributes. Traders prefer cooking bananas with big bunch sizes, long fruit, more fruit per bunch and per hand, big hands, more hands per bunch, and compact bunches for easy transportation. With respect to appearance, traders prefer cooking cultivars with a good fruit sheen and a pale green fruit color. Other commercial and market-life attributes for cooking banana include hands and fruits that do not easily fall off (fruit drop), gradual ripening down the bunch, no easy bruising or quick wilting, a long shelf life, a low rate of sheen loss, and price (Akankwasa et al. 2013b; Nalunga et al. 2015; Ssemwanga 1995). For plantain, traders prefer a large bunch size (Dadzie and Wainwright 1995; Dury et al. 2002; Dzomeku et al. 2006, 2008; Kouamé et al. 2015).

GENDER-DIFFERENTIATED BANANA TRAIT PREFERENCES

Literature on gender-differentiated trait preferences in the banana value chain is scarce. Although several studies focus on end-users' banana trait preferences (N = 44), only four of these provided some form of gender-specific data (Edmeades et al. 2004; Miriti 2013; Musimbi 2007; Nasirumbi et al. 2018). Table 4 presents the review's gender-specific trait preference findings. These studies documented farmers' preferred traits for cooking and dessert bananas but did not examine trait preferences directly, except Nasirumbi et al. (2018). Rather, the focus was on household cultivar demand, gender-responsive strategies in banana production, and the impact of gender on adoption.
Statistical differences in the importance of banana traits between men and women banana farmers were found for cooking quality (taste, color, softness), beer quality, and resistance to Fusarium wilt. The differences were attributed to underlying preferences based on gender roles: beer production for men and cooking for women, respectively (Edmeades et al. 2004). Miriti (2013) provided male and female farmers' preference rankings for banana cultivars but did not specify the traits they preferred in the different cultivars.

Discussion

This article contributes to general knowledge of banana trait preferences, including gender. It illuminates the needs and preferences of farmers and banana value chain actors that can be used to orient "product profiles" for new banana cultivars of the different banana types in different ecologies, recognizing the significance of gender-specific trait preferences (Weltzien et al. 2020). Product profiles are used for priority setting for breeding cultivars of matooke and mchare cooking bananas, types popular in Uganda and Tanzania, and are publicly available (http://breedingbetterbananas.org/wp-content/uploads/2018/07). At the time of submitting this manuscript, the product profile for plantain was not yet publicly available (pers. comm. R. Swennen, November 2019; rony.swennen@kuleuven.be). There is no published product profile for beer or dessert bananas. The published banana product profiles include some of the traits identified above and could be expanded. Product profiles mostly include production- and adaptation-related traits, such as pest and disease resistance, suckering ability, early maturity, tolerance to drought, and resistance to wind (through plant height). Traits not currently included in profiles comprise: agronomic attributes (e.g., adaptability to poor soils); processing traits related to value addition (e.g., size and shape attributes, such as uniform fruit size, straight fruit for ease of peeling, and compact bunches for easy transport); and social and cultural traits, i.e., plant parts that can be used for multiple purposes (e.g., banana leaves for use in food preparation or roots for medicines). End-users also mention contrasting traits, such as big bunches for the market and small bunches for home consumption. Although sensory/organoleptic/consumption traits are included in product profiles, they are categorized under one umbrella and treated as a single trait: "table quality/palatability." There might be a need to separate the traits in this category, as color, taste, and texture are highly complex characteristics. For example, traits such as "good textural quality after cooking and suitability for various uses" or "firm and crunchy when boiled and soft for fufu preparation" indicate specific demands from consumers, which may need to be incorporated into the product profile. Such consumption and processing attributes are poorly understood in terms of assessment (measurement), inheritance, and their physicochemical nature. Physicochemical characterization, molecular assessments, and interdisciplinary work with food scientists and geneticists would increase the options for including such traits. The relative priority of different traits in new cultivar design is an important part of the breeding process. The review presents a long list of traits for each banana type, requiring trait prioritization to set breeding goals and objectives. It is not feasible to include all trait preferences in a banana breeding program due to limited resources and time.
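As one possible, hedged illustration of such prioritization, the sketch below aggregates ranked trait lists from different end-user groups with a simple Borda count. The two rankings are hypothetical (loosely echoing the regional lists of Dowiya et al. 2009), and a Borda count is only one of several defensible aggregation rules; it is not a method used in the reviewed studies.

```python
# Borda-count aggregation of ranked trait lists from several groups.
# Rankings below are hypothetical placeholders, not survey results.
from collections import defaultdict

rankings = {
    "group_1": ["pest resistance", "bunch size", "taste/juice", "poor-soil tolerance"],
    "group_2": ["taste/juice", "planting material", "market demand", "bunch size"],
}

scores = defaultdict(int)
for ranked in rankings.values():
    n = len(ranked)
    for position, trait in enumerate(ranked):
        scores[trait] += n - position    # first place earns the most points

for trait, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(trait, score)
```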
A decision-tree analysis involving the critical actors in the value chain is one way to prioritize traits and address conflicting factors (Shimelis 2017).

BANANA TRAIT PREFERENCES

Different banana types share common preferred traits linked to production constraints, such as resistance to pests and diseases, high yield, and high market demand/prices. The review found more preference studies for cooking and plantain types than for dessert and beer banana types. Cooking and plantain types share several common traits, especially related to processing and consumption (appearance, texture, and flavor attributes), as both are "cooking" types. Dessert and beer bananas are processed into juice and other beverages that may contribute to household income, hence traits related to yield, flavor, taste, and quality of beverage products are mentioned.

[Table 4 excerpt. Trait preferences mentioned by women: high suckering ability, early maturity, adaptability to poor soils, leaves that can be used for other purposes (e.g., cooking), and cultivars with both cash and food value (Cavendish "Lacatan" dessert and "Uganda green" cooking). Trait preferences mentioned by men: good food taste, good food color, and commercial dessert Cavendish types ("Valery" and "Grand Nain"). Trait preferences mentioned by both men and women: cultural use (women specifically mention uses at birth ceremonies while men mention funerals), resistance to weevils and black leaf streak, big bunches, big fruits, tolerance to drought, tolerance to poor soils, maturity period, good taste, good food color, rich flavor, soft texture, and a deep yellow color when steamed.]

Traits like fruit length appear to be less important for marketing, as each size has its own market, indicating these traits have a wide range of acceptable states. This review found that traits such as host plant resistance (e.g., to black leaf streak, Fusarium wilt, and weevils), abiotic stress tolerance (e.g., short plants and strong root systems to avoid wind damage), superior agronomic performance (a heavy bunch with big fruit sizes), and vegetative propagation (as related to suckering behavior) are traits that farmers mention and prefer, and are those that breeders target in their programs (Brown et al. 2017). While there appear to be some common priority attributes, cultivars are more likely to be selected if they are better adapted to a region's agro-climatic conditions and local farming systems and show resistance to prevailing pests and pathogens. Superior consumption attributes, such as taste, flavor, pulp color, and other post-harvest fruit traits (e.g., pulp texture, shelf life), appear to be of critical importance for cultivar preference and thus for the adoption of new banana cultivars. Farmers in different regions, however, prefer different banana types for different end uses and cultural events, and hence prioritize different consumption- and use-related traits. For example, farmers in some regions of Tanzania prefer cooking banana types with a hard texture (Kibura et al. 2010; Pedersen 2012), like mchare, whereas in Uganda consumers prefer EAHB cooking cultivars that make soft food. Edmeades et al. (2004) and Tenkouano et al. (2010) argue that ethnicity strongly influences some of the preferences for organoleptic attributes, such as taste, color, and feel of food. A small number of studies focused on the trait preferences of other actors in the banana value chain, indicating that cultivars should be marketable (high market demand) and have traits that other value chain actors (traders, processors, consumers) prefer.
Farmers sell surplus to consumers and traders in local, urban, or regional markets, where preferences are likely to differ by region. Consumers who are not producers are versatile: their consumption patterns depend on what is available in the market. They may substitute or switch to a different product (e.g., rice or potato) if the preferred banana cultivar is not available. Additionally, urban consumers' household demand for a certain product depends on income level. In view of this high diversity of demands for a wide range of traits, banana breeding programs need reliable, detailed information about the agronomic, use-, and market-related trait preferences of their potential customers. This information can help to identify traits and trait complexes that are important for a large proportion of priority customers. Breeding programs can target improvements for such priority traits, and thus improve the chances that a new cultivar will benefit a large proportion of farmers, farm families, and possibly other consumers. In addition, there is a need to holistically understand the traits and the diverse factors that may affect preferences and, eventually, other factors that influence adoption. Thus, revisions of product profiles need to reflect the available understanding of consumer demands in terms of trait combinations and acceptable trade-offs, including gender impacts. As available evidence and trends that could lead to changes are scarce, breeding programs would benefit from well-targeted, forward-looking consumer studies.

GENDER-DIFFERENTIATED TRAIT PREFERENCES

The review found that male and female banana farmers often have similar production constraints or common goals, such as food security or ceremonial uses, and that men and women might both prefer cultivars with big bunches and fruit with commercial value. Musimbi (2007) found that women mentioned traits related to production (high suckering ability and early maturity), whereas men emphasized consumption-related traits (good taste and color). Women preferred high-suckering cultivars given the potential to earn higher income from selling suckers. We contend that potential differences in preferences, which are not specifically discussed in the reviewed studies, might stem from the different roles that men and women play in the banana value chain, e.g., cooking attributes for women and beer production for men (Edmeades et al. 2004). Women are traditionally responsible for food preparation and the processing of banana (Musimbi 2007), whereas men are involved in the preparation of juice that can be fermented to produce local beer (Edmeades et al. 2004; Musimbi 2007; Nkuba 2007). Studies done in Kenya and eastern Uganda indicate that women predominantly participate in ripening and marketing activities at local markets, along roadsides, in trading centers, and at nearby schools (Miriti 2013; Musimbi 2007), whereas men sell at organized markets or farm gates, which might lead to differences in preferences. Differences in production goals may lead to varied preferences. Men and women might also face different constraints, e.g., related to mobility, information, and inputs, that may affect the adoption of new cultivars (Christinck et al. 2017; De Weerdt 2003; Musimbi 2007). Such constraints need to be addressed when designing breeding programs. Farmers often associate new cultivars with increased labor burdens.
Musimbi (2007) notes that the introduction of FHIA banana cultivars required digging bigger holes, using more crop residue, farmyard and animal manure, and de-leafing in order to produce big fruit. This increased work burden might have different negative implications for men and women farmers, depending on their roles and responsibilities in the production system. Gender roles, constraints, opportunities, and preferences are not static. Information on trait preferences, though, is often collected at only one point in time and may not capture ongoing changes in gender relations. Recognizing such change will support the targeting of breeding programs towards the needs of their priority customers, taking socioeconomic and agronomic factors, such as geographic location, gender, ethnicity, culture, age, and their interactions, into account. Overall, the few studies with gender-specific information indicate that it is essential to capture the gender-differentiated preferences of actors across the whole value chain to improve the chances that new cultivars will be adopted and generate maximum benefit.

Conclusion

The finding that farmers often prefer traditional cultivars because of their superior consumption attributes, even when new cultivars have better agronomic and host resistance characteristics, is a recurring theme of the reviewed studies. Using local germplasm to produce new cultivars can potentially improve acceptance rates, especially as these cultivars would meet farmers' and consumers' preferences for taste, color, and processing-related traits. Understanding early on what end-users and farmers want in cultivars can assist breeders with appropriate targeting of efforts. Bridging the divide between farmers and breeders is one way to ensure that new cultivars have farmers' desired traits, which might lead to faster adoption. Sustained interaction between breeders; other researchers such as pathologists, agronomists, food scientists, social scientists, and entomologists; farmers; and other value chain actors is necessary to understand the local context and exchange vital information for an efficient and effective breeding program. Interdisciplinary teams can build "product profiles" for improved banana cultivar types that may be highly acceptable to well-targeted farmers in specific prioritized growing regions (Ragot et al. 2018). Preference studies provide entry points for discussions that prioritize targets for the improvement of specific traits, and priorities for selection in the short and longer term. This can contribute significantly to enhancing the efficiency of a breeding program by improving the chances that the new cultivar will be adopted by farmers and contribute to improving livelihoods. A cultivar product profile based on the priority needs and preferences of priority end-users can be the basis for developing an effective breeding strategy. The reviewed publications contribute only partial information to building such profiles. Research documenting the successes and failures of past cultivar releases, adoption rates of introduced banana cultivars, adoption rationales, and a better understanding of farming production and seed systems remains scanty or missing. Collecting such information and understanding why some cultivars are more popular than others is recommended. Popular cultivars are more likely to have traits that end-users prefer, which can help guide breeding programs on what traits to target.
Banana breeders need quantifiable information on trait preferences and guidance to set priorities for selection. Traits ranked in order of importance by end-users can provide useful information to help banana breeding teams adapt and revisit product profiles and breeding priorities (Ragot et al. 2018). Finally, the review did not find research that evaluated gender differentiation from a value chain perspective. However, as men, women, and other social groups such as traders might have gender-specific knowledge of the production, processing, or consumption of particular cultivars, a gender approach can improve the efficiency of a breeding program by contributing to the development and refinement of breeding product profiles (Christinck et al. 2017; Ragot et al. 2018).

Availability of Data and Materials

All the papers used in this review are included in the reference section below.

Authors' Contributions

PM conducted the review and was involved in designing the methodology and writing the manuscript. CC, IV, and RC were involved in designing the methodology and writing the manuscript. EW, RO, CC, and RT were involved in revising the manuscript. All authors of this review agree to its publication.

Funding Information

The CGIAR Consortium through the CGIAR Collaborative Platform for Gender Research, the CGIAR Research Program on Roots, Tubers, and Bananas, and the International Institute of Tropical Agriculture (IITA) through the Breeding Better Bananas project funded this review.
Structural and Biological Evaluations of a Non-Nucleoside STING Agonist Specific for Human STING A230 Variants

Previously we identified a non-nucleotide tricyclic agonist, BDW568, that activates the human STING (stimulator of interferon genes) gene variant containing A230 in a human monocyte cell line (THP-1). STING A230 alleles, including HAQ and AQ, are less common STING variants in the human population. To further characterize the mechanism of BDW568, we obtained the crystal structure of the C-terminal domain of STING A230 complexed with BDW-OH (the active metabolite of BDW568) at 1.95 Å resolution and found that the planar tricyclic structure of BDW-OH dimerizes in the STING binding pocket and mimics the two nucleobases of the endogenous STING ligand 2',3'-cGAMP. This binding mode also resembles that of a known synthetic ligand of human STING, MSA-2, but not that of another tricyclic mouse STING agonist, DMXAA. Structure-activity-relationship (SAR) studies revealed that all three heterocycles in BDW568 and the S-acetate side chain are critical for retaining the compound's activity. BDW568 could robustly activate the STING pathway in human primary peripheral blood mononuclear cells (PBMCs) with the STING A230 genotype from healthy individuals. We also observed that BDW568 could robustly activate type I interferon signaling in purified human primary macrophages transduced with lentivirus expressing STING A230, suggesting its potential use to selectively activate genetically engineered macrophages in macrophage-based approaches, such as chimeric antigen receptor (CAR)-macrophage immunotherapies.

Introduction

The cyclic GMP-AMP synthase (cGAS)-STING pathway is crucial for recognizing self or foreign double-stranded DNA and activating type I interferon (IFN-I) signaling via the interferon regulatory factor 3 (IRF3) axis [1,2]. This pathway is indispensable for the innate immune responses of mammalian cells during bacterial or DNA virus infections [3,4]. Pharmacological STING activation has been demonstrated as a promising potential approach for the treatment of cancer and virus infection [5-10], as well as for vaccine adjuvants [11-13]. Human STING has genetic polymorphisms. Recent genetic analysis demonstrated that the most common STING allele is R71-G230-R293 (RGR), found in 59.2% of the human population, followed by H71-A230-Q293 (HAQ), which occurs in 20.4%; H232 in 13.7%; A230-Q293 (AQ) in 5.2%; and Q293 in 1.5% [14]. The STING-HAQ variant was found to negatively affect innate immunity in humans compared to the most common STING allele (RGR) [14-17], while others claimed that the difference is insignificant [18]. To our knowledge, almost no reported human STING agonists can discriminate among STING variants, except for BDW568 (Figure 1) [5-7,19,20]. BDW568 is a STING agonist newly identified by our laboratory that can selectively activate STING A230 variants [21], including HAQ and AQ in the human population. No activity of BDW568 against G230 STING was observed, suggesting nearly complete selectivity among naturally occurring STING variants [21]. BDW568 is a methyl ester and is quickly hydrolyzed after cellular uptake by cytosolic carboxylesterase 1 (CES1) to yield the active metabolite BDW-OH (Figure 1) [21]. Here, we report the crystal structure of the STING A230 C-terminal domain (CTD)-BDW-OH complex and structure-activity-relationship studies.
We also validated the activity of BDW568 in primary human cells from healthy individuals carrying the endogenous STING A230 allele, and in purified primary macrophages transduced with exogenous STING A230.

Results and Discussion

Crystal structure of the STING A230 CTD-BDW-OH complex

To acquire structural information on the STING A230-BDW-OH complex, we expressed a small ubiquitin-related modifier (SUMO) fusion of the human STING A230 CTD (residues 155-341) and co-crystallized the protein with purified BDW-OH or with the endogenous STING ligand, 2',3'-cyclic GMP-AMP (cGAMP). The crystal structures of the STING A230 CTD complexed with BDW-OH and 2',3'-cGAMP were obtained at 1.95 and 2.01 Å resolution, respectively (Table S1; PDB: 8T5K, 8T5L). Comparing these two structures, we found the overall conformation of the STING A230-BDW-OH complex to be similar to that of the STING A230-2',3'-cGAMP complex (Figure 2). The STING A230 CTD forms a butterfly-shaped dimer. Unlike the apo form of STING, in which the two "wings" of the two STING monomers are wide open, the STING-ligand complexes adopt a closed conformation (Figures 2A and 2B) [23]. The distance between residues H185 of the two monomers is shown to illustrate the open (~47 Å) and closed (34-37 Å) conformations. As expected, two molecules of BDW-OH occupy the 2',3'-cGAMP binding pocket of the STING A230 homodimer in a symmetric manner. The carboxylic acid group of BDW-OH engages R238/R232 of STING via hydrogen bonding, and the pyrimidine ring of BDW-OH also forms a hydrogen bond with T263 (Figure 3A). In addition, the triazole ring of BDW-OH forms a π-π stacking interaction with Y167 of STING (Figure 3A). These ligand-STING interactions were also observed in the structures of STING complexed with 2',3'-cGAMP (Figure 3B) and MSA-2 (PDB: 6UKM) [5], respectively. Overlay of BDW-OH and 2',3'-cGAMP revealed that the tricyclic structures of BDW-OH lie in the same plane as the two nucleobases of 2',3'-cGAMP, maintaining the π-π stacking with Y167 (Figure 3C). Similarly, the tricyclic structure of BDW-OH overlaps well with the benzothiophene ring of MSA-2. Importantly, both BDW-OH and MSA-2 use the carboxylic acid to engage R238/R232 in similar conformations (Figure 3D). Therefore, we concluded that the binding mode of STING A230 and BDW-OH resembles that of the STING-MSA-2 complex. We previously speculated that BDW-OH binds STING A230 in a binding mode similar to that of DMXAA, a mouse STING-specific tricyclic ligand, bound to an artificial STING mutant with I230. Interestingly, although DMXAA binds in the same pocket as BDW-OH (Figure 2C), its tricyclic ring does not lie in the same plane as that of BDW-OH (Figure 3E). The orientation of the carboxylic acid group of DMXAA in the bound conformation, as needed for the interaction with R238 of STING, is also markedly different from that of BDW-OH (Figure 3E). Residue 230 of human STING is located in the lid region of the STING dimer (Figure 2C) [23]. We confirmed from the crystal structure that A230 does not interact with BDW-OH. The lid region is a four-stranded, antiparallel β-sheet that covers the binding pocket of STING. It has been demonstrated that a single-point mutation in the lid region can control the pharmacological activation of STING. For example, the G230I mutation in human STING sensitized it to binding of the mouse STING-specific DMXAA [23]. Similarly, the I229G or I229A mutation in mouse STING desensitized it to DMXAA binding (residue 229 in mouse STING is homologous to residue 230 in humans) [23].
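Distances such as the H185-H185 separation quoted above can be checked directly against the deposited coordinates. The sketch below uses Biopython on PDB 8T5K; the chain IDs and residue numbering are assumptions to be verified against the actual file.

```python
# Measure the C-alpha distance between His185 of the two STING monomers.
# Assumes chains A and B hold the two monomers and residue 185 exists as-is.
from Bio.PDB import PDBList, MMCIFParser

pdbl = PDBList()
path = pdbl.retrieve_pdb_file("8T5K", pdir=".", file_format="mmCif")
structure = MMCIFParser(QUIET=True).get_structure("8T5K", path)

model = structure[0]
ca1 = model["A"][185]["CA"]   # C-alpha of His185, monomer 1 (assumed chain A)
ca2 = model["B"][185]["CA"]   # C-alpha of His185, monomer 2 (assumed chain B)
print(f"H185-H185 C-alpha distance: {ca1 - ca2:.1f} A")  # Atom subtraction gives distance
```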
In human STING, because residue 230 does not interact with BDW-OH directly, we further hypothesized that residue 230 gates the exit tunnel of the ligand as a mechanism of ligand recognition. As such, bulkier amino acid residues (i.e., alanine or isoleucine) can probably retain ligands better than a glycine residue [21].

Chemical modifications on the tricyclic structure

To further probe the interactions between BDW-OH and STING A230, we performed structure-activity-relationship (SAR) studies and used an interferon-sensitive response element (ISRE) reporter gene assay in THP-1 cells to evaluate the compounds' activity [21]. The ISRE response to IFN-I signaling is a key downstream event of STING activation and is routinely used to quantify the potency of STING agonists [6]. The genotype of STING in THP-1 cells is homozygous STING HAQ [14]. We previously demonstrated that STING activation in THP-1 cells by BDW568 depends solely on A230, and not on H71 or Q293 [21]. The crystal structure of the STING A230 CTD-BDW-OH complex showed that there is a small unoccupied cavity on the dimethyl thiophene side of BDW568 (Figure 3A). To probe the size of the cavity, we linked the 4,5-dimethyl groups on the thiophene ring (A) with one (1) or two carbons (2) (Table 1). The activity (expressed as half-maximal effective concentrations, or EC50s) of compound 1 was retained, while compound 2 was inactive (Table 1), implying that the cavity in the binding pocket can accommodate only one carbon. An isomer of BDW568 having an ethyl group at C5 and hydrogen at C4 (3) is also active. In contrast, removal of the dimethyl groups (4) significantly weakened the activity (Table 1). On the other hand, substitution of the methyl group at the C4 position of the thiophene with a bulkier group, such as a phenyl group (5), completely abrogated the compound's activity, confirming that the space close to the thiophene ring (A) is limited. Similarly, substituting either of the two methyl groups with a methoxy group (6, 7), or replacing the 5-methyl group on the thiophene with a bromo group (8), significantly reduced the activity (Table 1). In addition, we found the skeleton of the tricyclic structure cannot be altered without losing STING activation. For example, replacing the thiophene ring (A) with pyrrole (9), furan (10), or dimethoxybenzene (11) completely abrogates the compounds' activity (Table 1). This is consistent with our previous report, in which we demonstrated that replacing the pyrimidine ring (B) with pyridine (12) or 2-methylpyrimidine (13), or the triazole ring (C) with imidazole (14), all failed to retain activity [21]. Similarly, opening the triazole ring (C) into an N-carbamoylacetamide group (15) was also not tolerated (Table 1). BDW568 must be hydrolyzed by CES1 to yield the carboxylic acid metabolite that interacts with STING A230 [21]. We tested other esters, such as the ethyl (16) and isopropyl (17) esters, and observed slightly weakened activity (Table 2). Further increasing the bulkiness of the ester to a tert-butyl ester (18), or changing the linear ester into a lactone (19), completely abolished the activity, probably because these esters can no longer be recognized by CES1. We also synthesized several cell-permeable carboxylic acid isosteres, such as tetrazole (20) and anisole (21) [24], and observed that none of these isosteres retained activity. We concluded that the carboxylic acid group in BDW568 is required for STING binding.
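The EC50 values in Tables 1 and 2 come from dose-response curves of the ISRE reporter signal. A minimal sketch of how such an EC50 could be estimated with a four-parameter logistic (Hill) fit is shown below; the concentration-response data are made up for illustration, not measurements from this study.

```python
# Fit a four-parameter logistic (Hill) model to hypothetical reporter data.
import numpy as np
from scipy.optimize import curve_fit

def hill(c, bottom, top, ec50, n):
    """Four-parameter logistic: response as a function of concentration c."""
    return bottom + (top - bottom) / (1.0 + (ec50 / c) ** n)

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])     # uM, hypothetical
resp = np.array([0.02, 0.05, 0.15, 0.45, 0.80, 0.95, 1.00])  # normalized ISRE signal

popt, _ = curve_fit(hill, conc, resp, p0=[0.0, 1.0, 3.0, 1.0], maxfev=10000)
print(f"EC50 ~ {popt[2]:.2f} uM, Hill slope ~ {popt[3]:.2f}")
```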
The side chain also needs to maintain a proper length to engage the R238 residue of STING, since extension (22) or shortening (23) of the chain completely abolished the activity (Table 2). Interestingly, the thioether group in the side chain is also crucial for STING activation. We replaced the S with O (24) or CH2 (25), or oxidized the S to a sulfoxide (26), and found that all of these alterations inactivated the compounds (Table 2).

Compound synthesis

Compounds with modifications on the tricyclic rings (1-11 and 15) were synthesized as outlined in Scheme 1, which is similar to the synthetic route for BDW568 [21]. Starting with a substituted pyrimidinone (S1), the chloro-substituted intermediate S2 was obtained in high yield by refluxing in POCl3. A subsequent SN2 reaction in hydrazine hydrate yielded the key intermediate S3, which also served as a starting material for compounds with O/CH2 side chain modifications (see Scheme 2). S3 then underwent a ring-closing reaction with CS2/KOH to form the triazole ring and afford S4. The reaction between S4 and methyl bromoacetate in DMF/DIPEA occurred rapidly and usually completed within several minutes. Compound 15 was obtained by a single urea-formation reaction using triphosgene as the coupling reagent. Compounds with side chain modifications can be prepared as shown in Scheme 2. Intermediate S4 and substituted alkyl bromides were used to synthesize compounds 16, 19, 21, and 22 with an S-linker. Intermediate S3 was used under different conditions to form the desired ring-closed products. A single reaction of S3 in dimethyl malonate under microwave irradiation at high temperature directly yielded target compound 23. 1,1'-Carbonyldiimidazole (CDI) was used in the ring-closing reaction of intermediate S6, and a subsequent SN2 reaction in NaH/DMF gave the desired product 24. Reaction between succinic anhydride and S3 produced the ring-closed intermediate S7 in carboxylic acid form, which was then refluxed in methanol with concentrated H2SO4 as catalyst to afford compound 25. The sulfur-oxidized compound 26 was obtained from BDW568 using mCPBA as the oxidant.

Biological activities in human primary cells and CAR-macrophages

Next, we set out to explore the potential application of BDW568 in immunotherapy. First, we confirmed the activity of BDW568 in THP-1 cells by real-time quantitative PCR (RT-qPCR) and found that THP-1 cells have elevated expression of interferon-stimulated genes (ISGs), including interferon-induced GTP-binding protein (MX1) and 2',5'-oligoadenylate synthetase 1 (OAS1), after BDW568 stimulation (Figure 4A). We selected the MX1 gene as a marker to quantify IFN-I activation in the following experiments. Then, to examine the activity of BDW568 in primary human cells, we collected PBMCs from 15 healthy donors, stimulated them with BDW568 or 2',3'-cGAMP, harvested the cells, and measured MX1 expression by RT-qPCR. Only the PBMCs from donor 9 showed significant elevation of MX1 after BDW568 stimulation, while the PBMCs from all donors responded to 2',3'-cGAMP (Figure 4B, donor 9 in red). We genotyped donor 9 and confirmed that it carries homozygous STING A230; the remaining 14 donors do not carry a STING A230 allele. Encouraged by these promising results in human primary cells, we envision that BDW568 can be a useful probe to selectively activate the STING pathway in genetically engineered immunogenic cells that carry the STING A230 allele.
To examine whether we can genetically engineer cells with STING A230 to respond to BDW568, we generated lentivirus that allows overexpression of full-length STING A230 together with a marker protein, a truncated, signaling-inactive EGF receptor (EGFRt) (Figure 5). Monocyte-derived macrophages were generated from 3 independent healthy donors with STING G230 alleles and transduced with EGFRt-STING A230 lentivirus. Three days after transduction, cells were stimulated with vehicle or BDW568. As expected, no elevation of the MX1 gene was observed in mock-transduced or untransduced cells after BDW568 stimulation. In contrast, macrophages expressing EGFRt-STING A230 showed a 10-fold increase in MX1 transcription (Figure 5), implying robust selectivity for pharmacological activation of STING A230-engineered macrophages without affecting STING G230 cells. As such, a STING A230-specific agonist could potentially be used in STING G230 patients to selectively activate engineered cells in cellular therapy, such as chimeric antigen receptor (CAR)-macrophages, which recently showed promising activity against solid tumors [25].

Conclusion

In summary, we obtained the crystal structure of STING A230 with a newly reported STING agonist, BDW568. In the STING A230-BDW-OH (active metabolite of BDW568) complex, the STING dimer adopts a "closed" conformation that is almost identical to that of the STING A230-2',3'-cGAMP (endogenous ligand) complex. By comparing the structure of the STING A230-BDW-OH complex with other reported structures, we concluded that the binding mode of BDW-OH is similar to that of a known synthetic STING ligand, MSA-2. SAR studies demonstrated that the skeleton of all three heterocycles is crucial for BDW568's activity and cannot be extensively modified. We also found that the S-acetate side chain is essential for activity. We confirmed that BDW568's selectivity for cells with a STING A230 allele holds in primary PBMCs. Importantly, BDW568 can robustly and selectively activate macrophages transduced with STING A230. Therefore, this compound may be used to selectively activate STING A230-engineered macrophages for macrophage-based immunotherapies, including CAR-macrophage therapies.

Crystallography

Protein expression and purification

The gene encoding human STING (155-341) 230A/232R was cloned into a home-modified pET28a vector with an N-terminal His6-SUMO tag. The protein was expressed in Escherichia coli BL21 (DE3) cells induced with 0.4 mM isopropyl β-D-1-thiogalactopyranoside (IPTG) overnight at 16 °C. The protein was purified on Ni-NTA agarose. The N-terminal tag was cleaved by SUMO protease and removed on a Ni-NTA column, and the protein was further purified on a HiLoad 16/600 Superdex 200 column in a buffer containing 20 mM Tris-HCl pH 7.5 and 150 mM NaCl, then concentrated to 13 mg/ml for crystallization.

Crystallization and structure determination

Purified STING (155-341) 230A/232R was mixed with BDW-OH at a molar ratio of 1:5. Crystallization screening was performed by the hanging-drop vapor-diffusion method at 4 °C using the Index and PEGRx kits from Hampton Research. Crystals grew in a reservoir solution containing 0.2 M ammonium sulfate, 0.1 M HEPES pH 7.5, and 25% polyethylene glycol 3350. After two days, crystals were harvested, cryoprotected in reservoir solution containing 30% sucrose, and flash-frozen in liquid nitrogen. Crystallization of STING 230A/232R in complex with 2',3'-cGAMP was performed under the same conditions.
Diffraction data were collected in-house using a Rigaku MicroMax-007 HF generator with a RAXIS IV 2+ detector. The data were processed with iMosflm [26] in the CCP4 package. The structures were determined by molecular replacement in Phenix [28] using the STING CTD structure (PDB ID: 4KSY) [27] as a search model. The structures were rebuilt with Coot [29] and refined with Phenix. Coordinates for the two crystal structures have been deposited in the RCSB Protein Data Bank (PDB): the STING-BDW-OH complex under accession number 8T5K and the STING-2',3'-cGAMP complex under accession number 8T5L.

Chemistry

Reagents and solvents were purchased from commercial sources (Fisher, Sigma-Aldrich, Combi-Blocks, and Suzhou Medinoah Ltd) and used as received. Reactions were tracked by TLC (silica gel 60 F254, Merck) and on a Waters ACQUITY UPLC-MS system (ACQUITY UPLC H-Class Plus in tandem with a QDa mass detector). Intermediates and products were purified on a Teledyne ISCO CombiFlash system using prepacked SiO2 cartridges. NMR spectra were acquired on a Bruker AV400 or AV500 instrument (500 MHz for 1H NMR, 126 MHz for 13C NMR). 13C shifts were obtained with 1H decoupling. MestReNova 14.0.1 (Mestrelab Research) was used for NMR data processing. MS-ESI spectra were recorded on the Waters QDa mass detector. UPLC-MS was performed on a Waters BEH C18 column (2.1 mm × 50 mm, 1.7 μm) with peak detection at UV 254 nm (mobile phase: acetonitrile and 0.1% formic acid in water; gradient: 0-5 min, 2-98% acetonitrile). Purities of final compounds were assessed by UPLC-MS; all compounds are >95% pure by UPLC analysis.

General Procedures

1) Compound S1 (0.5 g, 1 equiv.) in POCl3 (4 mL) was refluxed at 120 °C for 1 h or until the reaction was complete. POCl3 was evaporated in vacuo and the residue was dissolved in ethyl acetate. After washing with saturated NaHCO3 solution, the organic phase was collected, dried over Na2SO4, and concentrated in vacuo to furnish compound S2. The crude product was purified by silica-gel column chromatography using 0-10% ethyl acetate in hexanes.

2) Compound S2 (0.4 g, 1 equiv.) in 65% hydrazine hydrate (4 mL) was heated at 100 °C for 30 min. The solid formed was filtered and washed with water to remove excess hydrazine. Residual water was removed by co-evaporation with toluene to afford compound S3 as a yellow solid, which was used in the next step without further purification.

3) A solution of KOH (1.3 equiv.) and CS2 (4 equiv.) in ethanol was added dropwise to a solution of compound S3 (0.1 g, 1 equiv.) in ethanol. The reaction mixture was then heated to 90 °C for 1 h. The solvent was removed in vacuo and the residue was dissolved in water and acidified with 1 N HCl. The precipitate was filtered, washed with water, and dried. The crude solid was recrystallized from hot ethanol to give compound S4.

4) To compound S4 (50 mg, 1 equiv.) in DMF was added the respective alkyl bromide (1.5 equiv.) followed by N,N-diisopropylethylamine (DIPEA, 3 equiv.), and the reaction mixture was stirred at room temperature for 30 min. After completion, water was added and the mixture was extracted with ethyl acetate. The organic layer was washed with water followed by brine and dried over Na2SO4. Ethyl acetate was removed under vacuum and the crude product was purified by silica-gel column chromatography with 0-100% ethyl acetate in hexanes to afford the final product.

Compounds 1 to 11 were synthesized following General Procedures 1 to 4 using the respective starting material S1.
Lentivirus construction, production, and purification

The lentiviral vector FG12-EF1p-EGFRt-p2A-STING 230A was constructed by cloning the STING 230A variant into an FG12 expression vector. The vector also contains the EF1α promoter sequence, which drives expression of the α subunit of eukaryotic elongation factor 1, along with a truncated, signaling-inactive epidermal growth factor receptor (EGFRt), which can be used as a flow-cytometric marker, and a self-cleaving 2A peptide (p2A). The lentivirus was produced in 293FT cells using the calcium phosphate transfection protocol. Briefly, a DNA mix was prepared by combining the above-mentioned STING 230A plasmid with the third-generation lentiviral packaging plasmid pMDLg along with the Rev and VSV-G envelope-expressing plasmids (Azenta Life Sciences, USA). The DNA was mixed with 2 M CaCl2 in cell-grade water, followed by addition of 2× HEPES-buffered saline (HBS). The mixture was incubated at room temperature for 25 min and added to 293FT cells cultured in IMDM complete media (IMDM, 10% FBS, 1% Pen-Strep, 1% GlutaMAX) with 10 mM chloroquine. After 6-8 h of incubation, the transfection media was aspirated and fresh 2% Opti-MEM complete media (Opti-MEM with 2% FBS, 1% Pen-Strep, 1% GlutaMAX) was added, followed by 2 days of incubation. Supernatant was collected from transfected 293FT cells 48 h after transfection, filtered through a 0.45 μm sterile filter, and concentrated by ultracentrifugation using a Beckman SW32 rotor at 30,000 rpm at 4 °C. Medium was aspirated and the pellet was resuspended in PBS and stored at -80 °C.

Primary macrophage purification and differentiation

Healthy human PBMCs were obtained from the Department of Virology, UCLA AIDS Institute. Monocytes were magnetically sorted using human CD14+ microbeads (Miltenyi Biotec, USA) according to the manufacturer's protocol. Briefly, PBMCs were incubated with CD14+ microbeads and passed through LS columns placed in a magnetic field, with multiple rounds of washing using MACS buffer. CD14+ cells were differentiated into macrophages in RPMI complete media (RPMI 1640, 10% FBS, 1% Pen-Strep) with macrophage colony-stimulating factor (M-CSF) at a concentration of 10 ng/ml for 5 days. Macrophages were transduced with FG12-EF1p-EGFRt-p2A-STING 230A lentivirus, or mock transduced, at an MOI of 10 overnight and washed with fresh media afterwards. Cells were cultured for 2 more days in RPMI complete media with M-CSF. Three days after transduction, untransduced and transduced macrophages were stimulated with DMSO (control) or BDW568 at a concentration of 50 µM for 6 h. Cells were harvested using Accutase for RNA extraction.

RT-qPCR

THP-1 and primary macrophage RNA was extracted using the RNeasy kit (Qiagen), followed by cDNA synthesis using the High-Capacity cDNA Reverse Transcription Kit (Thermo Fisher Scientific). RT-qPCR was performed using TaqMan Gene Expression Assays (Thermo Fisher Scientific) targeting human HPRT1 (Hs01003267_m1), MX1 (Hs00895608_m1), and OAS1 (Hs00973635_m1). To determine relative mRNA expression, MX1 and OAS1 gene expression was normalized to the HPRT1 expression level.
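Assuming the standard 2^-ΔΔCt (Livak) method underlies this normalization, which the text does not state explicitly, the fold-change calculation looks as follows; the Ct values below are hypothetical.

```python
# Relative expression by the 2^-ddCt method (Livak & Schmittgen).
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    d_ct_treated = ct_target_treated - ct_ref_treated   # dCt, stimulated sample
    d_ct_control = ct_target_control - ct_ref_control   # dCt, DMSO control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values for MX1 (target) and HPRT1 (reference):
print(fold_change(22.7, 24.0, 26.1, 24.1))  # ~10-fold induction of MX1
```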
Genotyping for STING residue 230

The gene fragment containing residue 230 of STING was amplified by PCR using the Phusion High-Fidelity PCR Kit (Thermo Fisher Scientific, USA). The PCR reaction was set up using template genomic DNA, Phusion DNA polymerase, the STING230-GT forward primer 5'-GGCCCGGATTCGAACTTACA, and the STING230-GT reverse primer 5'-CCTGCACCCACATGAATCCT. Following PCR, the generated DNA fragment was purified from agarose gel and subcloned into a pCR Blunt II-TOPO vector using the Zero Blunt TOPO PCR Cloning Kit (Thermo Fisher Scientific, USA). Afterwards, colonies were picked, plasmid DNA was isolated using the QIAprep Spin Miniprep Kit (Qiagen), and Sanger sequencing was performed (Laragen, Inc., CA).

Supporting Information

Data collection and refinement statistics for STING A230-ligand complexes; 1H and 13C NMR spectra; purity analysis of active compounds.
Orbital dynamics of circumbinary planets

We investigate the dynamics of a nonzero-mass, circular-orbit planet around an eccentric-orbit binary for various values of the binary eccentricity, binary mass fraction, planet mass, and planet semi-major axis by means of numerical simulations. Previous studies investigated the secular dynamics mainly by approximate analytic methods. In the stationary inclination state, the planet and binary precess together with no change in relative tilt. For both prograde and retrograde planetary orbits, we explore the conditions for planetary orbital libration versus circulation and the conditions for stationary inclination. As was predicted by analytic models, for sufficiently high initial inclination, a prograde planet's orbit librates about the stationary tilted state. For a fixed binary eccentricity, the stationary angle is a monotonically decreasing function of the ratio of the planet-to-binary angular momentum $j$. The larger $j$, the stronger the evolutionary changes in the binary eccentricity and inclination. We also calculate the critical tilt angle that separates the circulating from the librating orbits for both prograde and retrograde planet orbits. The properties of the librating orbits and stationary angles are quite different for prograde versus retrograde orbits. The results of the numerical simulations are in very good quantitative agreement with the analytic models. Our results have implications for circumbinary planet formation and evolution.

INTRODUCTION

Binary stars form in turbulent molecular clouds (McKee & Ostriker 2007) where the accretion process during star formation may be chaotic (Bate et al. 2003; Bate 2018). Therefore circumbinary discs likely form misaligned with respect to the binary orbital plane. Observationally, misaligned circumbinary discs appear common (e.g., Brinch et al. 2016; Chiang & Murray-Clay 2004; Winn et al. 2004; Kennedy et al. 2012, 2019). Giant planets form while the gas disc is still present (e.g., Lagrange et al. 2010) and thus it is likely that planets may form on misaligned orbits. A giant planet in a misaligned disc in a binary system may not remain coplanar with the disc (Picogna & Marzari 2015; Martin et al. 2016; Franchini et al. 2019a). In this paper we concentrate on the properties of circumbinary planets around eccentric orbit binaries. The Kepler Mission has so far detected 10 circumbinary planets. Among these, at least two have been detected around eccentric binaries. Kepler-34b, with a mass of 0.22 M_J, orbits an eclipsing binary star system (Kepler-34) that has an orbital eccentricity of 0.52 (Welsh et al. 2012; Kley & Haghighipour 2015). The moderately eccentric (e = 0.26) binary KIC 5095269 hosts the circumbinary planet KIC 5095269b (Getley et al. 2017). All the circumbinary planets detected thus far are nearly coplanar with the binary orbital plane. However, this is likely a selection effect due to the small orbital periods of the Kepler binaries (Czekala et al. 2019). Longer orbital period binaries are then expected to host planets with a wide range of inclinations. Planets with large misalignments are much more difficult to detect than those that orbit in the binary orbital plane because their transits are much rarer. However, other detection methods may be possible.
For example, polar planets (those on orbits close to perpendicular to the binary orbital plane) may be distinguished from coplanar planets through eclipse timing variations (Zhang & Fabrycky 2019). We consider circular circumbinary planet orbits throughout this paper. For a misaligned test (massless) particle on a circumbinary orbit about a circular-orbit binary, nodal precession occurs around the binary angular momentum vector and may be either prograde or retrograde depending upon the initial particle inclination. Nodal precession about a circular-orbit binary is always circulating. That is, the longitude of the ascending node fully circulates over 360°, since the nodal precession rate does not change sign. However, for a binary with nonzero eccentricity, a circumbinary test particle orbit with a sufficiently large inclination may undergo libration. That is, the longitude of the ascending node covers a limited range of angles, less than 360°, and the nodal precession rate changes sign. In the test particle case, the angular momentum vector of the test particle librates about the binary eccentricity vector (or binary semi-major axis) and undergoes tilt oscillations (Verrier & Evans 2009; Farago & Laskar 2010; Doolin & Blundell 2011; Naoz et al. 2017; de Elía et al. 2019). The minimum inclination required for libration decreases with increasing binary eccentricity. This means that a test particle orbit with even a small inclination can librate around a highly eccentric binary. Misaligned low mass/angular momentum discs in and around binaries can undergo precession similar to that of a test particle (Larwood et al. 1996). Consequently, following the behaviour of test particles, a sufficiently misaligned low mass disc around an eccentric binary can precess around the eccentricity vector, rather than the binary angular momentum vector, and undergo tilt oscillations (Aly et al. 2015; Martin & Lubow 2017, 2018; Zanazzi & Lai 2018; Franchini et al. 2019b). Dissipation within the disc leads to eventual alignment, either coplanar or polar with respect to the binary orbital plane. In the polar-aligned state, the low angular momentum circumbinary disc lies perpendicular to the binary orbital plane and its angular momentum vector is along the binary eccentricity vector. The polar disc, as well as the binary, does not undergo nodal precession. Around an eccentric orbit binary, a disc that is evolving towards coplanar alignment undergoes tilt oscillations as it does so (Smallwood et al. 2019). The mass of the disc has a significant effect on the polar alignment. A disc with mass is expected to evolve to a generalised polar state in which the inclination of the disc relative to the binary is stationary in a frame that precesses with the binary. A simplified model for a disc with mass involves the orbit of a particle with mass. The particle orbit then corresponds to a ring or narrow disc. In this stationary state, the disc inclination is less than 90° for a disc on a prograde orbit (Zanazzi & Lai 2018). In this work, we extend the three-body numerical simulations of a circular orbit, test (massless) circumbinary particle performed by Doolin & Blundell (2011) by allowing the particle (planet) to have nonzero mass. There has been some exploration of the stability of low mass polar particles (Cuello & Giuppone 2019). In this case, the binary feels the gravitational force of the planet, which causes the binary orbit to be modified.
The eccentricity vector of the binary precesses as a result of this interaction, and the binary orbit tilt and the magnitude of its eccentricity oscillate. We consider the third body (planet) to lie on a circular circumbinary orbit. The dynamics of a circular orbit planet are similar to those of a narrow ring, which in turn may be indicative of a more extended disc with mass. Therefore, the results also have implications for the evolution of a circumbinary disc with nonzero mass. Our simulations are scale free and can therefore be applied on all scales. In Section 2 we describe the initial conditions for our three-body simulations and explore the properties of circumbinary orbits with a planet, represented as a particle with nonzero mass. In Section 3 we determine the critical angles separating librating from circulating orbits and compare them to analytic solutions based on the earlier work of Farago & Laskar (2010). We also determine stationary inclinations, the inclination at which the binary and the third body orbit precess at the same rate with no change in relative tilt. In Section 4 we present our discussion and conclusions.

Table 1 caption: Parameters of the simulations. The first column contains the name of the model, and the second and third columns indicate the binary mass fraction and initial eccentricity. The fourth and fifth columns give the mass of the planet in units of m_b and the distance of the planet from the centre of mass in units of a_b, respectively.

THREE-BODY SIMULATIONS

In this section we describe and present the results of our three-body simulations. We explore the effects of the binary eccentricity and the binary mass ratio on the different orbit types, circulating and librating, for planets with varying angular momentum.

Three-body simulation set-up

To study the evolution of a third body orbiting around an eccentric binary star system, we use the N-body simulation package REBOUND. We use the WHFast integrator, a second-order symplectic Wisdom-Holman integrator with 11th-order symplectic correctors (Rein & Tamayo 2015). We solve the gravitational equations for the three bodies in the frame of the centre of mass of the three-body system. The central binary has components of mass m_1 and m_2 with total mass m_b = m_1 + m_2, and the mass fraction of the binary is f_b = m_2/m_b. The orbit has semi-major axis a_b, the magnitude of the eccentricity of the binary is e_b, and the orbital period of the binary is T_b. The Keplerian orbit of the planet with mass m_p around the centre of mass of the binary is defined initially by six orbital elements: its semi-major axis a, inclination i, eccentricity e, longitude of the ascending node φ, argument of periapsis ω, and true anomaly ν. Since the planet orbit is initially circular, we set e = 0 and ω = 0 as initial conditions. We take ν = 0 and φ = 90° initially in our suites of simulations. We note that the planet orbit remains nearly circular in our simulations, as expected analytically, since the particle eccentricity is a constant of motion in the secular quadrupole approximation for the binary (Farago & Laskar 2010). We vary the planet mass, its initial inclination, and the semi-major axis of its orbit. Note that the binary orbit is not fixed, since the binary feels the gravity of the massive third body.
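A minimal sketch of this setup in REBOUND is shown below, in units with G = m_b = a_b = 1 so that T_b = 2π. The particular initial inclination, timestep, and integration time are illustrative choices, not values tied to a specific model in Table 1.

```python
# Three-body circumbinary setup with REBOUND's WHFast integrator.
import numpy as np
import rebound

f_b, e_b = 0.5, 0.5                  # binary mass fraction and eccentricity
sim = rebound.Simulation()
sim.integrator = "whfast"            # symplectic Wisdom-Holman integrator
sim.add(m=1.0 - f_b)                                 # m_1
sim.add(m=f_b, a=1.0, e=e_b)                         # m_2, so a_b = 1
# Planet on an initially circular orbit about the binary centre of mass
# (Jacobi coordinates by default); e = omega = nu = 0, phi = 90 deg as in the text.
sim.add(m=0.001, a=5.0, e=0.0,
        inc=np.radians(60.0), Omega=np.radians(90.0))
sim.move_to_com()
sim.dt = 2.0 * np.pi / 50.0          # ~2% of the binary orbital period
sim.integrate(1.0e4 * 2.0 * np.pi)   # evolve for 10^4 binary periods
```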
In order to plot the results, we work in a frame defined by the instantaneous values of the eccentricity and angular momentum vectors of the binary, e_b and l_b respectively. The frame defined by the binary has the three axes ê_b, l̂_b × ê_b, and l̂_b. Denoting the planet angular momentum as l_p, we determine the inclination of its orbital plane relative to the binary through

i = \cos^{-1}(\hat{l}_b \cdot \hat{l}_p),

where \hat{l}_b is a unit vector in the direction of the angular momentum of the binary and \hat{l}_p is a unit vector in the direction of the angular momentum of the particle. The inclination of the binary relative to the total angular momentum l is

i_b = \cos^{-1}(\hat{l} \cdot \hat{l}_b),

where \hat{l} is a unit vector in the direction of the total angular momentum (l = l_b + l_p). Similarly, we determine the phase angle of the particle in the same frame of reference through

\phi = \tan^{-1}\!\left[\frac{\hat{l}_p \cdot (\hat{l}_b \times \hat{e}_b)}{\hat{l}_p \cdot \hat{e}_b}\right] + 90°.

In the next three subsections, we vary the masses and orbital properties of the binary stars and planets and describe the binary and planet orbital evolution. We first ran some test particle simulations with the planet mass set to zero in order to verify that our results are in agreement with the results presented in Figure 2 of Doolin & Blundell (2011). We then performed a set of simulations for various values of the planet mass, from m_p = 0.001 m_b to m_p = 0.01 m_b, and of its orbital radius, from r = 5 a_b to r = 20 a_b. We also explored the effect of different initial binary eccentricities and mass ratios, and we consider some simulations with higher angular momentum of the third body. In Table 1, we list the parameters for each model.

Low mass planet at small orbital radius

Fig. 1 shows the results of our three-body simulations for a low mass planet with m_p = 0.001 m_b on an orbit with semi-major axis r = 5 a_b, for varying e_b and f_b. Each line in each plot corresponds to a planet orbit with a different initial inclination to the binary. The first and third columns show the i cos φ-i sin φ plane, while the second and fourth columns show the corresponding e_b cos φ-e_b sin φ plane. The black stars represent the initial position of the planet. The green lines represent prograde (relative to the binary) circulating orbits in which the planet displays clockwise precession of the longitude of the ascending node φ. The blue lines correspond to retrograde circulating orbits in which the planet displays counterclockwise precession in φ. The red and cyan lines identify librating orbits. The inclination at the centre of these orbits is the stationary inclination, i_s. In this low mass planet case, the centres are at i = i_s ≈ 90° and φ = ±90°. The red lines have initial inclination i < i_s, while the cyan lines have initially i > i_s. These librating orbits display counterclockwise precession in φ. The i cos φ-i sin φ phase plots are very similar to those in the test particle case considered by Doolin & Blundell (2011). The eccentricity of the binary does not vary during the precession of a test particle because the particle has no angular momentum to exchange with the binary system. The e_b cos φ-e_b sin φ panels for these models show curves that are sometimes circulating, while slightly noncircular, and sometimes librating, due to the eccentricity oscillations associated with the relatively small but nonzero particle (planet) mass. If instead the object orbiting the binary is a planet (a nonzero mass object), e_b initially increases or decreases depending on whether the initial inclination of the planet's orbit is greater or smaller than the stationary inclination. As we shall see, the larger the planet angular momentum, the larger the oscillations of the binary orbit.
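To produce phase-plane plots like those in Fig. 1, the angles defined above can be evaluated from the instantaneous simulation state. The sketch below computes i and φ from the binary's angular momentum and eccentricity (Laplace-Runge-Lenz) vectors, assuming particles 0 and 1 are the stars and particle 2 is the planet, as in the setup above.

```python
import numpy as np

def frame_angles(sim):
    """Planet tilt i and nodal phase phi in the frame of the instantaneous
    binary eccentricity and angular momentum vectors (degrees)."""
    p = sim.particles
    pos = lambda q: np.array([q.x, q.y, q.z])
    vel = lambda q: np.array([q.vx, q.vy, q.vz])
    unit = lambda v: v / np.linalg.norm(v)

    # Relative binary orbit: angular momentum direction and eccentricity vector
    m_b = p[0].m + p[1].m
    r, v = pos(p[1]) - pos(p[0]), vel(p[1]) - vel(p[0])
    l_b = unit(np.cross(r, v))
    e_b = np.cross(v, np.cross(r, v)) / (sim.G * m_b) - unit(r)

    # Planet orbit about the binary centre of mass
    com_r = (p[0].m * pos(p[0]) + p[1].m * pos(p[1])) / m_b
    com_v = (p[0].m * vel(p[0]) + p[1].m * vel(p[1])) / m_b
    l_p = unit(np.cross(pos(p[2]) - com_r, vel(p[2]) - com_v))

    i = np.degrees(np.arccos(np.clip(np.dot(l_b, l_p), -1.0, 1.0)))
    e_hat = unit(e_b)
    phi = np.degrees(np.arctan2(np.dot(l_p, np.cross(l_b, e_hat)),
                                np.dot(l_p, e_hat))) + 90.0
    return i, phi
```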
In the next three subsections, we vary the masses and orbital properties of the binary stars and planets and describe the binary and planet orbital evolution. We first ran some test particle simulations with the planet mass set to zero in order to verify that our results are in agreement with the results presented in Figure 2 of Doolin & Blundell (2011). We then performed a set of simulations for various values of the mass of the planet, from m_p = 0.001 m_b to m_p = 0.01 m_b, and of its orbital radius, from r = 5 a_b to r = 20 a_b. We also explored the effect of different initial binary eccentricities and mass ratios, and we consider some simulations with higher angular momentum of the third body. In Table 1, we list the parameters for each model.

Low mass planet at small orbital radius

Fig. 1 shows the results of our three-body simulations for a low mass planet with m_p = 0.001 m_b on an orbit with semi-major axis r = 5 a_b for varying e_b and f_b. Each line in each plot corresponds to a planet orbit with a different initial inclination to the binary. The first and third columns show the i cos φ–i sin φ plane while the second and fourth columns show the corresponding e_b cos φ–e_b sin φ plane. The black stars represent the initial position of the planet.

The green lines represent prograde (relative to the binary) circulating orbits, where the planet displays clockwise precession in the longitude of the ascending node φ. The blue lines correspond to retrograde circulating orbits, where the planet displays counterclockwise precession in φ. The red and cyan lines identify librating orbits. The inclination at the centre of these orbits is the stationary inclination, i_s. In this low mass planet case, the centres are at i = i_s ≈ 90° and φ = ±90°. The red lines have initial inclination i < i_s while the cyan lines have initially i > i_s. These librating orbits display counterclockwise precession in φ.

The i cos φ–i sin φ phase plots are very similar to those in the test particle case considered by Doolin & Blundell (2011). The eccentricity of the binary does not vary during the precession of a test particle because the particle has no angular momentum to exchange with the binary system. The e_b cos φ–e_b sin φ panels for these models show that the curves are sometimes circulating while slightly noncircular and sometimes librating, due to the eccentricity oscillations associated with the relatively small but nonzero particle (planet) mass. If instead the object orbiting the binary is a planet (a nonzero mass object), e_b initially increases or decreases depending on whether the initial angle of the planet's orbit is greater or smaller than the stationary inclination. As we shall see, the larger the planet angular momentum, the larger the oscillations of the binary orbit.

In order to investigate the effect of the initial binary eccentricity, we ran simulations with initial e_b = 0.2, 0.5 and 0.8. Comparing the three rows in Fig. 1, we see that higher initial eccentricities correspond to larger libration islands. The comparison between Models A and B in Fig. 1 shows the effect of decreasing the binary mass fraction from f_b = 0.5 to f_b = 0.1. The size and shape of the orbits in the i cos φ–i sin φ plane do not change significantly with mass fraction, but the variations of e_b in the libration region are larger for smaller binary mass fraction. We find that for lower binary mass fraction, the orbits near the libration centre (with inclination i = i_s and phase angle φ = ±90°) become divergent for moderate to high binary eccentricities (e_b = 0.5, 0.8) and therefore might be unstable. This issue is beyond the scope of this work; we will explore the stability of orbits close to misaligned binaries in a future publication.

In the top row of Fig. 2 we consider in more detail the evolution of the orbits in time for models with initial binary eccentricity of 0.5 and binary mass fraction f_b = 0.5 (top left, Model A2) and f_b = 0.1 (top right, Model B2). We show e_b, i_b, and i as a function of time for different values of the initial planet inclination. The colour of the lines corresponds to the orbit type in Fig. 1. Comparing the left and right plots, we see that there is less evolution of the binary for the equal mass binary than for the lower binary mass fraction. This is because with lower binary mass fraction, the planet has more angular momentum compared to the binary and thus has a stronger effect on it. The timescale for the oscillations is longer for smaller binary mass fraction.

High mass planet at small orbital radius

Comparing Fig. 3 with Fig. 1, we see that the region of prograde circulating orbits is larger for a higher mass planet. The differences between the red and cyan lines become more prominent. The inclination at the centre of the libration, i_s, decreases. For example, for Model C2 with e_b = 0.5 and f_b = 0.5 it is 80°. Because of this decrease, there is a narrower range of inclinations for which red orbits exist, those with initial inclination i < i_s, and a wider range of inclinations for which cyan orbits exist, with i > i_s initially. We discuss the value of the stationary inclination further in Section 3.

The higher planet mass causes the binary eccentricity to vary significantly not only in the librating solutions, but also in the circulating orbits of the planet. The binary eccentricity in the libration islands for Model C2 in Fig. 3 starts at e_b = 0.5 and reaches values as large as e_b ≈ 0.7. By comparing Models C and D in Fig. 3, we can see the effect of changing the binary mass fraction. The cyan libration islands are significantly larger and there are fewer red orbits in the simulation with f_b = 0.1 because i_s is smaller than in the equal mass binary case. Therefore, decreasing the binary mass fraction results in a lower stationary inclination. We find particle orbital instability close to i = i_s and φ = 90° for Model D1; there are no stable orbits in the region close to the stationary inclination.

Figure 1. The i cos φ–i sin φ plane (first and third columns) and e_b cos φ–e_b sin φ plane (second and fourth columns) for orbits with different values of initial inclination and longitude of the ascending node. The planet has mass m_p = 0.001 m_b and orbital radius 5 a_b.
The binary eccentricity is e_b = 0.2, 0.5, 0.8 in the upper, middle and lower panels respectively. The mass fraction of the binary is f_b = 0.5 in the first and second columns and f_b = 0.1 in the third and fourth columns. The green and blue lines represent prograde and retrograde circulating orbits, respectively. The red lines represent librating orbits with initial inclination i < i_s while the cyan lines represent librating orbits with initial inclination i > i_s. The black stars mark the initial positions of the planet, with i ranging from 10° to 180°. We removed unstable orbits.

The variations in e_b are larger for the smaller binary mass fraction, as seen in the e_b cos φ–e_b sin φ plots. In particular, e_b in Model D3 becomes very close to 1 during libration. The middle panels of Fig. 2 show the binary eccentricity, binary inclination and planet inclination evolution with time for Models C2 and D2. We see again that decreasing the binary mass fraction leads to larger amplitude oscillations in the binary eccentricity for librating orbits starting above the stationary angle (cyan line). Model D2 has larger variations of i_b and i during the libration of the planet because the system has a larger planet-to-binary angular momentum ratio and the secondary star (m_2 = 0.1 m_b) interacts more strongly with the planet.

High mass planet at large orbital radius

We now increase the planet semi-major axis to r = 20 a_b and keep its mass at m_p = 0.01 m_b. Comparing Models E in Fig. 4 with Models C in Fig. 3, we see that the libration islands become even larger when the planet orbits the binary with a larger semi-major axis. The eccentricity of the binary e_b can be excited to larger values during the planet's libration because in this configuration the planet has more angular momentum to exchange with the binary system. An increase in the initial binary eccentricity e_b has the same effect as in the previously described simulations, i.e., it increases the range of stable librating orbits.

We find that there are no retrograde circulating solutions for the low mass fraction case f_b = 0.1 (see the third column of Fig. 4: there are no blue lines). The eccentricity of the binary can be excited to very large values (close to 1) during the planet's precession in Models F1, F2, and F3. However, there is a new type of orbit that is different from those described in previous sections. These librating orbits, represented by magenta lines, only appear in the simulations with smaller binary mass fraction f_b = 0.1 that start with higher initial eccentricities, e_b = 0.5, 0.8. The magenta orbits have higher initial inclination than the librating cyan lines and display counterclockwise precession. In Model F2, these crescent shaped orbits appear in the i cos φ–i sin φ plane for i > 150° while in Model F3 they appear also for lower inclinations, i > 140°. These librating orbits are not nested within each other, as they are in the prograde case.

Appendix A of Martin & Lubow (2019) shows that noncoplanar stationary states for retrograde orbits with i_s < 180° do not exist below a critical value of the angular momentum ratio, j_cr, given by equation (A3) in that paper (Equation (4) here). The lack of stationary states means that the crescent shaped librating orbits are always of non-zero extent in the i cos φ–i sin φ phase plane. This is understood by the lack of stationary retrograde inclinations for j < j_cr. For example, for Model F2, j = 0.58 < j_cr = 1.29, and for Model F3, j = 0.83 < j_cr = 0.91.
Appendix A of Martin & Lubow (2019) also shows analytically that stationary coplanar retrograde (i = 180°) and prograde (i = 0°) orbits should exist. However, we have not been able to find such an orbit numerically for Models F1, F2, and F3.

In the bottom panels of Fig. 2, we show the evolution of e_b, i_b, and i for Models E2 and F2. The time oscillations of e_b, i_b and i have longer periods compared to the corresponding case of the same planet mass with smaller semi-major axis. We show the evolution of one of the crescent orbits, the magenta lines in Model F2. As e_b is very close to 1 during the precession, the binary angular momentum vector changes quickly, resulting in the narrow peaks in the i_b and i plots.

Systems with high angular momentum ratios

To investigate the retrograde librating orbits, we now consider simulations of an equal mass binary with higher angular momentum ratios, j = 1 and j = 2, with binary eccentricities e_b = 0.2, 0.5, and 0.8. These angular momentum ratios are larger than the critical value required for a retrograde libration centre given in Equation (4).

Fig. 5 shows the orbital evolution of simulations with j = 1 (left panels) and j = 2 (right panels). There are librating orbits in the phase diagrams in the first and third columns of Fig. 5 that surround the stationary points with i_s < 90°. The fully retrograde librating orbits (i > 90° throughout the orbit) are seen as the crescent magenta orbits in the i cos φ–i sin φ phase plane. For inclinations less than the stationary inclination, the orbits decrease in extent in the phase plane with increasing initial inclination. They are at most only partially overlapping and are not fully nested. They reach zero extent at the stationary angle, i_s. For inclinations above the stationary angle, the orbits increase in extent in the phase plane. Above the stationary angle the binary eccentricity initially decreases, while below it the binary eccentricity initially increases, starting at φ = 90°, as indicated by the stars in the phase planes. Unlike the prograde librating case, the retrograde librating orbits are not nested about a common centre at the stationary inclination.

STATIONARY AND CRITICAL ANGLES

We compare the results of our numerical simulations to the analytic results presented in Martin & Lubow (2019) for the stationary inclination (the inclination at the centre of the libration island), i_s, and the critical minimum inclination angle, i_min, that separates the prograde circulating orbits from the librating orbits. We also calculate numerically the critical maximum inclination for librating orbits, i_max. The analytic results are based on secular equations with the quadrupole approximation for the binary potential (Farago & Laskar 2010). The equations are expected to break down for orbits that are close to the binary.

Stationary inclination

The stationary inclination i_s depends only on the eccentricity of the binary, e_b, and the ratio of the angular momentum of the particle to the angular momentum of the binary, j. The equation, which we refer to as Equation (5), is given in Martin & Lubow (2019).

Prograde Stationary Inclination

The solid lines in Fig. 6 plot Equation (5) in the prograde case for e_b = 0.2 (blue line), 0.5 (black line) and 0.8 (red line) as a function of the planet-to-binary angular momentum ratio j. For fixed e_b, the stationary inclination i_s decreases monotonically with increasing j, and for fixed j it increases monotonically with increasing e_b.
The stationary inclinations for the models in Table 1 were determined numerically from the simulations. The magenta dots in Fig. 6 correspond to the models with f_b = 0.5 and the cyan dots correspond to the models with f_b = 0.1. The results confirm the prediction that for fixed binary eccentricity the stationary inclination depends only on j, independent of f_b. The simulations are in very good agreement with the analytic results.

The quadrupole approximation made in deriving Equation (5) is more accurate for larger planet orbital radii a. To test the accuracy of the prograde analytic solution, we consider simulations with a close-in planet. In Fig. 7 we plot Equation (5) in the prograde case for e_b = 0.2 (blue line), 0.5 (black line), and 0.8 (red line) as a function of a. The points show simulation results for numerically determined stationary inclinations; the dot colours correspond to the binary eccentricity of the analytic lines. The orbit of the planet is unstable if a is less than about 2.3 a_b. Thus, the innermost dots that we obtain from our simulations are for a planet at a = 2.3 a_b (black line) and 2.4 a_b (red and blue lines). The simulations are in very good agreement with the analytic results at large a, but deviate somewhat at smaller values of a.

The black dotted lines in Fig. 8 and Fig. 9 plot the prograde stationary inclinations i_s from our simulations, and the magenta lines in the same figures plot the analytic solution for the prograde stationary inclination given by Equation (5) as a function of binary eccentricity e_b, with all parameters fixed except the binary eccentricity. Fig. 8 shows the results for a planet at r = 5 a_b while Fig. 9 refers to the same system but with the planet at r = 20 a_b. The upper and lower panels of the two figures show the critical angles for an equal mass binary and a binary with f_b = 0.1, respectively. The left and right panels of the two figures show results for different planet masses. The black dotted lines are in very good agreement with the analytic results. The prograde stationary inclination values for the low mass planet are rather insensitive to the location of the planet or to f_b, so the lines look similar and lie in the range 80°–90°, since j is small (see Fig. 6). On the other hand, the stationary inclination is sensitive to e_b for the high mass planet. The stationary inclination angle is smaller for smaller binary eccentricity and smaller binary mass fraction (larger j).

Retrograde Stationary Inclination

The retrograde stationary inclination is given by Equation (5) with the negative square root taken. The solid lines in Fig. 10 show the analytic solutions for e_b = 0.2 (blue line), 0.5 (black line), and 0.8 (red line) as a function of angular momentum ratio j. The retrograde stationary inclination, i_s, decreases monotonically with increasing j. However, the behaviour with binary eccentricity is more complicated, as we discuss further below. The six dots, whose colours correspond to the e_b values of the analytic curves, represent i_s for Models G1 to H3. The simulations are in very good agreement with the analytic predictions.

There are two major qualitative differences between the prograde and retrograde stationary orbits. In the prograde case, for any value of the planet-to-binary angular momentum ratio j there is a stationary inclination value about which there are nested librating orbits in the i cos φ–i sin φ plane (e.g., Figs. 1 and 7). But in the retrograde case, we see from Fig. 10
that i_s reaches 180° for j = j_cr, given by Equation (4). For j < j_cr there are no stationary noncoplanar librating orbits, as discussed in Section 2.4.

The second difference between the prograde and retrograde stationary orbits is that for any fixed j, the stationary inclination angle i_s increases monotonically with e_b in the prograde case, but not generally in the retrograde case. The difference is seen by comparing Fig. 6 (prograde) and Fig. 10 (retrograde): the curves in the prograde case for different e_b do not intersect, except at j = 0 and i_s = 90° (the upper limit of prograde tilts), while in the retrograde case they do intersect and cross. To confirm this crossing in the retrograde case, note that the intersection implies that i_s is independent of e_b at some fixed value of j. This condition can be written as ∂(cos i_s)/∂e_b = 0, where cos i_s is given by Equation (5) in the retrograde case. This condition has an analytic solution for the intersection point at j = j_int = 2/√3 and i_s = i_int, in agreement with the point of intersection in Fig. 10. For fixed j greater (smaller) than j_int, the stationary angle i_s increases with decreasing (increasing) binary eccentricity e_b.

Critical inclinations for libration

The critical minimum inclination angle between the prograde circulating and librating orbits can be determined analytically. We follow the description in Section 3.4 of Martin & Lubow (2019). There are two branches, one with lower j and one with higher j, based on the sign of a parameter χ (Equation (9)). The minimum tilt angle occurs where φ = 90°. If χ > 0, the minimum tilt angle for libration to occur is given by Equation (10), while if χ < 0 it is given by Equation (11).

The red and green solid lines in Fig. 8 and Fig. 9 plot the analytic solutions for i_min obtained from Equation (10) (red segments, for χ > 0) and Equation (11) (green segments, for χ < 0) as a function of e_b. We also determined the critical angles numerically in our simulations. The green dotted lines and blue dotted lines in Fig. 8 and Fig. 9 show the simulation results for the minimum and maximum inclinations, respectively. Fig. 8 and Fig. 9 show the critical angles for librating orbits of both the low mass planet (m_p = 0.001 m_b) and the high mass planet (m_p = 0.01 m_b), at r = 5 a_b and r = 20 a_b respectively. Note that the dotted lines are shorter in the low binary mass fraction plots because some of the orbits are unstable in our simulations; the analytic solutions, however, cover the entire range of binary eccentricities. The analytic solutions and the three-body simulations are in very good agreement. The comparison shows that both branches of the i_min analytic solutions (red and green) agree well with the simulations.

For a low mass planet, the value of i_min is rather insensitive to the location of the planet or to f_b, so the curves look similar in each panel. This insensitivity can be understood from the fact that such models have small values of the angular momentum ratio j. In the limit that j goes to zero, we have from Equation (9) that χ > 0 for e_b > 0. Therefore, i_min is given by Equation (10) and the plotted curves should be nearly entirely red, rather than green. In this limit, Equation (10) reduces to

cos i_min = √( 5 e_b² / (1 + 4 e_b²) ),   (12)

which implies that i_min decreases monotonically from 90° to 0° as e_b increases from 0 to 1, as we find in the low mass planet plots. In the limit that j is large, we have that χ < 0 in Equation (9), and the entire curve for i_min should be green, as we find in the lower right panel of Fig. 9.
In that limit, Equation (11) applies and

i_min = cos⁻¹ √(3/5) ≈ 39.2°,   (13)

independent of e_b, as given by equation (40) in Martin & Lubow (2019). This minimum angle for libration is the same as the so-called Kozai-Lidov angle of 39.2° (Kozai 1962; Lidov 1962). Therefore, in this high-j limit, i_min should be constant and lie along the χ < 0 (green) branch, independent of e_b for e_b < 1. In the lower right panel of Fig. 9 (the highest j panel), we find that the green line is roughly what we predict. (There is a small red region close to e_b = 1 that is not visible in the plot.) The upper left panel in this figure has a smaller j value for a given e_b than the lower right panel. In going from the former panel to the latter, we see that the behaviour of i_min approaches the expectations of Equation (13).

The lower right panel of Fig. 9 shows the results for the simulations with the same parameters as those in the upper right panel, but with a smaller binary mass fraction. In the smaller binary mass fraction case, there are no retrograde circulating orbits, only the crescent shaped orbits described in Fig. 4. Thus, the maximum libration angle (the blue dotted line) shows a different trend compared to the other parameter combinations, which do have retrograde precessing orbits.

DISCUSSION AND CONCLUSIONS

In this paper, we investigated the orbital evolution of a misaligned circular orbit planet with nonzero mass around an eccentric orbit binary by means of numerical simulations. The planet and binary interact gravitationally and the orbits of both vary in time. In particular, both undergo nodal precession in the inertial frame. In our suite of three-body simulations, we consider a low mass planet with m_p = 0.001 m_b at r = 5 a_b and a high mass planet with m_p = 0.01 m_b at r = 5 a_b and at r = 20 a_b, along with some even higher angular momentum third bodies. We considered different values of the eccentricity of the binary, e_b, its mass fraction, f_b, and the planet's initial inclination i. To map out the possible orbits in these systems, we concentrated on numerically determining the transitions between the orbit families (circulating and librating, for both prograde and retrograde orbits). In addition, we determined the stationary orbits for which the relative tilt and nodal phase between the planet orbit and binary orbit are constant in time.

For a very small planet mass, there are two stationary orbital states: coplanar and polar. In the polar state, the stationary planet-to-binary tilt is 90° and the angular momentum of the planet is along the binary eccentricity vector. The stationary states in the case that the planet mass is nonzero are a generalisation of the polar state, but with the relative orientation not being perpendicular and the nodal phase not being constant in time in the inertial frame.

Equations (5), (9), (10), and (11) predict that the only parameters that control the planet-to-binary tilt for the stationary orbit, and the minimum tilt for the transition from circulation to libration, are the binary eccentricity e_b and the planet-to-binary angular momentum ratio j. Our numerical results agree with this prediction. Other parameters, such as the binary mass fraction f_b, only cause changes in these angles through their dependence on e_b or j. For example, in Fig. 6 we see that the stationary angle depends only on j for fixed e_b for different values of binary mass fraction f_b.
In addition, the general agreement between the simulations and analytic predictions across a range of parameters implies that this dependence holds. These angles are also related to the evolution of discs: simulations suggest that a prograde disc approaches the stationary angle given by Equation (5), if we take j to represent the disc-to-binary angular momentum ratio.

The numerical results agree very well with the analytic equations for the stationary and minimum libration tilts given in Martin & Lubow (2019) (see Figs. 6, 8, 9, and 10). These analytic equations are based on the quadrupole approximation for the secular binary gravitational field (Farago & Laskar 2010). We find numerically that this approximation holds well, even for orbits that are fairly close to the binary, ∼3 a_b, which is near the orbital radius where instability sets in (see Fig. 7).

As predicted analytically, the main effect of increasing the angular momentum ratio j for fixed e_b is to monotonically decrease the relative planet-to-binary stationary tilt i_s in the prograde case (where i_s ≤ 90°) (see Fig. 6). In addition, this stationary tilt increases with increasing e_b for fixed j in the prograde case. The behaviour of the stationary tilt in the retrograde case is more complicated, but agrees with the analytic predictions given in Martin & Lubow (2019). In this case, the stationary inclination for noncoplanar orbits decreases with increasing j for fixed e_b, as in the prograde case. But i_s changes from increasing with e_b to decreasing at j = 2/√3. In addition, there are no noncoplanar stationary orbits below a certain j value, denoted by j_cr, that depends on e_b (see Fig. 10 and Equation (4)). This property does not hold in the prograde case.

Another difference between the prograde and retrograde cases is the topology of librating orbits. In the prograde case, the librating orbits are always nested in the i cos φ–i sin φ phase plane about the point with i = i_s and φ = ±90° (e.g., red and cyan lines in Fig. 1). In the retrograde case, for j > j_cr described above, librating orbits are not fully nested within each other and do not orbit about the stationary point in the i cos φ–i sin φ phase plane (e.g., magenta lines in Fig. 5). In this phase plane, prograde librating orbits have an oval shape, while retrograde librating orbits have a crescent shape.

The variation of e_b in time is significantly larger for a higher mass planet. In the simulation with initial conditions e_b = 0.2, f_b = 0.1 and i = 170°, the binary eccentricity can be excited to values very close to 1 (see Fig. 4). This behaviour is similar to what occurs with Kozai-Lidov oscillations in which the outer object is a planet (e.g. Naoz 2016).

The recent release of data from Gaia has allowed the kinematics of the eccentric, close to equal mass binary HD 106906 in the Lower Centaurus Crux group to be better characterized (Bailey et al. 2014). This system hosts both a wide asymmetric debris disc and a planetary-mass companion. The formation mechanism of the planet and the stability of its orbit are still under debate (Rodet et al. 2017; De Rosa & Kalas 2019). The binary components have nearly equal masses.

Figure 8. Stationary inclination (i_s), critical minimum inclination (i_min) for libration, and critical maximum inclination for libration, as a function of the binary eccentricity, with the planet orbiting at r = 5 a_b, for different binary mass fractions f_b = 0.5 (upper panels) and 0.1 (lower panels), for the lower mass planet (left panels) and the high mass planet (right panels).
The dotted lines plot the results of numerical simulations, while the solid lines are from the analytic model. The green dotted lines show the boundary between the prograde circulating and librating orbits, while the blue dotted lines show the boundary between librating and retrograde circulating orbits. The black dotted lines show i_s obtained from our simulations. The magenta lines plot the analytic solutions for i_s from Equation (5). The red lines plot the analytic solutions for i_min from Equation (10).

The planet, HD 106906b, was directly imaged with a projected separation of 738 au and was found to be oriented at 21° from the position angle of the disc midplane, suggesting that the planet's orbit is not coplanar with the system (Kalas et al. 2015). The planet mass was inferred to be m_p = 11 ± 2 M_J (Bailey et al. 2014). While the orbital properties of the planet are uncertain, if we assume that the semi-major axis is 738 au and the eccentricity is zero, the ratio of the angular momentum of the planet to the angular momentum of the binary is j = 1.13. According to our analytic calculations, the stationary inclination is i_s = 73.5° and the minimum critical angle between the prograde circulating and librating orbit regions is i_min = 41.6°.

The expected orbital properties of a high inclination circumbinary planet depend on the mass of the gas disc in which it forms, and on when the planet forms. Previous work showed that prograde discs of high mass align to the generalised polar state at tilts that are less than 90°. A giant planet that formed in such a disc would be expected to open a gap and not remain coplanar with the disc (Pierens & Nelson 2018). Such a massive planet would undergo libration oscillations in its orbit about an inclination of less than 90°, even after the disc has dispersed. As the disc loses mass, the inclination of the generalised polar state moves closer towards 90°. Thus, the timescale of the disc dispersal may affect the final inclination of debris left over from the gas disc. A low mass gas disc, or a massive disc that is dispersed on a sufficiently long timescale, forms a debris disc that orbits close to polar, as is the case for 99 Herculis, which is 3° away from polar alignment (Kennedy et al. 2012). A planet (e.g., an Earth-like planet) formed from the resulting polar debris disc would lie on a stationary polar orbit. A debris disc that is not close to polar or coplanar would be subject to violent collisions due to nodal differential precession.

ACKNOWLEDGEMENTS

We acknowledge support from NASA through grants NNX17AB96G and 80NSSC19K0443.

Figure 10. Comparison of the retrograde analytic solution given by Equation (5) with simulation results of Models G1 to H3 for the retrograde stationary tilt i_s of the planet relative to the binary, as a function of the planet-to-binary angular momentum ratio j, with binary eccentricity e_b = 0.2 (blue line), 0.5 (black line), and 0.8 (red line). The six dots represent the simulation results with j = 1 and j = 2. The curves reach i_s = 180° at j = j_cr, given by Equation (4).
2019-08-17T18:41:46.000Z
2019-08-17T00:00:00.000
{ "year": 2019, "sha1": "caf1ed21a0e86d79283e84f69dc3f0ce885721fa", "oa_license": null, "oa_url": "https://academic.oup.com/mnras/article-pdf/490/4/5634/30793830/stz2948.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "e75c533fdac65f3ec4972150619709cc66f1874f", "s2fieldsofstudy": [ "Physics", "Geology" ], "extfieldsofstudy": [ "Physics" ] }
232484283
pes2o/s2orc
v3-fos-license
Value-at-Risk Analysis for Measuring Stochastic Volatility of Stock Returns: Using a GARCH-Based Dynamic Conditional Correlation Model

To assess the time-varying dynamics in value-at-risk (VaR) estimation, this study employs an integrated approach of dynamic conditional correlation (DCC) and generalized autoregressive conditional heteroscedasticity (GARCH) models on daily stock returns of emerging markets. Daily log-returns of three leading indices from each market — KSE100, KSE30, and KSE-ALL from the Pakistan Stock Exchange, and SSE180, SSE50, and SSE-Composite from the Shanghai Stock Exchange — during the period 2009–2019 are used in DCC-GARCH modeling. Joint DCC parametric results for the stock indices show that, even in highly volatile stock markets, the bivariate time-varying DCC model provides better performance than traditional VaR models. Thus, the parametric results of the DCC-GARCH model indicate the effectiveness of the model in dynamic stock markets. This study is helpful to stockbrokers and investors in understanding the actual behavior of stocks in dynamic markets. Subsequently, the results can also provide better insights into forecasting VaR while considering the combined correlational effect of all stocks.

Introduction

Predicting the risk and return associated with a specific index or portfolio in dynamic markets has become a major challenge for investors. High market risk reflects the presence of unpredictability, or volatility, in stock prices, and high volatility in stock returns is often driven by unstable political and economic conditions in a country (Afzal et al., 2019). Increasing market risk has brought new challenges in finding an effective way of predicting risk under dynamic conditions. Value-at-risk (VaR) is the most widely used standardized tool for the measurement of market risk (Jorion, 1996).

In dynamic markets, stock returns are not identically distributed, nor can future trends be predicted from historical patterns alone. Therefore, traditional volatility models, such as the historical simulation, variance-covariance, and Monte-Carlo simulation methods (Marshall & Siegel, 1997), are not capable of estimating market risk correctly: they either rest on restrictive assumptions or suffer from underestimation and overestimation (Sampid & Hasim, 2018). Indeed, few tools have been developed that can predict volatility correctly without such issues. This study therefore seeks an efficient stochastic model that can provide better estimates of risk and return in dynamic stock markets, thereby winning the interest of investors in wealth creation. We suggest that employing dynamic conditional models can better capture the volatility of stock returns without restrictive assumptions or problems of under- or overestimation of market risk, thereby strengthening investors' confidence.

Subsequently, modeling the dynamic correlation structure gives insight into both markets' volatility clustering and synchronization in financial series. Under a dynamic correlation structure, dynamic conditional correlation (DCC) and generalized autoregressive conditional heteroscedasticity (GARCH) models are found to be efficient and practicable methods to capture market volatility and compare forecasts of VaR. In terms of model assumptions, this approach can predict VaR better with a time-varying correlation rather than a constant correlation.
In this dynamic structural model, an integrated DCC-GARCH(1,1) model has been used for the estimation of VaR and of the conditional correlations. The findings of the study suggest that adopting an integrated hybrid tool of DCC and GARCH can give better insights into risk estimation under dynamic conditions. Furthermore, this study contributes to the literature by introducing an effective method of risk estimation under dynamic conditions. The remainder of this study is organized as follows: section "Previous Research on VaR and Measurement Tools" describes the theoretical background of relevant studies; section "Data and Empirical Results" presents the materials and methods; section "Key Findings" describes the key findings of the study; and the last section consists of the conclusions, implications, and future research directions.

Previous Research on VaR and Measurement Tools

VaR is popular and widely adopted as a market risk analysis tool, and it measures the volatility of stock indices. It summarizes the maximum possible loss that financial assets can incur over a certain period at a certain confidence level. VaR analysis was initially presented in 1993 in a report on the practices and principles of derivative products published by the Group of Thirty. Afterward, in 1994, J.P. Morgan applied the VaR model to measure market risk in stock exchanges. Nowadays, VaR has been adopted by different financial institutions, for instance, banks, fund managers, insurance companies, and stockbrokers. A wide range of studies on VaR address its significance in volatility measurement, particularly for stocks (Ackermann et al., 1999; Beder, 1995; Carhart, 1997; Favre & Galeano, 2002; Fung & Hsieh, 1997; Marshall & Siegel, 1997).

The Monte-Carlo simulation and variance-covariance methods are based on the assumption that returns are identically distributed and independent of each other. Similarly, historical simulation relies on historic data, assuming that the same trend will be repeated in the future and yield the desired outcome (Hull & White, 1998). Traditionally, historical simulation, Monte-Carlo simulation, and variance-covariance methods assume that stock returns are normally distributed, so that fluctuations in asset prices show persistent volatility over time. However, volatility is not persistent in real-life dynamic conditions (Bansal et al., 2014). Recent studies have therefore proposed more practical VaR measurement models that can better address the nonpersistent nature of volatility.

To address non-normality in time-series volatility, Engle (1982) introduced the ARCH model. Later, building on the ARCH model, Bollerslev (1986) presented the GARCH model with parameters (p,q), which makes volatility measurement easier than with the ARCH model. Similarly, Füss et al. (2007), in their study on volatility measurement, suggested that GARCH-based VaR models are superior to, and outperform, the traditional VaR estimation methods. Working on fund risk, Zhou et al. (2010) considered multiple distributions when assessing the effectiveness of the GARCH model over traditional VaR models; their results showed that the generalized error distribution outperforms the t-distribution and the normal distribution.
Furthermore, to overcome the weaknesses of asymmetric and long-memory volatility effects, several extensions of the GARCH family have been introduced, such as Exponential GARCH (EGARCH), Glosten-Jagannathan-Runkle GARCH (GJR-GARCH), Fractionally Integrated GARCH (FIGARCH), Fractionally Integrated Asymmetric Power ARCH (FIAPARCH), and Hyperbolic GARCH (HYGARCH). Unfortunately, this GARCH family has limitations in measuring the time-varying effect of volatility under dynamic conditions (Francq & Zakoïan, 2010; Yang, 2011). Researchers have found that in dynamic conditions the correlations among stock indices are asymmetric across market movements and the return distribution tails are fatter (Ang & Chen, 2002; Boyer et al., 1997; Kolari et al., 2008; Longin, 2000; Longin & Solnik, 2001; Tastan, 2006). Therefore, return forecasts tend to be under- or overestimated when GARCH family models are used alone.

To obtain appropriate VaR estimates with time-varying effects under dynamic conditions, one possible solution is to apply the DCC model for volatility measurement, as proposed by Engle in 2002 and further modified in 2006. Studies on DCC show that the DCC-GARCH model is more accurate in yielding the conditional variances (Engle, 2002; Tse & Tsui, 2002). Modeling the DCC structure can provide insight into both markets' synchronization and volatility clustering in financial series. Thus, by using conditional correlation and time-varying effects, the DCC model provides a better estimate of the dynamic correlation structure, capturing volatilities and forecasting returns more efficiently than other models (Celik, 2012).

Data and Empirical Results

The data used in this study consist of three stock indices of the Pakistan Stock Exchange (PSX), that is, KSE100, KSE30, and KSE-ALL, and three of the Shanghai Stock Exchange (SSE), that is, SSE180, SSE50, and SSE-Composite. A total of 2,528 daily log-returns for each index are used as sample data, because an equal number of observations for each asset class is necessary for DCC modeling of the conditional correlation (Asai & McAleer, 2009). Data were gathered from the official websites of the PSX and SSE for the period 2009 to 2019. Table 1 shows the summary statistics of each index used in this study, including the time frame of the research and the total number of observations used.

The distribution of the data is slightly skewed to the left; the negative skew values for all indices show that there are more chances of earning negative returns than positive returns. For kurtosis, we have a value of less than 3 in the case of the PSX, which implies that the distribution of the data is platykurtic, but in the case of the SSE the kurtosis is greater than 3, which shows that the distribution of the SSE data is leptokurtic. Skewness and kurtosis are essential for volatility analysis, as kurtosis measures the degree to which a distribution has fat tails. Risk-averse investors always prefer a low-kurtosis distribution, or simply returns that are near the distribution mean (Shanmugam & Chattamvelli, 2016). With positive skewness, it becomes possible to tolerate high kurtosis, since excessive negative returns are avoided and more positive returns become likely. With negative skewness, investors can face extreme negative returns due to the impact of a high excess kurtosis (Jondeau & Rockinger, 2003).

The procedural steps for DCC-GARCH(1,1) are described as follows:
1. The daily log-returns of the series are calculated as

R_t = ln(S_t / S_{t−1}),   (1)

where S_1, …, S_N denote the price series of stocks 1 to N, t represents the time period, and R_t represents the return of a specific stock at time t. Using these log-returns, Figure 1 shows evidence of the existence of volatility clustering in the three selected stock indices of the PSX and of the SSE. Hence, conditional volatility can be modeled in light of this volatility clustering.

2. Financial data are often nonstationary or non-normal; therefore, to check normality, the Shapiro test has been applied to each data set separately. The p-value for each asset under the Shapiro test is less than 5% for all selected indices, which indicates that the data are not normally distributed (Hanusz & Tarasińska, 2015). The data are then standardized to mean M = 0 and variance = 1. On the standardized data set, the Shapiro test shows p-values higher than 5%, which indicates that the data are now normally distributed around the mean and ready for risk modeling.

3. Afterward, it is necessary to check whether the data exhibit volatility clustering. To check this, the Ljung-Box test (Q_k) has been applied to the squared log-returns of each index, which indicates the presence of volatility clustering in each asset class (Peña & Rodríguez, 2002). The Lagrange-multiplier test likewise confirmed the presence of an ARCH effect in the series of log-returns of each asset class (see Table 2). The Ljung-Box test checks for autocorrelation in the data based on the number of lags m; the purpose is to test whether the autocorrelations ρ_1, …, ρ_m of z_t are all 0. The Ljung-Box test can be described in the form of the hypotheses H_0: ρ_1 = ⋯ = ρ_m = 0 against H_1: ρ_k ≠ 0 for some k ∈ {1, …, m}. The test statistic is

Q(m) = n(n + 2) Σ_{k=1}^{m} ρ̂_k² / (n − k),   (2)

where n is the total number of samples used in this study and ρ̂_k is the lag-k sample autocorrelation. H_0 is rejected at level α if Q(m) exceeds the (1 − α)-quantile of the chi-square distribution with m degrees of freedom. Hence, after testing the null hypothesis H_0 (no ARCH effect present), H_0 is rejected because the p-value is less than 5%; thus H_1 is accepted, and an ARCH effect is present in the series of data (Burns, 2005).

4. In the next step, it is important to check the correlation pattern of the returns. Figure 2 shows the autocorrelation function (ACF) of actual returns of the PSX and SSE, and Figure 3 shows the ACF of squared returns. If actual returns are serially uncorrelated, then about 5% of the lags in the ACF plot are expected to fall outside the limits marked by the red dotted lines (Finlay et al., 2011). In Figure 2, we can see that the lags falling outside the limits do not form any pattern, showing a random walk in all the selected stock indices of the PSX as well as the SSE; we can conclude that the actual returns are serially uncorrelated. In Figure 3, however, the lags falling outside the limits (red dotted lines) do form patterns (Madsen, 2007): the initial lags are large, and the ACF then decays. Because the lags falling outside and inside the limits form patterns, the squared returns are not uncorrelated. The squared returns could only be uncorrelated if the actual returns were serially independent, but this is not the case here, which means the actual returns are dependent.
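The paper does not name its software, but steps 1–3 can be reproduced with standard libraries. The sketch below is our illustration; the file and column names are placeholders, and we assume the statsmodels implementations of the Ljung-Box and ARCH LM tests.

import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.diagnostic import acorr_ljungbox, het_arch

prices = pd.read_csv("kse100.csv", index_col=0, parse_dates=True)["close"]

# Step 1: daily log-returns R_t = ln(S_t / S_{t-1}).
r = np.log(prices / prices.shift(1)).dropna()

# Step 2: Shapiro-Wilk normality test, then standardise to mean 0, variance 1.
print(stats.shapiro(r))
z = (r - r.mean()) / r.std()

# Step 3: Ljung-Box test on squared returns (volatility clustering) and the
# Lagrange-multiplier ARCH test; small p-values indicate ARCH effects.
print(acorr_ljungbox(z**2, lags=[10], return_df=True))
print(het_arch(z))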
Because of this serial dependence in the returns, correlation modeling is done using the DCC-GARCH model of Engle (2002).

5. Before applying the DCC volatility matrix, the GARCH means and variances should first be calculated (Sampid & Hasim, 2018). Bollerslev (1986) proposed the GARCH model, which allows the conditional variance to depend on its own previous lags. The GARCH(1,1) model has the following form:

ζ_{u,t} = χ_u + α_{u,t},  with  α_{u,t} = λ_{u,t} ϑ_{u,t},   (4)

λ_{u,t}² = γ_u + α_u α_{u,t−1}² + β_u λ_{u,t−1}²,   (5)

where ζ_{u,t} represents the log-returns of daily stock prices, χ_u represents the conditional log-return mean, α_{u,t} represents the mean residuals, ϑ_{u,t} is white noise with variance 1 and mean 0, and λ_{u,t} is the conditional volatility series; α_u, β_u, and γ_u in Equation 5 are the key parameters of the GARCH(1,1) estimation, T represents the available sample size, and N represents the number of stock series. Thus, the covariance matrix of the DCC model at time t is

Σ_t = G_t P_t G_t,   (6)

and the DCC conditional correlation matrix P_t is then

P_t = G_t⁻¹ Σ_t G_t⁻¹,   (7)

where G_t = diag(λ_{1,t}, …, λ_{N,t}) represents the diagonal matrix of the N conditional volatilities of the stock returns, and λ_{u,t} is the (u,u)th component of the volatility matrix. Based on the above assumptions, the DCC model can be expressed as follows:

P_t = (1 − θ_1 − θ_2) P̄ + θ_1 ϕ_{t−1} + θ_2 P_{t−1},   (8)

where P̄ is the unconditional correlation matrix of ϑ_t, and θ_1 and θ_2 are positive real numbers satisfying 0 ≤ θ_1 + θ_2 < 1. Here ϕ_{t−1} stands for the correlation matrix of the returns over {ϑ_{t−1}, …, ϑ_{t−n}}, for some integer n:

ϕ_{uv,t−1} = Σ_{h=1}^{n} ϑ_{u,t−h} ϑ_{v,t−h} / √( (Σ_{h=1}^{n} ϑ_{u,t−h}²)(Σ_{h=1}^{n} ϑ_{v,t−h}²) ),   (9)

where n can be viewed as a smoothing parameter: the larger the value of n, the smoother the correlational effect (Sampid & Hasim, 2018). Engle (2002) proposed a modified version of the DCC model, in which the correlation matrix explained in Equation 8 can be further defined as

Q_t = (1 − Ω_1 − Ω_2) P̄ + Ω_1 ϑ_{t−1} ϑ′_{t−1} + Ω_2 Q_{t−1},  with  P_t = diag(Q_t)^{−1/2} Q_t diag(Q_t)^{−1/2}.   (10)

In Equation 10, Ω_1 and Ω_2 are positive real numbers with 0 < Ω_1 + Ω_2 < 1, and Q_t is a positive-definite matrix. To renormalize the correlation matrix at each time t − 1, Equation 10 uses the lagged standardized residuals ϑ_{t−1} (for more information, see Tsay, 2018).

Table 3 shows the forecasted VaR of all three indices of each selected stock exchange at different quantiles of the standardized residuals, calculated using the DCC-GARCH model. The quantile shows the probability attached to the volatility scores of the three indices. The joint VaR value at α = 5% is 1.38229 for the PSX and 0.87883 for the SSE, showing the combined portfolio result.
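For concreteness, the recursion of Equation (10) and the conversion of Σ_t into portfolio VaR figures such as those in Table 3 can be sketched in a few lines of numpy. This is our minimal illustration, not the study's estimation code: the parameters Ω_1 and Ω_2 are placeholders that would in practice be fitted by maximum likelihood, and the equal portfolio weights and Gaussian quantile are assumptions.

import numpy as np
from scipy import stats

def dcc_correlations(eps, omega1=0.02, omega2=0.95):
    # eps: T x N matrix of standardized GARCH residuals (the theta_t series).
    T, N = eps.shape
    P_bar = np.corrcoef(eps, rowvar=False)        # unconditional correlation
    Q = P_bar.copy()
    P = np.empty((T, N, N))
    for t in range(T):
        d = 1.0 / np.sqrt(np.diag(Q))
        P[t] = Q * np.outer(d, d)                 # diag(Q)^(-1/2) Q diag(Q)^(-1/2)
        e = eps[t][:, None]
        Q = (1 - omega1 - omega2) * P_bar + omega1 * (e @ e.T) + omega2 * Q
    return P

def portfolio_var(sigma, P_t, alpha=0.05):
    # sigma: one-step-ahead GARCH volatilities (N,); P_t: DCC correlation (N,N).
    N = len(sigma)
    w = np.full(N, 1.0 / N)                       # assumed equal weights
    cov = np.outer(sigma, sigma) * P_t            # Sigma_t = G_t P_t G_t
    return -stats.norm.ppf(alpha) * np.sqrt(w @ cov @ w)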
Key Findings

As the study aims to measure future market risk using VaR, volatility forecasts have been calculated to find accurate measures of stock returns. The realized and forecasted correlation results of the DCC-GARCH model show the forecast of the returns of the combined stocks. Using DCC, this study arrives at a key finding: the forecasted results are more realistic than those of the traditional models. Figure 4 shows the realized conditional correlation between the selected indices of the PSX and SSE over the period 2009 to 2019, and Figure 5 shows the forecasted correlation of each stock index separately. In all, 20 lags from the estimated correlation matrix and 10 lags from the forecasted matrix have been selected to better present the conditional correlation between the stock indices. cor_30.all is the correlation between the KSE30 and KSE-ALL indices, cor_KSE100.all is the correlation between KSE100 and KSE-ALL, and cor_100.30 is the correlation between KSE100 and KSE30. Similarly, cor-180.50 is the correlation between the SSE180 and SSE50 indices, cor-180.comp is the correlation between the SSE180 and SSE-Composite indices, and cor-50.comp is the correlation between the SSE50 and SSE-Composite indices.

The DCC model of Engle (2002) can better estimate the time-varying correlation between the asset classes. The main finding of this article is that, even in highly volatile stock markets, the bivariate time-varying dynamic conditional correlation model provides better performance than traditional models. See Figure 6 for a comparison of the captured volatilities across the sample indices, which endorses this finding. Figure 6 shows the volatility capture for each selected stock index, comparing the simple GARCH model with the DCC model. We can notice that the volatility captured by the GARCH(1,1) method is underestimated, whereas the volatility captured through the DCC model is addressed more accurately. The GARCH family models alone are unable to capture the volatility effectively. The DCC model is a much more effective model for addressing the volatility, as the parameters estimated by the DCC model indicate the effectiveness of the model in the selected stock markets.

Table 4 presents the estimated parameters from the DCC model for all the selected indices of the PSX and SSE, including the p-values of each estimated parameter. The results reveal that the parameters estimated in Table 4 by DCC-GARCH(1,1) are highly significant. The parameter β1 is significantly positive, which links the risk measures to the conditional variance and shows substantial, positive autocorrelation of the returns of all indices. The persistence measure is α1 + β1, which should be less than 1, but here it is almost equal to 1 for all stock indices, whether of the PSX or the SSE. This means that the estimated volatility process is close to an integrated DCC-GARCH process, which is nonstationary. More significance attaches to the joint DCC parameters, dcc α1 and dcc β1, since the individual parameters α1 and β1 belong to the univariate GARCH models. In Table 4, dcc α1 + dcc β1 is less than 1, which satisfies the stationarity condition of the DCC model, indicating that no volatility clustering behavior remains after the modeling of the selected stock indices of the PSX and SSE.

The findings of the study are helpful to stockbrokers and investors in understanding the actual behavior of stocks in dynamic markets. Subsequently, the results can also provide better insights into the forecasting of VaR while considering the combined correlation effect of all stocks, because all stocks are assumed to exhibit serial correlation and time-varying effects. Therefore, this model gives reasonable estimates to investors in a highly volatile market who wish to know what kind of correlation and volatility dynamics are present in their potential portfolios, so as to maximize returns.

Conclusion

The nature of the model is critical in addressing the volatility of any stock market portfolio. Investors are always seeking better forecast methods to select potential portfolios for their investment. This study finds that, even in highly volatile stock markets, the bivariate time-varying DCC model provides better performance than traditional models. The joint DCC parameters, dcc α1 and dcc β1, are more significant than the individual parameters α1 and β1 of the univariate GARCH models. This indicates that no volatility clustering behavior remains.
The volatility captured by the GARCH(1,1) method is underestimated, but the volatility captured through the DCC model is addressed more accurately. The GARCH family models alone are unable to capture the volatility effectively. The DCC model is a much more effective model for addressing the volatility, as the parameters estimated by the DCC model indicate the effectiveness of the model in the selected stock markets. Thus, this study contributes to the body of knowledge by introducing an efficient method of risk estimation in dynamic markets. This model offers a new way of considering risk, rather than relying on traditional methods. Furthermore, it is suggested that only dynamic models should be considered for risk estimation in dynamic markets. Similarly, investors and financial experts can increase their market confidence by adopting this DCC-GARCH model for market risk estimation under dynamic conditions.

Future work could address the estimation of conditional correlation using the traditional GARCH family models together with dependence measurement by copulas in volatile stock markets, where the impact of news is powerful. One could also check the compatibility of copulas with individual stock dynamics.

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) received no financial support for the research, authorship, and/or publication of this article.
2021-04-02T13:13:40.629Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "78546a17e1fd2d9cc32f7a13eb6d390978d7ac17", "oa_license": "CCBY", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/21582440211005758", "oa_status": "GOLD", "pdf_src": "Sage", "pdf_hash": "78546a17e1fd2d9cc32f7a13eb6d390978d7ac17", "s2fieldsofstudy": [ "Business", "Economics" ], "extfieldsofstudy": [] }
9725530
pes2o/s2orc
v3-fos-license
Spanning Trees of Bounded Degree Graphs

Robson

We consider lower bounds on the number of spanning trees of connected graphs with degree bounded by $d$. The question is of interest because such bounds may improve the analysis of the improvement produced by memorisation in the runtime of exponential algorithms. The value of interest is the constant $\beta_d$ such that all connected graphs with degree bounded by $d$ have at least $\beta_d^\mu$ spanning trees where $\mu$ is the cyclomatic number or excess of the graph, namely $m-n+1$. We conjecture that $\beta_d$ is achieved by the complete graph $K_{d+1}$ but we have not proved this for any $d$ greater than 3. We give weaker lower bounds on $\beta_d$ for $d\le 11$.

Introduction

We consider lower bounds on $SP(G)$, the number of spanning trees of a connected graph $G$. Clearly a tree has only one spanning tree, and adding a single edge to a tree creates a cycle which can be broken in at least 3 ways, giving 3 spanning trees. Adding a second edge does not necessarily multiply $SP(G)$ by 3 again, since a square with one diagonal edge has 8 spanning trees rather than 9. We are interested in lower bounds which are exponential in the number of edges added, that is the cyclomatic number of the graph, but no such bound can exist for general graphs. Accordingly we consider graphs for which an upper bound holds on the maximum degree.

This study was motivated by the analysis of the effectiveness of memorisation in reducing the computation time of some graph algorithms, an effectiveness which depends on the number of small induced subgraphs encountered ([2]). The most effective way known to upper bound this number of small induced subgraphs is to count the number of their spanning trees; knowing that each subgraph has many spanning trees enables us to reduce the upper bounds so obtained.

We will make considerable use of two well known properties of a spanning tree chosen uniformly:

- The electrical property: the probability that an edge $(u,v)$ is included in the spanning tree is $res(u,v)/(1 + res(u,v))$, where $res(u,v)$ is the resistance between $u$ and $v$ of the electrical network obtained by deleting the edge $(u,v)$ and replacing every other edge by a 1 ohm resistor.
- The random walk model: the tree is exactly that produced by a random walk on the graph, where an edge traversed in the random walk is added to the tree precisely if it arrives at a node not already in the tree.

Some definitions and a conjecture

We define the excess of a connected graph $G = (V,E)$ as the number of edges minus the number in a spanning tree, that is the cyclomatic number $\mu(G) = |E| - |V| + 1$. Then $\beta(G) = SP(G)^{1/\mu(G)}$ is the geometric mean of the factors by which $SP(G)$ is multiplied in adding the excess edges. We define $\beta_d$ as the minimum of $\beta(G)$ over all graphs $G$ with vertex degrees at most $d$.

Conjecture 1: $\beta_d$ is achieved by the complete graph $K_{d+1}$.

A General Lower Bound

Since adding a new vertex of degree $d$ multiplies $SP(G)$ by at least $d$ and increases $\mu(G)$ by exactly $d-1$, we have a simple lower bound of $d^{1/(d-1)}$ for $\beta_d$, which is obviously rather weak because a graph of maximum degree $d$ cannot be built up by repeatedly adding new vertices of degree $d$. This section will strengthen this bound for small $d$.
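As a sanity check on these definitions, $SP(G)$ can be computed by the matrix-tree theorem as a cofactor of the Laplacian. The sketch below is our illustration (using networkx, which the paper does not mention); for $K_4$ it gives $SP = 16$, $\mu = 3$ and $\beta = 16^{1/3} \approx 2.52$, the value that Conjecture 1 asserts equals $\beta_3$.

import numpy as np
import networkx as nx

def spanning_trees(G):
    # Matrix-tree theorem: SP(G) is any cofactor of the Laplacian.
    L = nx.laplacian_matrix(G).toarray().astype(float)
    return round(np.linalg.det(L[1:, 1:]))

def beta(G):
    mu = G.number_of_edges() - G.number_of_nodes() + 1
    return spanning_trees(G) ** (1.0 / mu)

K4 = nx.complete_graph(4)
print(spanning_trees(K4), beta(K4))   # 16, 16**(1/3) ~ 2.5198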
Adding a vertex

We first consider the effect on $SP(G)$ of adding a new vertex. When a new vertex $v$ of degree $c$ is added, the number of spanning trees is obviously multiplied by at least $c$. The multiplying factor is in fact lower bounded by an $f_{c,d}$ strictly greater than $c$, given an upper bound $d$ on the degree of the graph (after the addition).

Conjecture 2: $f_{c,d}$ is achieved when $G$ is $K_d$.

Consider a graph $G$ with $c$ distinguished vertices $u_i$, $1 \le i \le c$, and $G'$ consisting of $G$, a new vertex $v$ and $c$ new edges $(v,u_i)$. Define the multiplying factor $f(G) = SP(G')/SP(G)$. When $G$ is $K_d$, we can compute $SP(G')$ exactly by induction, using the electrical property.

Lemma: Conjecture 2 is true for $c \le d \le 11$.

Proof: First we claim that $f(G)$ is decreased by adding any new edge to $G$. This can be deduced from the electrical property, or it is a consequence of the more general result of [1], Lemma 3.2, which shows that the event $e \in T$ ($T$ a random spanning tree) is negatively associated with any monotone combination of other such events. Therefore adding $e$ makes $v$ more likely to be a leaf and so decreases the ratio $SP(G')/SP(G)$. Also $f(G)$ is not changed by adding a new vertex to $G$ connected to one existing vertex, so adding a new vertex connected to two or more existing vertices again decreases $f(G)$.

Defining $G_k$, for any $k \ge |G| - c$, as $G$ with new vertices and edges added so that it consists of the $u_i$, still with their same induced subgraph, together with a $k$-clique and enough edges into the clique from each $u_i$ to make its degree $d-1$, we conclude that $f(G) \ge f(G_k) \ge f(G_{k+1})$. Then we consider the limit as $k \to \infty$. Considering the random walk model of a random spanning tree, we see that in the limit $G_k$ behaves exactly like a weighted graph $W$ consisting of all the $u_i$ with their same induced subgraph and a single vertex $w$ connected to each $u_i$ by an edge of weight $d-1$ minus the degree of $u_i$ in this induced subgraph.

Now a lower bound on $f(G)$ can be computed by a (lengthy) computation over all possible induced subgraphs on $c$ vertices with degree less than $d$. For $c$ up to 10, the possible subgraphs were generated by a relatively simple program. For the 1018997864 cases when $c = 11$ we used Brendan McKay's geng program ([3]). The results of this computation are shown in Table 1. In each case the smallest value of $f$ was given by the induced subgraph $K_c$, so that the lower bound is strict, being given, by another application of the random walk argument, by any graph $G$ in which the vertices of the $K_c$ are all connected to the same $d-c$ other vertices.

Table 1. The multiplying factors $f_{c,d}$
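The limiting weighted graph $W$ makes this computable directly: weighted spanning trees are counted by the weighted matrix-tree theorem. The sketch below is our reconstruction of that computation, not the program used for Table 1; the graph encoding and the unit weights on the new edges $(v,u_i)$ are assumptions.

import numpy as np
import networkx as nx

def weighted_spanning_trees(G):
    # Weighted matrix-tree theorem: any cofactor of the weighted Laplacian.
    L = nx.laplacian_matrix(G, weight="weight").toarray().astype(float)
    return np.linalg.det(L[1:, 1:])

def f_limit(H, d):
    # H: induced subgraph on the c attachment vertices u_i (unit edge weights).
    W = nx.Graph()
    W.add_nodes_from(H.nodes())
    W.add_weighted_edges_from((u, v, 1.0) for u, v in H.edges())
    for u in H.nodes():                    # w joined by weight d-1-deg(u_i)
        W.add_edge(u, "w", weight=(d - 1) - H.degree(u))
    Wv = W.copy()
    for u in H.nodes():                    # new vertex v, unit edges (v, u_i)
        Wv.add_edge(u, "v", weight=1.0)
    return weighted_spanning_trees(Wv) / weighted_spanning_trees(W)

print(f_limit(nx.complete_graph(3), 11))   # candidate lower bound for f_{3,11}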
Of these, exactly $2\Pi$ occur in both sets (those consisting of a spanning tree of each component plus one of the edges $(u_i, v_i)$), so that there are at least $(2f_{2,d} - 2)\Pi$ spanning trees of $G$.

In the general case we consider the bipartite graph $C$ of the cut, consisting of the $c$ edges joining $U$ and $V$. Without loss of generality we suppose that $u_1$ has the highest degree $maxu$ (in $C$) of all vertices of $U$, that $v_1$ has the highest degree $maxv$ (in $C$) of all vertices of $V$, and that $maxu \ge maxv$. We give a lower bound on the number of spanning trees of $G$ having one of the following forms:

- trees with exactly one cut edge ($c\Pi$);
- trees with at least 2 cut edges with a common end point at $u_1$ or $v_1$ (and no other cut edges) ($(f_{maxu,d} - maxu)\Pi$ and $(f_{maxv,d} - maxv)\Pi$ respectively);
- for every remaining pair of cut edges, trees containing exactly that pair ($(f_{2,d} - 2)\Pi$ if the pair have a common end point and $2(f_{2,d} - 2)\Pi$ otherwise).

The number of pairs of edges with a common end point other than $u_1$ or $v_1$ depends on the degrees in $C$. For given $maxu$ and $maxv$, our lower bound is the sum (…)$\Pi$ of the contributions listed above. This expression is minimised when the degrees $d_C(u_i)$ and $d_C(v_i)$ are chosen according to the "greedy" partition, that is (for instance for $U$) the lexicographically greatest partition of $c$ into positive parts respecting the necessary constraints $d_C(u_i) \le d_C(u_1) = maxu$ and $|U| \ge maxv$. To verify that the lower bound obtained is always at least $f_{c,d}$, it suffices to test that it is so for every combination $2 \le c < d \le 11$, $maxu \ge maxv \ge 2$, $maxu + maxv \le c + 1$, for their respective greedy partitions. The 200 relevant conditions, along with their greedy partitions, are given in an appendix.

Dissecting a graph

We consider the process of cutting a graph of maximum degree $d$ until nothing remains but singleton vertices. Using the previous result, the number of spanning trees of the original graph is at least the product of the multipliers associated with each cut. At each cut we choose one of the available cuts of minimum size. As a result, the initial cut has size at most $d$ (which can only happen if the graph is $d$-regular) and all subsequent cuts have size at most $d-1$. For each possible size $c$ of cut we note its impact on the number of components (increased by 1), the number of edges (reduced by $c$) and the product of the multipliers (multiplied by $f_{c,d}$).

Linear Programming

We write the obvious constraints on the number of cuts $n_c$ of each size $c$: the total number of cuts is $n-1$, $\sum_{c=1}^{d} n_c = n - 1$, and the total number of edges cut is $m$, $\sum_{c=1}^{d} c\, n_c = m$. We deduce that $\sum_{c=1}^{d} (c-1)\, n_c = \mu(G)$. We have also the constraints that $n_d \le 1$ and that $n_d = 0$ if the graph is not $d$-regular. Then we divide by the excess to give constraints on $x_c$, the normalised number of cuts of each size, and use linear programming to solve for (a lower bound on) the logarithm of the product of multipliers obtained under the constraint $\sum_{c=1}^{d} (c-1)\, x_c = 1$. The constraints on $n_d$ give us that $x_d \le 1/min$, where $min$ is the minimum excess of any $d$-regular graph, from which we exclude $K_{d+1}$, for which the conclusion is already known to be true. For instance for $d = 10$, $min = 49$, given by the 10-regular graph of order 12.

Regular graphs

The critical case is that of certain $d$-regular graphs, namely those for which the first cut is a $d$-cut, and we first look in detail at this case.
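As an illustration of the "greedy" partition used above (our own reading of the definition, not code from the paper): the lexicographically greatest partition of $c$ with largest part $maxu$ and at least a given number of parts can be built by always taking the largest part that still leaves room for the remaining mandatory parts.

```python
def greedy_partition(c, maxu, min_parts):
    """Lexicographically greatest partition of c into positive parts,
    each part <= maxu, with at least min_parts parts in total."""
    assert 1 <= maxu <= c and maxu + min_parts <= c + 1  # feasibility
    parts, remaining = [maxu], c - maxu
    while remaining > 0:
        # Leave at least 1 unit for each part still needed below the minimum.
        reserved = max(0, min_parts - len(parts) - 1)
        part = min(maxu, remaining - reserved)
        parts.append(part)
        remaining -= part
    return parts

# e.g. c = 7, maxu = 3, at least 3 parts (|U| >= maxv = 3):
print(greedy_partition(7, 3, 3))  # [3, 3, 1]
```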
In this case, from the constraint $\sum_{c=1}^{d} n_c = n - 1$, we obtain $\sum_{c=1}^{d} x_c \ge (n-1)/(dn/2 - n + 1)$ for the smallest $n$ such that a $d$-regular $n$-vertex graph exists (other than $K_{d+1}$), namely $d+2$ for even $d$ and $d+3$ for odd $d$. For $d = 10$ this gives us 10/49. The solution to the linear program would give us a lower bound on $\log(\beta_d)$ if it were also valid for the remaining graphs (those with an initial cut less than $d$). For instance for $d = 10$, the solution is 0.366508 (giving $\beta_{10} \ge 1.44269$) with a mixture of 1-cuts, 9-cuts and 10-cuts but no others. We improve on this by noting that such a combination of cuts cannot arise with our rule of always taking the smallest cut available. For this we need a lemma.

Lemma (the average cut lemma): The average size of all cuts of size less than $k$ after some $k$-cut ($k \le d$) is at least $k/2$.

Proof: Consider any $j$-cut ($j \ge k$); it splits some connected subgraph into 2 components $C_1$ and $C_2$, and all other connected subgraphs are $(j-1)$-connected. In any following sequence of $c$ cuts not including a ($\ge j$)-cut, all cuts are within $C_1$ or $C_2$, and so these are split into $c+2$ components. Before the preceding cut, each of these components had at least $j$ outgoing edges (otherwise there would have been a $(j-1)$-cut available); this gives at least $j(c+2)/2$ edges, of which $j$ were removed by the preceding cut. Hence $jc/2$ edges must have been removed by the sequence of $c$ cuts; hence at least $j/2$ edges are removed on average by each cut; hence the average cut size is $\ge k/2$.

With this added constraint we get a significantly better bound on $\log \beta_d$. Table 2 gives the lower bounds on $\beta_d$ so obtained and, for comparison, the upper bounds given by $K_{d+1}$. For example, for $d = 10$ this gives the linear program

Minimise $0.788457x_2 + 1.289233x_3 + 1.672225x_4 + 1.990679x_5 + 2.268310x_6 + 2.517771x_7 + 2.746613x_8 + 2.959706x_9 + 3.160377x_{10}$

under the constraints described above. (In solving this program we use the values of $\log f_{c,d}$ computed as accurately as possible rather than these 6-figure approximations.)

A small improvement could be made by the following observation. The last cut other than 1-cuts must be a 2-cut which cuts a cycle into two components. The multiplier of this cut should thus be 3 rather than $f_{2,d}$. Writing $x'_2$ for the (normalised) number of such cuts, we observe that $x'_2 \ge x_{10}$ and adjust the objective function to $\log(3)$ for the new variable. In fact for $d > 3$ this improvement improves the constant found for regular graphs to one better than that for the non-regular graphs of the following subsection. We are currently investigating how to refine the treatment of non-regular graphs correspondingly.

Non-regular graphs

We now consider other graphs, namely those with an initial cut of size less than $d$. As noted above this case is not the critical one, the argument is slightly more messy, and we only sketch the details. For sufficiently small initial cuts, say $\le small_d$, this follows at once by induction on the order of the graph, because $multiplier > bound^{cut-1}$, where $bound$ is the claimed bound on $\beta_d$, $cut$ is the cut size and $multiplier$ is $f_{cut,d}$. The values of $small_d$ for $d$ from 3 to 11 are [2,3,3,4,4,5,5,6,7]. For graphs with an initial cut of size between $small_d + 1$ and $d-1$, we modify the linear program and find that its solution is greater than or equal to that obtained for the regular case. First we improve the constraint concerning $x_d$ to $x_d = 0$; we no longer have all the constraints given by the average cut lemma, but we do have them for $k \le small_d + 1$.
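The regular-graph linear program for $d = 10$ (without the average cut lemma constraints) is small enough to solve directly; the sketch below is our own reconstruction using scipy, with $\log f_{c,10} = \log\big(c\,(11/10)^{c-1}\big)$, the excess constraint $\sum_c (c-1)x_c = 1$, the bound $x_{10} \le 1/49$, and $\sum_c x_c \ge 10/49$. It reproduces the value 0.366508 quoted above; adding the average cut lemma constraints would tighten it further.

```python
import numpy as np
from scipy.optimize import linprog

d, min_excess = 10, 49
cs = np.arange(1, d + 1)
# Objective: sum over cut sizes of x_c * log f_{c,d}, f_{c,d} = c((d+1)/d)^(c-1).
obj = np.log(cs * ((d + 1) / d) ** (cs - 1))
# Equality: normalised excess removed by the cuts is 1.
A_eq, b_eq = [(cs - 1).tolist()], [1.0]
# Inequality: total (normalised) number of cuts is at least 10/49.
A_ub, b_ub = [(-np.ones(d)).tolist()], [-10.0 / min_excess]
# x_c >= 0, and at most one d-cut: x_d <= 1/min_excess.
bounds = [(0, None)] * (d - 1) + [(0, 1.0 / min_excess)]
res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
print(res.fun, np.exp(res.fun))  # ~0.366508, i.e. beta_10 >= ~1.44269
```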
We can moreover add new variables for the number of small cuts preceding the first $i$-cut, for $small_d + 1 < i < d$, and include the average cut lemma for the others. Finally we can use the argument that, if up to some stage in the process (such as the first such $i$-cut) the product of multipliers is sufficiently large, the result follows by induction, so we can add to the linear program a constraint that this does not happen. In fact, for the application to memorisation mentioned in Section 1, we can assume that the graph is not regular for reasons given in [2], but the result for non-regular graphs is not of enough interest to merit detailed study here.

Conclusions

For degree bounds up to 11 we have shown that the number of spanning trees grows at least exponentially with the cyclomatic number of a graph, and we have shown lower and upper bounds on the base of the exponent. The methods used are apparently hard to generalise. It would be much more satisfactory to have general proofs of any of the three properties which we have conjectured or proved for small $d$:

- $\beta_d$ is given by the complete graph $K_{d+1}$;
- adding a new vertex of given degree to a graph $G$ multiplies $SP(G)$ by a factor which is minimised, over all graphs $G$ such that the resulting graph has degree bounded by $d$, when $G$ is $K_d$;
- cutting a graph $G$ (of degree bounded by $d$) into two parts $G_1$ and $G_2$ gives the minimum possible value of $SP(G)/(SP(G_1)\,SP(G_2))$ when $G_1$ or $G_2$ is a single vertex.

Table 2. Upper and lower bounds on $\beta_d$ (values omitted in this extraction).

Appendix

The following conditions are tested to check that the multipliers for cuts with both parts having more than one vertex are subject to the same lower bounds as those where one part is a singleton. In each case the format is the same: writing $d$ for the degree bound, $c$ for the cut value, and $i$ and $p$ for the number of independent and dependent pairs, each condition is followed by the numerical values of the two sides. (The listing itself is omitted in this extraction.)
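Returning to Table 2: although the table itself did not survive extraction, its upper-bound column is determined by the conjectured extremal graph. By Cayley's formula $SP(K_{d+1}) = (d+1)^{d-1}$, and $\mu(K_{d+1}) = (d+1)d/2 - (d+1) + 1 = d(d-1)/2$, so $\beta(K_{d+1}) = (d+1)^{2/d}$. A minimal sketch of the recoverable column:

```python
# Upper bounds beta(K_{d+1}) = SP^(1/mu) with SP = (d+1)^(d-1) (Cayley)
# and mu = d(d-1)/2, which simplifies to (d+1)^(2/d).
for d in range(3, 12):
    print(d, (d + 1) ** (2 / d))
# d=3: 2.5198..., d=10: 1.6154..., d=11: 1.5712...
```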
2009-02-12T08:57:39.000Z
2008-12-01T00:00:00.000
{ "year": 2009, "sha1": "fcddd0ee2b447fabd1864865bf056e767934b6e5", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "98e6fe2df831ebf8b1e1e2d4c0f2bfb1a19e1dde", "s2fieldsofstudy": [ "Mathematics", "Computer Science" ], "extfieldsofstudy": [ "Mathematics", "Computer Science" ] }
247390354
pes2o/s2orc
v3-fos-license
Developing a Health-Spatial Indicator System for a Healthy City in Small and Midsized Cities

A recent examination of the significant role of public health has prompted calls to re-investigate how the urban environment affects public health. A vital part of the solution includes Healthy City initiatives, which have been the subject of extensive policies, implications, and practices globally. However, the existing literature mainly focuses on big cities and metropolitan areas, while investigations into small and midsized cities (SMCs) are lacking, which reflects underlying issues of health inequity. This study develops an indicator system for evaluating Healthy City initiatives in SMCs, linking urban design and public health, supported by analyzed expert opinions collected using both questionnaires and interviews. The indicator system includes six primary dimensions and 37 variables: urban form and transportation (UFT); health-friendly service (HFS); environmental quality and governance (EQG); community and facility (CF); green and open space (GOS); and ecological construction and biodiversity (ECB). A fuzzy synthetic evaluation technique was used to assess the relative importance of factors, emphasizing the importance of UFT, HFS, and EQG, with importance indexes of 0.175, 0.174, and 0.174, respectively. This indicator system is helpful for SMCs seeking to construct a Healthy City in the future, is based on urban design and governance inputs, and enhances the Healthy City knowledge base of cities of varied scales.

Introduction

The concept of the Healthy City with 11 characteristics was first proposed in 1988 [1]; it received extensive attention followed by physical interventions globally. Overcoming health inequity has recently been highlighted as a major goal, emphasizing conditions in daily life such as the distribution of resources and reducing the health gap [2]. While good health and well-being are part of the United Nations Sustainable Development Goals (UN SDGs), there has been a call for global development addressing the causes of inequality [3]. More evidence from low- and middle-income countries on how urban planning contributes to public health is needed, and particular attention to underprivileged, vulnerable, and easily ignored geographic areas and social groups is necessary [4][5][6], especially in the post-COVID-19 landscape. Healthy urban design is recognized as an essential issue that includes addressing the design of urban places for the community to address health inequity [2,7,8]. Discussion of the association between the built environment, urban design, and public health is hardly new. However, many previous studies have focused on big cities [9][10][11][12][13], or on general cities without specific attention to city scale or urban-rural differences [14,15]. Far more limited literature can be found for small and midsized cities (SMCs). This is possibly caused …

Design of the Study/Method

We started with a systematic review of the related literature to identify a proposed factor series for the expert survey. A combined questionnaire and interview were then conducted with relevant professionals to validate the indicator system. Exploratory factor analysis (EFA) and fuzzy synthetic evaluation (FSE) were used to analyze the questionnaire data. The research flow is shown in Figure 1. Potential academic articles were prepared using Web of Science in February 2021.
The full search code for Web of Science is listed as follows:

AB = ((health city OR healthy city) AND (urban OR town OR space OR built environment) AND urban design) AND TS = ((health city OR healthy city) AND (urban OR town OR space OR built environment) AND urban design)

The publishing time was set as 2010 to 2020 (years inclusive). To retrieve an appreciable number of studies, we only selected the papers published in journals with a minimum of three publications on the subject topic. Three academic fields (urban studies, regional urban planning, geography and architecture) were included. The search result included a total of 131 articles from 27 journals. Of these, 23 articles were identified as irrelevant and excluded from the content assessment after the preliminary assessment.

After a systematic literature review, a list of criteria related to urban design and Healthy Cities was extracted. After 3 rounds of discussion, 5 significant aspects and 39 factors were chosen. This also comprised the components of the theoretical framework and was further refined for the questionnaire survey. A pilot expert interview was conducted to seek expert comments to validate and refine these criteria as 40 variables to fit the specific topic better. In the pilot study, the experts were invited to remove and propose any variables for the framework. The panel of experts consisted of 13 professionals selected for their expertise in urban design, architecture, urban planning, health services, and social work. A revised questionnaire was developed with three parts. The first part provided an introduction to the survey purpose, clearly defining the topic range. Second, 40 statements were categorized into five groups, namely health service, urban form and function, GOS, environmental quality and energy, and society and governance. The statements were measured on a seven-point Likert scale, with seven indicating the most vital importance. The final part gathered demographic data from the respondents, including socio-economic and career information.

The questionnaire survey was deliberately carried out using a purposive sampling approach to select expert respondents with relevant knowledge and experience on the research topic. The target respondents included scholars who have published on the topic and who have led related research projects. A snowball sampling method was further adopted to enlarge the sample size and gain an exceptionally knowledgeable group. Respondents were requested to send the survey link to their professional networks and colleagues with relevant knowledge or working experience [33,34]. This ensured that the respondents had a certain degree of knowledge of the research topic [35]. The respondents were not limited to academic scholars but included architects, urban planners, and professionals working in relevant fields who may have had sufficient practical experience to balance professional bias. As the target expert groups were challenging to reach, the snowball sampling method was an efficient approach allowing the possibility of attitude convergence [33,36]. Although this study does not advocate the use of a population line to define SMCs, the level "less than 1 million residents" was provided as a cue for respondents in their consideration. The geographic region was not limited, to extensively reflect the research question and provide global experience for further research.

Theoretical Framework

Urban design contributes to a Healthy City in varied ways. At the beginning of the 21st century, research on health issues and the urban environment mainly focused on reducing non-communicable diseases (NCDs), such as obesity and mental health problems [21,37,38]. After the COVID-19 pandemic, both NCDs and infectious diseases will require attention in terms of urban environmental interventions [39]. As 54% of the world's population lives in cities, the role of urban design in promoting a Healthy City needs extensive attention [8]. After a systematic review of the literature on the theory of the Healthy City within the context of urban design, 5 factors with a total of 40 variables were considered for the basic theoretical framework; they are summarized and described as follows.

Health Services

Health service is a fundamental factor for urban health and typically includes hospitals, healthcare centers, and sanitation services [40,41]. The uneven accessibility of health services for varied socio-economic groups is a key issue for health service provision [4,40,42]. McKee stated that the location of a healthcare center should consider coordinating with real estate acquisition and nearby residential areas [41]. At the community level, the provision of emergency services can offer effective treatment for patients [43], especially for aging people in nursing homes and assisted living, who need intensive service [44]. Disabled and aging people, as marginal and vulnerable social groups, require more consideration and support from healthcare services [45,46]. Typical disabled care design strategies include barrier-free entry for wheelchair users and visually disabled groups, dedicated handrails, guide dogs, and altered ground color [46]. Aging, as birth rates have fallen and longevity has increased globally, has become a key urban challenge worldwide. The World Health Organization's emphasis is on "aligning health systems to the needs of older people" [47].
With this background, emerging age-friendly communities have clear advantages for older residents by catering to their living requirements and mobility, providing more open spaces and accessibility to neighborhood amenities [48,49]. The social attributes of the environment can deeply influence aging people's health and quality of life [50]. Moreover, the revitalization of old buildings and districts to support "aging-in-place" should not be ignored [26,51].

Urban Form and Function

The urban structure and transportation infrastructure profoundly influence resident lifestyle and health by encouraging or impeding walking or cycling [52][53][54][55][56][57]. Layout, density, land-use mix, polycentric forms, and job-housing distance further influence resident lifestyles. The urban transportation infrastructure, including the road networks and connections, and particularly the public transport service, profoundly influences residents' connectivity and convenience [54,[58][59][60][61][62][63][64][65][66]. Residential space, as one vital type of land use, requires careful consideration of residential density, such as the median number of floors of accommodation and the proportion of affordable housing, along with the layout and diversity of community facilities, including ordinary retail facilities and grocery stores [26,48,[67][68][69]. Facilities that encourage physical activity, such as sports fields (e.g., swings, basketball courts, handball, and baseball fields), cycling paths, and playgrounds, have been revealed to correlate with healthy conditions [69][70][71]. Recreational and entertainment facilities have also been considered as social and cultural facilities, especially those that are public.

Green and Open Space

GOS contributes to urban health in two ways: by providing settings for quality pedestrian walking and related physical activities, and by further contributing to mental health [21]. Typical activities include walking, exercising, and relaxing in parks [72,73]. Urban spaces include parks, open spaces, and sports fields, providing venues for physical activities that reduce obesity [74]. The proximity to and quality of open space also affect healthy urban design and planning [38]. Streetscape, including the street aspect ratio, sidewalks, and the presence of trees, is correlated with walking and related physical behaviors [53,62,75,76]. The accessibility and use of GOS contribute to urban health. Detailed indicators include park proximity, distance to nearest GOS, and GOS density [77]. It is necessary to consider the form and distribution of greenspaces; more specifically, the size of the nearest GOS is associated with walking and usage [11,38]. Distribution is related to socioeconomic factors, which needs to be considered for equity and the needs of minorities [78]. The effects of GOS vary based on the different qualities of the greenspace. Current perceptions of crime and disorder strongly affect park quality [79]. Landscape design elements, including lawns, plazas, small lakes, walkways [13], vegetation style [9,68], tree quality [80,81], urban farming [82], roadside vegetation, urban greening features, and environmentally friendly buildings [26,83], have received extensive attention. Outdoor environment features have also been highlighted from a science-based perspective for urban quality, including sky, tree, and building views [84]. GOSs allow people to experience nature, creating nature relatedness and connectivity [85,86].
This factor can be assessed by nature and landscape connectivity and by the number, size, and density of parks [87]. They provide more opportunities for residents to encounter natural elements, such as trees, animals, and flowers, further supporting mental health [21]. A "wildlife-inclusive" urban design approach is necessary for coexistence and public health [88]. Maintaining biodiversity thus remains essential for urban health by providing opportunities to encounter wildlife, such as birds [89].

Environmental Quality and Energy

Environmental quality further influences physical health through hygiene, sanitation, and energy sustainability. The spatial pattern of PM10 can be analyzed and understood to reduce air pollution [90,91]; reforming urban blocks and wind corridors can then accelerate the dispersion of pollutants [91,92]. Air pollution, as an essential determinant of human health, can also be characterized by PM2.5, NO2, and O3 pollutant concentrations [26,93]. Moreover, coastal cities are more sensitive to air pollution-related health and happiness issues than inland cities [94]. Dealing with waste, including litter, undesirable waste, and water pollution, is vital for urban hygiene [18]. Improving toilet facilities and zero-waste systems can contribute to solving related issues [69]. Water sanitation is one vital element related to hygiene and influences the usage of natural space [67,95]. Storm water facilities, reflected by the length of sewers and surface channels and their service areas, together affect water quality [96,97]. Green building elements, such as roof gardens, are one practical approach for improving environmental quality [98]. Sound and thermal aspects are vital environmental factors that have been correlated with living comfort. Serious urban noise nuisances, including screaming, quarrels, and fights, have the potential to negatively impact human health [99]. Thermal comfort and humidity in outdoor places also affect health conditions [12,100]. Both windspeed and UV protection for pedestrians via tree shade could influence urban health [10,101].

Society and Governance

Urban design not only concerns interventions to the physical environment, but also includes governance approaches and public policy. Sense of place, community identity, and social life can be viewed together as a "soft environment" that affects quality of life and human well-being. At the city level, the governance ability and approach deeply affect resident health [102]. At the neighborhood level, accumulating social capital and creating both the sense and the substance of a robust community is vital for local health [53,103]. Spaces that increase unplanned social encounters and interaction opportunities contribute to mental health rather than social isolation [104,105]. For historic areas in the city, the conservation of heritage and local culture also needs attention [4]. The role of social interaction in maintaining a healthy lifestyle has been well documented [105]. However, public participation in the decision-making processes related to life and well-being, as well as participation at the community level, tends to be lacking in practice [106][107][108]. As one vital dimension of spatial perception, safety can be influenced by traffic conditions, fire hazards, and varied urban environments. The diversity and vitality of the urban economy profoundly influence urban socio-economic conditions, further affecting urban health [4,42], especially for low-socioeconomic neighborhoods [109].
Data Analysis

After obtaining the questionnaire data, we used the EFA technique to transform this information into six factors in the form of continuous data series before proceeding to the FSE. The primary purposes of an EFA are to identify the principal directions and reduce sub-dimensions in the dataset with a minimal loss of information [35]. EFA is widely used to identify the dimensionality of subjects related to built environments and urban conditions [110,111]. After constructing the variables, we used the FSE to grasp each factor's importance pattern and ranking [112].

A total of 322 responses were collected in the survey, and respondents who self-selected as "not familiar with the Healthy City concept" were excluded from the data analysis. Data from a final total of 281 questionnaires were used after careful validation examinations (Table 1). In the EFA, the Cronbach's alpha value of 0.967 showed good reliability, indicating a proper consistency among the responses. The Kaiser-Meyer-Olkin (KMO) test (0.952) showed adequate sampling for the research. SPSS Version 26 was used in the factor analysis. As the questionnaire adopted a seven-point Likert scale, we first examined the mean score of each variable and ranked each of them according to their mean value. No variable had a mean value below 4.0, which would represent lower importance in our case, so all variables were kept for further analysis [113]. After the component rotation, we deleted the variables of education facility, entertainment facility, and cycling facility, because their coefficient value was below 0.5. Six key representative factors, composed of 37 variables, were initially extracted (Figure 2, Table 2).

To further reveal the importance and ranking of each factor, the FSE technique was adopted after categorizing the variables. FSE has been widely used for building composite indicators and establishing assessment frameworks and has a special advantage in generating a ranking index [112]. The FSE technique has the ability to handle complicated evaluations with multiple levels and attributes [114]. Moreover, the method has the potential to objectify subjective opinions from experts [115]. Hence, the FSE was considered very appropriate in this study to ascertain the importance ranking of factors for achieving Healthy Cities in SMCs. FSE was conducted following six key steps [34,112,116,117]:

1. We defined a basic set of variables for each factor based on the EFA results: π = {f1, f2, f3, ..., fm}, where m is the number of variables in each factor.
2. We established a set for the grading standard E = {e1, e2, e3, ..., en}. The sets of grading standards were the scale measurements adopted for the study. In this study, the seven-point Likert scale was adopted, where e1 = extremely low importance, e2 = very low importance, e3 = low importance, e4 = neutral, e5 = important, e6 = very important, and e7 = extremely important.
3. Normalization was applied, and we established the weightings for each variable and factor. The weightings W = {w1, w2, ..., wm}, where 0 ≤ wi ≤ 1, were computed from the mean scores using the following equation [117]: wi = Mi / ∑Mi (1), where wi is the weighting of a variable or factor; Mi is the mean value of the variable or factor; and ∑Mi indicates the summation of the mean values of all variables or factors.
4. We computed the membership function (MF) for each variable (second level) and factor (first level), and established a fuzzy evaluation matrix. The matrix was written as R = (rij)m×n, where rij is the degree to which the grading scale ej satisfies the variable fi.
5. We computed the weighting vector and the final evaluation matrix: D = W ∘ R, where D is the final evaluation matrix and ∘ is a fuzzy composition operator.
6. We normalized the evaluation matrix and, from the normalized matrix, determined the index for each factor as the grade-weighted sum of its entries. A seven-point Likert scale was used to evaluate the relative importance scores for the variables in this study, so E = {1, 2, 3, 4, 5, 6, 7}.

Here, we take Factor 1, "Ecological Construction and Biodiversity", as an example to indicate the analysis process. For the variable "urban farming", we first calculate its weighting by using Equation (1). Then, the MF of the variable is determined. The survey results indicated that 2.5% of the respondents asserted that the relative significance of "urban farming" is of extremely low importance, 5.7% of the respondents were of the opinion that this measurement item is of very low importance, 14.6% of the respondents insisted that this variable is of low importance, while 28.5%, 23.5%, 15.3%, and 10% of the respondents stated their assessment of its relative importance as neutral, important, very important, and extremely important, respectively. As such, the MF for "urban farming" is given by MF = (0.025, 0.057, 0.146, 0.285, 0.235, 0.153, 0.100). After completing the same calculation for all data, the final results of the indices and importance levels for all factors are shown in Table 3.

Findings and Discussions

Table 2 and Figure 3 show the analysis results with components of the indicator systems, including each factor's composition with the rankings. The importance index is clearly indicated in Table 3. We found six factors with an importance hierarchy as follows: urban form and transportation (UFT), environmental quality and governance (EQG), and health-friendly service (HFS) were important; community and facility (CF), green and open space (GOS), and ecological construction and biodiversity (ECB) were of neutral importance. A smaller mean value indicates lower relative importance but does not mean that the factor would not contribute to a Healthy City. We ranked the variables according to their factor loadings within each factor, as this indicates to what extent each variable explains that factor [34,35]. Detailed explanations are discussed below.
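To make the six FSE steps concrete, here is a minimal Python sketch of the computation for a single factor, using the "urban farming" response shares quoted above. The second row of the matrix and its values are our own placeholders, not survey data, and the weighted-average composition is one common choice for the fuzzy operator ∘ rather than the paper's confirmed choice.

```python
import numpy as np

grades = np.arange(1, 8)            # E = {1,...,7}, the 7-point Likert scale

# Membership functions r_ij: share of respondents choosing each grade.
# First row is the "urban farming" example from the text; the second row
# is a made-up placeholder standing in for another variable of the factor.
R = np.array([
    [0.025, 0.057, 0.146, 0.285, 0.235, 0.153, 0.100],  # urban farming
    [0.010, 0.040, 0.100, 0.250, 0.300, 0.200, 0.100],  # hypothetical variable
])

# Equation (1): weightings from mean scores, w_i = M_i / sum(M).
means = R @ grades                  # mean score of each variable
W = means / means.sum()

# Final evaluation matrix D = W o R (weighted-average composition),
# then the factor index as the grade-weighted sum of D.
D = W @ R
index = D @ grades
print(round(index, 2))              # e.g. ~4.65 for these illustrative inputs
```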
Urban Form and Transportation

Factor 6 has the highest index (5.73); that is, urban form and transportation was revealed as the most critical factor among the six factors (Table 3). Factor 6 accounts for 7.382% of the total variance in the factor analysis. Urban form and transportation refers to factors affecting the overall urban structure, including the transportation system, land use, and structure and layout; that is, it sets the city's fundamental structure and further affects various aspects of human behavior and urban life, thus having an impact on urban health conditions. Once a city has an advantageous overall UFT, this tends to affect diverse functions that contribute positively to urban health. Two essential components of urban life were highlighted in this factor: transportation and residence.

Four variables comprised this factor (Table 2). Transportation infrastructure was revealed as the most important variable, with a factor loading of 0.734. The mean value for this variable was 5.88 (important). Although the mean value of 5.51 was not high, urban structure was ranked second, with a factor loading of 0.692. Although accessibility of public transportation had a factor loading of 0.552 (the third highest), it had the highest mean value of 5.9 and was thus valued as very important. Residential space had a slightly lower factor loading (0.532) and mean value (5.64) than accessibility of public transportation. This variable indicates land supply, density, diversity of housing forms, and overall housing quality.

Consistent with previous studies, transportation infrastructure and public transportation accessibility were revealed as key variables of UFT; the results of this study highlight the strong correlation between transportation planning and urban health, reflecting the possible systemic impact this factor has across other determinants of health [118]. The importance to SMCs is probably related to transport injustice, which is caused by the income gap. The transportation infrastructure and urban structure, together as UFT, profoundly affect residents' daily lifestyles, for example by encouraging walking and cycling, and thereby influence public health. If public transportation services are lacking, some residents, especially blue-collar workers, are forced to travel by car, further reducing physical activity [119]. Residential space emphasizing the diversity of housing forms and affordable housing provision for disadvantaged social groups is also necessary.

Environmental Quality and Governance

Factor 2, environmental quality and governance, and Factor 4, health-friendly services, have the same index (5.69), ranking as the second and third most vital factors, both considered important (Table 3). Factor 2 accounted for 12.378% of the total variance and was tied for second among the six factors. With an index of 5.69, it consists of eight variables. This factor emphasizes both hard aspects, such as sanitation and pollution, and soft aspects, such as how a city is governed, to enhance urban health in SMCs. The findings suggest a slightly higher importance of environmental quality than governance. Among all of the variables, safety ranked first for both factor loading (0.715) and mean value (6.01).
The following three variables were noise, sanitation facility, and air quality, which had factor loadings of 0.666, 0.663, and 0.659, respectively. The mean value for air quality was relatively high, at 5.94. These variables together express the importance of environmental quality. Variables reflecting governance followed the ones related to environmental quality: public participation (0.577), heritage conservation (0.545), and urban governance (0.544) were the next three variables in the ranking. One variable related to environmental quality, thermal comfort, ranked last (0.535), along with a relatively low mean value of 5.41.

The importance of environmental quality possibly reflects the poorer development conditions of SMCs facing pollution and fundamental sanitation challenges. While most metropolitan areas are entering the post-industrial and post-modern development period seeking a higher quality of life, many SMCs still struggle to meet essential hygiene provisions. In particular, SMCs tend to have fewer implementable policies and minimal data openness regarding air quality [16]. Investment and improvement in air quality through more public engagement and data sharing has the potential to largely enhance urban health in SMCs. Under rapid urbanization, both metropolitan areas and SMCs face challenges in conserving heritage landmarks representing local history and in enhancing local character.

Health-Friendly Service

Factor 4, health-friendly services, has the same index (5.69) as Factor 2, environmental quality and governance, indicating a matching importance level. This factor has six variables related to health services of various types and levels, and it accounts for 10.348% of the total variance. Among the six variables, health service equity has the top factor loading of 0.677, emphasizing the importance of health services in different socio-economic areas. Although community-level service and public health service accessibility had lower factor loadings of 0.638 and 0.607, respectively, they had higher mean values of 5.79 and 6.00, respectively. While disabled facilities had a higher factor loading (0.627) than age friendliness (0.546), the mean value for the latter was higher (5.75) than that for the former (5.47).

The importance of health service equity, community service, and public health service accessibility together reflects the observed uneven development level of urban health and the relationship between urbanization level and the ability to provide public services. Community-level service meets daily needs that directly influence quality of life. Concerning vulnerable social groups, the built environment plays a vital role for people with disabilities by providing space for the continuity of daily life. The relatively low ranking of age friendliness probably reflects the more obvious aging phenomenon in metropolitan areas with larger populations. Even integrated thinking on health services within overall neighborhood planning showed higher importance than age friendliness.

Community and Facilities

The final three factors can be considered as being of neutral importance. Factor 3, community and facilities, with an index of 5.43, ranked fourth. Because urban health is related to both health condition and human well-being, it is believed that community and facilities providing social benefits and community service are vital for improving quality of life. This factor component accounted for 11.335% of the total variance and was composed of eight variables.
Sense of community (0.664) and social interaction (0.662) ranked first and second, respectively, in factor loadings within these variables. A previous study identified the advantage of SMCs in having closer neighborhood relationships, which serve as social capital and connections, particularly for the elderly. Under rapid urbanization and transition, the existing social relationships in SMCs are under threat. Maintaining a neighborly atmosphere for achieving collective identity is vital for SMCs' healthy development. Urban economic diversity ranked third (0.573), emphasizing the diverse forms and scales of economic enterprises, especially providing more opportunities for individual small businesses. Regarding variables that have a direct influence on physical activity, playground had a higher factor loading (0.531) than sport facilities (0.51), while the mean value of sport facilities was higher than that of playground (5.27 vs. 5.25). Urban walkability and quality residential space had the same factor loadings (0.516), while urban walkability had a higher mean value (5.62 vs. 5.54). Surprisingly, although walkability has been a primary and extensively discussed domain for achieving urban health, it presented relatively lower importance for SMCs. This possibly reflects that walkability is largely determined by urban density and land-use mix [77]. Quality residential space mainly concerns open space in residential estates and the regeneration of deteriorating old estates and urban villages. Although ordinary life services did not have a high factor loading (0.512), their mean value (5.79) ranked first among the variables in this factor. Typical ordinary life facilities include retail facilities, grocery stores, and mundane facilities.

Green and Open Space

Factor 5, green and open space, ranked fifth, with an index of 5.40. GOS is widely recognized as an essential element that contributes to urban health, especially mental health, through what are known as green health interventions. The neutral importance of GOS possibly reflects the lower pressures and better mental conditions of life in SMCs. This factor contains four variables: accessibility, quality, streetscape, and equity. This factor component accounted for 9.44% of the total variance and ranked fifth. Among the variables, GOS accessibility had both the highest factor loading (0.684) and mean value (5.52). GOS accessibility, which is closely related to green and nature access, reflects park proximity. Whether green spaces are close and easily accessible for the public contributes most to SMCs' public health. GOS quality follows, with a factor loading of 0.652. GOS quality can be reflected by general perceived safety and comfort, beauty and sky views, or the use of detailed landscaping elements. Although streetscape had a higher factor loading (0.623) than GOS equity (0.569), they had the same mean value (5.37).

Ecological Construction and Biodiversity

The final factor (Factor 1), ecological construction and biodiversity, had a low index of 4.85 and accounted for 14.612% of the total variance. With seven variables, this factor generally reflects the degree to which people encounter and engage with the natural environment, including urban farming, flowers, vegetation, urban trees, and biodiversity. Storm water gardens and green building are eco-constructions common in urban habitats. Among all seven variables, urban farming had the highest factor loading at 0.839.
This is possibly because urban farming is a type of urban green space with vital social functions: urban farms bring different people together, reflecting the social needs of SMCs. Regarding mean values, only urban trees had a mean value higher than 5, at 5.1, while the rest were below 5, indicating neutral importance.

The findings of this study contribute to the existing literature in several ways. The proposed indicator system expands the knowledge base of Healthy Cities with customized consideration of a city's scale. This study also enhanced our understanding of how built environments affect public health under different development conditions. In addition, both physical and social dimensions of a space need to be considered to better achieve a comprehensive system for urban health from an urban design and governance perspective. The importance hierarchy shows the pressing need for primary, fundamental urban functions in SMCs, such as transportation, land structure, and environmental quality. The findings echo the reasons the WHO listed these factors as core indicators rather than expanded ones [67]. In addition, health-friendly service, particularly lower-level, easily accessible healthcare services, presents essential importance. This reflects the situation of SMCs, which are usually not regional centers, are less developed, and are short of public service provision. The desire for well-being and happiness has also been highlighted by the importance of socio-cultural needs, governance, and community. In contrast, GOS and ECB were assessed as being of neutral importance. While green spaces, including urban parks, street trees, and experiencing nature, have received extensive attention and are well recognized as an essential part of the urban environment that contributes to urban health, comparatively lower importance was revealed in this study on SMCs. It is possible that this is due to SMCs usually being located closer to nature and to their residents' high mobility between urban and rural areas [120,121]; in contrast to providing urban public services, such as public transportation and health services, providing green spaces is not among their major weaknesses. For SMCs, improving overall urban form and transportation infrastructure and providing public services, both in hygiene and in health services, are among the priority concerns.

Limitations and Further Study

It is worth mentioning that the framework proposed in this study is not a fixed construct, but rather a flexible option that could be adjusted to local contexts in future longitudinal, empirical, and in-depth studies. The methodology provided in this article could also be applied to validate the framework for varied localized contexts with their own variables and hierarchies. While some factors may have a systematic impact across the whole system, more attention could also be paid to the interrelationships among the factors and variables by adopting techniques such as Bayesian network analysis or structural equation modeling. Although we sought to reach and invite experts globally, a possible uneven geographic distribution exists, with respondents tending to concentrate in Asia. To avoid this knowledge bias in expert surveys, future empirical studies should place more attention on grassroots community perception and the real usage of spaces. SMCs in different locations and at different development stages may face varied development conditions and challenges. It is necessary to consider the inner heterogeneity among SMCs in subsequent studies.
More in-depth consideration may provide insight into the varied development scenarios that SMCs are facing.

Conclusions

Small and midsized cities represent the backbone of urban development, and the shortage of relevant knowledge on Healthy Cities requires a customized theoretical framework that can better promote the achievement of Healthy Cities. With particular attention to health inequity issues between various city sizes and development conditions, this study developed an indicator framework for constructing a Healthy City in SMCs, with specific consideration of urban design and governance. Comprehensive expert questionnaire surveys and interviews were employed, followed by EFA to extract key factors and FSE to further identify the importance hierarchy among factors. In this way, this article has validated a framework composed of 6 critical factors and 37 criteria. The indicator system designed in this study provides a better understanding of how SMCs can achieve urban health and has the potential to promote more equal development between metropolitan areas and SMCs. It can also be used as an assessment system for Healthy City performance to identify those urban areas under worse environmental conditions that require more health-led improvements and inputs. In addition, the importance hierarchy identified in this study can inform decision-makers about the priorities in planning SMCs. From a practical perspective, this indicator system may offer long-term impacts by providing valuable insights enabling urban designers and managers to emphasize urban form and transportation, environmental quality, and urban governance in the planning and governing of SMCs to achieve better urban health. For SMCs seeking actions to achieve a Healthy City through urban design, we suggest examining current city health performance using the framework provided in this study. This study can also inform decision-makers about the importance rankings among diverse variables when undertaking urban development projects, especially for SMCs facing limited resources. The table presented with factor rankings provides supporting material for planners and managers in decision making to better capture the vital elements in achieving a Healthy City. In the context of dynamic urban changes in SMCs, an adequate understanding of the diverse factors that influence achieving a Healthy City is required across the world.

Acknowledgments: The authors are grateful for the constructive comments from the anonymous reviewers.

Conflicts of Interest: The authors declare no conflict of interest.
2022-03-12T16:19:44.745Z
2022-03-01T00:00:00.000
{ "year": 2022, "sha1": "f4da6f0c09720901f51581d15c23f19fe7ee5d6d", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1660-4601/19/6/3294/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c2a9d03b702cd5139541bb5769c4342152fc105a", "s2fieldsofstudy": [ "Environmental Science", "Geography", "Sociology" ], "extfieldsofstudy": [ "Medicine" ] }
14637778
pes2o/s2orc
v3-fos-license
A Link among DNA Replication, Recombination, and Gene Expression Revealed by Genetic and Genomic Analysis of TEBICHI Gene of Arabidopsis thaliana

Spatio-temporal regulation of gene expression during development depends on many factors. Mutations in the Arabidopsis thaliana TEBICHI (TEB) gene, which encodes a protein containing putative helicase and DNA polymerase domains, result in defects in meristem maintenance and correct organ formation, as well as a constitutive DNA damage response and a defect in cell cycle progression; but the molecular link between these phenotypes of teb mutants is unknown. Here, we show that mutations in the DNA replication checkpoint pathway gene ATR, but not in the ATM gene, enhance developmental phenotypes of teb mutants, although atr suppresses the cell cycle defect of teb mutants. Developmental phenotypes of teb mutants are also enhanced by mutations in the RAD51D and XRCC2 genes, which are involved in homologous recombination. teb and teb atr double mutants exhibit defects in adaxial-abaxial polarity of leaves, which is caused in part by the upregulation of ETTIN (ETT)/AUXIN RESPONSIVE FACTOR 3 (ARF3) and ARF4 genes. The Helitron transposon upstream of the ETT/ARF3 gene is likely to be involved in the upregulation of ETT/ARF3 in teb. Microarray analysis indicated that teb and teb atr cause preferential upregulation of genes near Helitron transposons. Furthermore, interestingly, duplicated genes, especially tandemly arrayed homologous genes, are highly upregulated in teb or teb atr. We conclude that TEB is required for normal progression of DNA replication and for correct expression of genes during development. The interplay between these two functions and possible mechanisms leading to altered expression of specific genes will be discussed.

Introduction

The determination of whether to change or maintain the expression status of groups of genes based on positional information of individual cells is central for the development of multicellular organisms. Because DNA is wrapped around histone octamers to compose nucleosomes, transcriptional regulators and RNA polymerase cannot bind to template DNA and catalyze its transcription without remodeling chromatin to make DNA accessible to those proteins [1]. Epigenetic regulation (such as methylation of cytosine in DNA or histone modification) is increasingly recognized as a normal, essential mechanism to control gene expression at the level of chromatin organization, and thus to regulate many aspects of development or responses to the environment [1][2][3][4]. Chromatin packaging is also a barrier to processes acting on DNA other than transcription, namely replication, repair and recombination, and thus chromatin structure is remodeled to loosen it during these processes [5][6][7][8]. To preserve and inherit genetic information, chromatin has to be reassembled and the epigenetic information it carries has to be reestablished after DNA replication and repair. However, because the replication of the genome is regulated in part spatiotemporally, the S phase may offer an opportunity for cells to reprogram genome-wide epigenetic information, leading to a change in gene expression pattern [6,9]. In contrast, DNA repair is an unscheduled process after DNA damage that occurs at any time and place, potentially activating gene expression in an unregulated manner [5,6,10]. DNA damage such as double-strand breaks (DSBs) has been shown to change the local histone modification pattern, which may change epigenetic information (reviewed in [5]).
To investigate the link between DNA damage and chromatin-based gene regulation, the plant Arabidopsis thaliana offers an excellent model, because there are a number of mutants affecting both the DNA damage response and chromatin-based gene silencing. The FASCIATA1 (FAS1) and FAS2 genes of A. thaliana respectively encode the large and middle subunits of chromatin assembly factor 1 (CAF-1) [11]. CAF-1 facilitates incorporation of histones H3 and H4 into newly synthesized DNA during DNA replication [12] and repair [13]. Loss-of-function fas1 and fas2 mutants have fasciated stems, disrupted leaf phyllotaxy, narrow, dentate leaves, and short roots [14], and show a disrupted expression pattern of developmentally regulated marker genes [11]. fas mutants show increased levels of DSBs and highly express DNA damage-inducible genes even under normal growth conditions [15][16][17]. In addition, formation of heterochromatin and transcriptional gene silencing (TGS) are impaired in fas mutants [17][18][19]. Although these pleiotropic phenotypes of fas mutants are essentially consistent with the idea that FAS reorganizes chromatin and preserves epigenetic information during DNA replication and repair, the cause-effect relationship between these phenotypes and the specificity of the target genes with affected expression have yet to be clarified. Mutants with defects in MRE11, which is involved in repair of DSBs and DNA damage-associated cell cycle checkpoint control [20], in the RPA2 subunit of replication protein A (RPA), which is a single-stranded DNA binding protein involved in DNA replication and repair [21], and in the small subunits of ribonucleotide reductase (RNR), which is involved in the production of deoxyribonucleotides needed for DNA synthesis, show similar phenotypes to fas mutants, including sensitivity to DNA damage and TGS release [19,[22][23][24][25][26]. These results suggest that defective DNA synthesis causes DNA damage and aberrant expression of genes in both euchromatin and heterochromatin, possibly through impaired chromatin organization. Similar phenotypes in development, DNA damage response, and TGS are also observed in mutants impaired in the plant-specific TONSOKU/BRUSHY1/MGOUN3 (TSK/BRU1/MGO3) gene [19,27,28]. Phenotypic similarities between tsk/bru1/mgo3 mutants and fas, mre11, rpa2, and rnr mutants, combined with the observation that the Nicotiana tabacum homolog of the TSK/BRU1/MGO3 gene is predominantly expressed at S phase in synchronously cultured tobacco BY-2 cells, suggest that the TSK/BRU1/MGO3 protein is involved in the structural and functional maintenance of chromatin during DNA replication [19,29]. The TSK/BRU1/MGO3 protein has LGN repeats and leucine-rich repeats, both of which are involved in protein-protein interactions [19,27,28,30], and thus may function as a scaffold for proteins involved in DNA replication, repair, and chromatin maintenance. We previously reported that the TEBICHI (TEB) gene of A. thaliana encodes a protein with both DNA helicase and polymerase domains that are conserved among plants and animals [31]. Its animal homologs (namely, Drosophila melanogaster MUS308 and mammalian DNA polymerase θ [POLQ]) have been reported to be involved in tolerance to DNA damage [32,33], prevention of chromosome breakage [34], and somatic hypermutation of immunoglobulin genes [35]. Loss-of-function teb mutations cause various morphological defects, including short roots, abnormal leaf shape and fasciated stems [31].
In addition, teb mutants are hypersensitive to DNA damage, constitutively express DNA damage-responsive genes, and accumulate cells expressing a G2/M-specific reporter, cyclinB1;1:GUS (CYCB1;1:GUS) [31,36]. However, unlike other mutants exhibiting similar developmental phenotypes and DNA damage responses, teb mutants do not upregulate a marker of TGS, transcriptionally silent information (TSI). This result suggests that chromatin-based silencing of heterochromatic genes is not impaired in teb. However, the phenotypic similarity between teb and the preceding mutants suggests that chromatin-based regulation of euchromatic gene expression is affected in teb. If so, teb mutants may be a good model to explore the relationship between DNA damage, chromatin regulation, and the developmental program. In the present study, we conducted genetic and global gene expression analyses to explore the link between the DNA damage responses and developmental phenotypes of teb mutants. We found that TEB genetically interacts with ATR, which is involved in the DNA replication checkpoint, and that expression of a number of tandem and dispersed duplicated genes and genes near Helitron transposons is activated in teb mutants. Furthermore, we found that the upregulation of two genes near Helitron transposons, the ETT/ARF3 and ARF4 genes, in teb plays a role in partial abaxialization of leaves, a newly found phenotype of teb mutants. We propose a DNA replication-coupled mechanism that maintains the chromatin state of regions around duplicated sequences for correct gene expression during development.

Author Summary

DNA replication, repair, and recombination are interrelated processes. Chromatin structure, into which DNA is packaged, is important for regulation of DNA replication, repair, and recombination, as well as gene transcription. After DNA replication and repair, chromatin status, including its structure and modifications, has to be reproduced, and defects in these processes can alter the gene expression program because of changes in chromatin regulation. Our series of genetic analyses of the tebichi (teb) mutant of the model plant Arabidopsis thaliana suggests that the TEB gene is involved in DNA replication and recombination. We also show here that the TEB gene is required for correct expression of many genes, including genes regulating development. From these results we propose that TEB gene function is important for maintenance of the gene expression pattern after DNA replication and recombination. Furthermore, preferential upregulation of genes near highly duplicated transposons and of tandemly arrayed homologous genes is observed in teb mutants, suggesting an interrelationship between homologous recombination and gene transcription around repetitive sequences.

Results

atr mutations enhance developmental phenotypes of the teb mutant but suppress accumulation of cells expressing CYCB1;1:GUS

To elucidate the molecular link between DNA damage responses and the developmental phenotype in teb, we analyzed the genetic interaction of TEB with ATM and ATR. The ATM and ATR protein kinases are key regulators of cell cycle checkpoints conserved among eukaryotes, and are involved in sensing DNA damage and activating downstream regulators of cell cycle progression and DNA repair. ATM is activated primarily by DSBs, whereas ATR is activated when replication forks become stalled (reviewed in [37]). The A. thaliana homologs of ATM and ATR function in transcriptional responses after the application of DSBs and DNA replication stress, respectively [38,39]. Although atm and atr mutants do not show defects in growth and development in the absence of external stress (Figure 1A and 1B, Figure S1; see also [39,40]), atm mutant plants are hypersensitive to DNA-damaging agents, such as γ-irradiation, but rather insensitive to replication-blocking agents, such as hydroxyurea or aphidicolin [39], and atr mutants are hypersensitive to replication-blocking agents but also mildly sensitive to γ-irradiation [40]. We constructed double mutants of teb with atm and atr, and analyzed their phenotypes. We found that atr mutations enhanced the developmental phenotype of teb; teb atr double mutants exhibited severe growth retardation (Figure 1A), shorter roots than teb (Figure 1B), and more severe morphological defects in leaves, shoot apical meristems (SAMs), and embryos than teb (Figure 1A, 1D, 1E, and 1G; [31]), whereas atr mutants did not show any alteration of morphology in embryos or meristems (Figure 1C and 1F). Furthermore, atr also affected the phenotypes of weak alleles of teb (teb-3 and teb-4), which on their own do not cause morphological defects (Figure 1H). On the other hand, atm did not appear to have any effect on the development-related phenotype of teb (Figure S1). These results strongly suggest that the function of TEB is associated with DNA replication. We next examined the effect of atr on the accumulation of cells expressing CYCB1;1:GUS in teb. The accumulation of cells expressing CYCB1;1:GUS normally observed in teb was largely suppressed by atr (Figure 1I and 1J). Aphidicolin-induced accumulation of cells expressing CYCB1;1:GUS is suppressed by atr, suggesting that ATR is responsible for a cell cycle checkpoint following arrest of DNA replication [40]. Thus, our results suggest that teb activates the ATR-mediated DNA replication checkpoint, which is then followed by cell cycle arrest at G2/M. However, the developmental phenotype of teb was enhanced rather than ameliorated by the atr mutation. Taken together, these results suggest that a defect in DNA replication or an event associated with it, rather than the resulting defect in cell cycle progression, is associated with the morphological phenotype of teb. To understand the cellular defects leading to the morphological phenotypes of teb and teb atr, we first examined the extent of cell death using trypan blue staining. DNA damage-induced cell death is well characterized in animals, and the aphidicolin-treated atr mutant of A. thaliana shows nuclear degradation, suggesting that an ATR-dependent checkpoint plays a critical role in protecting the genome and preventing cell death [40]. Although teb atr and teb atm double mutants showed some trypan blue staining, single teb mutants were unstained, and the cell death phenotype did not correlate with the severity of the morphological phenotype of these mutants (Figure S2). We concluded that cell death does not play a major role in the morphological phenotype of teb.

teb and teb atr affect leaf adaxial-abaxial polarity

In a detailed analysis of the phenotype of teb atr double mutants, we noticed that teb atr plants frequently develop filamentous leaves that are radially symmetrical (Figure 2A and 2B). About half (61/107) of teb atr plants developed one or more filamentous leaves.
Establishment of a boundary between adaxial (upper) and abaxial (lower) cells is required for the formation of flat leaf blades, and thus a complete loss of adaxial-abaxial polarity leads to formation of radially symmetrical leaves [41]. Therefore, we examined the adaxial-abaxial polarity of leaves in teb and teb atr mutants. In wild-type leaves, a layer of closely packed palisade cells and loosely packed spongy mesophyll cells reside adaxially and abaxially, respectively (Figure 2C). However, adaxial palisade cells were missing in some regions of teb leaves (Figure 2D). Although a number of leaves were not radially symmetrical in teb atr plants, the mesophyll tissue consisted largely of spongy mesophyll-like cells in these somewhat expanded leaves of teb atr (Figure 2E). Likewise, the polarity of transverse sections of the petioles was also altered in teb and teb atr (Figure 2F-2H). In addition, the polarity of vascular bundles in teb atr was also perturbed: phloem cells developed around the xylem, in contrast to wild-type, in which xylem and phloem develop adaxially and abaxially, respectively; the vascular polarity of teb was almost normal (Figure 2I-2K). We also analyzed the expression of green fluorescent protein (GFP) under the control of the FILAMENTOUS FLOWER (FIL) promoter (FILp:GFP); expression is observed only in the abaxial region of wild-type leaves (Figure 2L and 2O; [42]). Expression of FILp:GFP occurred ectopically in the adaxial regions of some teb and teb atr leaves (Figure 2M, 2N, 2P, and 2Q). Looking specifically at radially symmetrical leaves from teb atr plants, we observed expression of FILp:GFP around the outer surface of these leaves (Figure 2Q). Taken together, these results support the stochastic occurrence of partial abaxialization in teb and teb atr leaves.

teb and teb atr upregulate the ETT and ARF4 genes

We analyzed the adaxial-abaxial polarity phenotype of teb and teb atr in more detail to elucidate the relationship between the molecular function of TEB and the developmental phenotype of teb mutants. In recent years, molecular factors that are involved in establishment of leaf adaxial-abaxial polarity have been identified (reviewed in [43]). We generated a series of double mutants combining teb with mutations in regulatory genes involved in adaxial-abaxial polarity. Mutations in genes such as REVOLUTA (REV), PHABULOSA (PHB), KANADI1 (KAN1), and FIL did not appear to enhance or suppress the teb phenotype (data not shown). However, asymmetric leaves 1 (as1) and as2 mutations affected the leaf phenotype of teb (Figure 3). teb as2 double mutant plants exhibited leaves with several lobes and a very ruffled surface, in addition to some trumpet-shaped leaves (Figure 3A-3E), indicating severe defects in adaxial-abaxial polarity. Likewise, teb as1 double mutant plants showed a severe defect in leaf expansion (Figure 3F). In addition, the epidermal surface of the adaxial side of teb as1 leaves showed an undulating surface with a high density of stomata, resembling the abaxial leaf surface of wild-type rather than the adaxial surface, which is flat and has a low density of stomata (Figure 3G and 3H). Moreover, teb as1 and teb as2 exhibited higher ectopic expression of FILp:GFP in the adaxial domain of leaves compared with the teb single mutant (Figure 3I-3M).
The leaves of teb as1 and teb as2 resemble leaves of double mutants of as1 or as2 in combination with mutations in genes encoding components of the trans-acting short-interfering RNA (ta-siRNA) pathway [44-46]. One ta-siRNA, tasiR-ARF, targets the mRNAs of three AUXIN RESPONSE FACTOR (ARF) genes, ARF2, ETTIN (ETT)/ARF3 (hereafter ETT), and ARF4, for cleavage, and ETT and ARF4 are overexpressed in mutants defective in the ta-siRNA pathway [47-49]. ETT and ARF4 have also been reported to redundantly specify abaxial cell fate [50]. Thus, we examined the expression of ARF2, ETT, and ARF4 in teb and teb atr. We found a small but reproducible increase in the expression of ETT and ARF4, but not of ARF2, in shoot apices and leaves of teb plants, and the effect was enhanced by atr (Figure 4A). To examine the effect of increased expression of ETT and ARF4 on the phenotype of teb, we analyzed teb ett and teb arf4 mutants. ett and arf4 had an insignificant effect on the overall leaf phenotype of teb (Figure 4B). However, the increased and ectopic expression of FIL in teb was largely suppressed by the ett and arf4 mutations (Figure 4C-4G), suggesting that upregulation of ETT and ARF4 plays a role in the leaf abaxialization associated with the ectopic expression of FIL in teb mutants. Since overexpression of ETT and ARF4 alone does not cause any defect in adaxial-abaxial polarity or cause ectopic expression of FIL in mutants affected in the ta-siRNA pathway [44-46], abnormal expression of some other gene is probably responsible for the leaf polarity defect of teb. Thus, we concluded that the abaxialization of the leaves in teb is caused at least in part by increased expression of ETT and ARF4. We next analyzed genetic interactions between TEB and ARGONAUTE7 (AGO7) or RNA-DEPENDENT RNA POLYMERASE6 (RDR6), which encode components of the ta-siRNA pathway. ago7 and rdr6 slightly exacerbated the phenotype of teb leaves. Additionally, ETT and ARF4 were expressed at higher levels in teb ago7 and teb rdr6 compared with ago7 or rdr6 (Figure S3). This additive effect of teb and ago7 or rdr6 on the expression of the ETT and ARF4 genes suggests that TEB regulates expression of ETT and ARF4 by a pathway different from the ta-siRNA pathway.

Upregulation of genes near Helitron transposons in teb and teb atr

A survey of the genomic sequence around ETT and ARF4 revealed the presence of Helitron-like sequences upstream of both genes (Figure 4H). Helitrons are a class of DNA transposons recently discovered in a number of eukaryotes, and they and their nonautonomous derivatives constitute more than 2% of the A. thaliana genome [51]. The Helitron-like sequences upstream of ETT and ARF4 are nonautonomous elements designated AtREP3 and AtREP1, respectively [51]. To determine whether Helitron elements play a role in the upregulation of nearby genes in teb mutants, we looked at the effect of a T-DNA insertion (ETTups-1) between the Helitron element AtREP3 and the ETT locus on the expression of ETT in teb (Figure 4H). Plants with both the teb-1 mutation and the ETTups-1 insertion expressed ETT at the same level as ETTups-1 plants, which is lower than the level in teb (Figure 4I). It would appear that ETTups-1 increases the distance between AtREP3 and the ETT gene, and neutralizes the effect of AtREP3 on the expression of ETT in teb.
Since plants harboring only ETTups-1 did not show any defect in leaf morphology (data not shown), ETTups-1 probably does not have much of an impact on the normal expression pattern of ETT in wild-type, suggesting that the Helitron AtREP3 does not have a major role in the normal expression of ETT. These results suggest that upregulation of ETT in teb may be linked to the presence of an upstream Helitron, although the involvement of another upregulating element around the ETTups-1 insertion cannot be excluded. To support this result, we examined the expression of four randomly chosen genes with the Helitron AtREP3 in their upstream regions. We found a small but reproducible increase in the expression of these four genes in teb plants, and the effect was enhanced by atr (Figure S4A). We next analyzed global gene expression using a microarray approach (Figure S5) to see whether the effect of the teb mutation on the expression of genes having a nearby Helitron insertion is a general one. We examined the expression of the set of genes with Helitron elements of more than 300 bp within their upstream 2 kb regions in our microarray experiments. We found that genes with upstream Helitron elements showed a weak but statistically significant tendency to be upregulated in teb and teb atr (Figure 5A and 5B, Figure S6A, S6B). However, the insertion of Helitron elements in nearby regions was not sufficient for upregulation in teb, suggesting the involvement of other factors in the upregulation.

Upregulation of tandem and dispersed duplicated genes in teb and teb atr

Interestingly, we found that many tandemly arrayed homologous genes (TAGs; [52]) are markedly upregulated in teb and teb atr compared to the wild-type (Figure 5C and 5D, Figure S6C, S6D). We also observed significant increases in the expression of duplicated genes, i.e., those with one or more closely related genes somewhere in the genome, in teb and teb atr (Figure 5E and 5F, Figure S6E, S6F). Because duplicated genes include both TAGs and dispersed duplicated genes, in order to ask whether the upregulation of duplicated genes is solely attributable to the upregulation of TAGs, we first subtracted TAGs from the list of duplicated genes and then again asked whether duplicated genes are upregulated in teb and teb atr. A tendency toward upregulation of these duplicated genes was still observed (Figure 6A and 6B, Figure S6G and S6H). However, this tendency was not observed for non-TAG duplicated genes with low homology to other genes (Figure 6C and 6D, Figure S6I, S6J). These results suggest that duplicated genes are preferentially upregulated in teb and teb atr, and that both the proximity and the homology between duplicated genes are important factors in the upregulation in teb and teb atr. Furthermore, we found that the expression of many γ-irradiation-inducible genes [38] was upregulated in teb (Figure S7). This result reinforces our previous observations with selected DNA damage-inducible genes [31]. These genes were also upregulated in teb atr (Figure S7E, S7F), and to a greater degree than in teb (Figure S7G, S7H).

Genetic interaction between TEB and genes involved in homologous recombination

Recently, it was reported that the recombination-related RAD51D protein is involved in the transcriptional activation of pathogenesis-related (PR) genes in a suppressor of npr1 inducible 1 (sni1) mutant background of A. thaliana [53] (see below).
Exploration of the reported microarray data of sni1 [54] revealed that TAGs tend to be upregulated in sni1 (Figure S8), which is similar to what we observed in teb (Figure 5), suggesting that the sni1 mutation affects the transcription of TAGs via the function of RAD51D. Furthermore, our microarray data showed that teb and teb atr upregulate the expression of PR genes, as in the sni1 mutant (Figure S9). These results suggest that the global gene expression patterns are similar in teb, teb atr, and sni1. Accordingly, we examined the genetic interaction between TEB and two recombination-related genes, RAD51D and XRCC2 [55]. Both the rad51d and xrcc2 mutations markedly enhanced the developmental defects of teb, whereas rad51d and xrcc2 single mutants did not show any developmental defects (Figure 7).

Discussion

Function of TEB in DNA replication and recombination

Here, we demonstrated that TEB genetically interacts with ATR with respect to developmental phenotypes, cell death, and altered gene expression. Our results provide genetic evidence for a function of TEB in DNA replication to correctly propagate genetic information. The increased expression of γ-irradiation-inducible genes in teb and their further upregulation in teb atr suggest that TEB and ATR prevent the formation or accumulation of DSBs or other types of DNA damage during DNA replication. Mammalian ATR and its yeast homologs, Mec1 and Rad3, are essential for cell survival and are known to be involved in preventing replication fork collapse, DNA breakage, or genome rearrangement after a stall in the progression of the replication fork, even in the absence of exogenous stresses [56]. However, atr mutants of A. thaliana are viable and develop normally in the absence of treatment with DNA replication-blocking agents [40]. Hence, A. thaliana may have fewer endogenous stresses that perturb DNA replication under normal growth conditions. Alternatively, other proteins may ensure smooth progression of replication forks. We found here that in the presence of the teb mutation, an effect of the loss of ATR became apparent, suggesting that TEB has a crucial role in the normal progression of DNA replication. In the teb single mutant, it is probable that the ATR pathway functions to alleviate the defect in DNA replication by activating a bypass pathway and/or delaying replication and cell cycle progression. Indeed, the accumulation of cells expressing CYCB1;1:GUS in teb was ATR-dependent, suggesting that an ATR-dependent cell cycle checkpoint is activated to delay G2/M progression in teb. Homologous recombination is thought to be important for recovery from stresses that perturb replication, such as DNA damage, nucleotide depletion, or the presence of a specific sequence that hinders progression of a replication fork [57]. The strong genetic interactions between TEB and ATR, and between TEB and the recombination-related genes RAD51D and XRCC2, suggest the involvement of TEB in homologous recombination or another functionally connected process during DNA replication. Because the double mutants between teb and atr, rad51d, and xrcc2 are not lethal despite their severe growth retardation, it would be interesting to examine what occurs in the genomic sequences of these double mutants, and how they complete DNA replication.

Function of TEB in gene expression

The phenotypic overlap between mutants for TEB, FAS, MRE11, RPA2, RNR, and TSK/BRU1/MGO3 suggests functional overlap of these genes in the maintenance of chromatin and correct gene expression following DNA replication.
Unlike the other mutants, however, teb did not affect TGS of heterochromatic genes [31], suggesting that TEB does not have a major function in the maintenance of heterochromatin. However, we showed here that teb affects the expression of many genes. Thus, it is possible that TEB regulates the expression of euchromatic genes in a chromatin-based manner. In support of the idea that TEB has a role in the maintenance of chromatin, we could not identify any double homozygotes for teb and fas2 in the progeny of plants homozygous for fas2 and heterozygous for teb (our unpublished results), suggesting that TEB and CAF-1 have complementary functions in the maintenance of chromatin. It is interesting that teb influenced the expression of a number of genes that do not seem to be directly involved in cellular responses to DNA damage, including tandem and dispersed duplicated genes and genes near Helitron transposons. It has been shown in yeast and animals that DSBs and other DNA damage induce local nucleosome depletion and changes in histone modification to make damaged DNA accessible to repair proteins, an effect that also has the potential to impose changes in gene expression [5,6]. Since TEB seems to function to prevent the formation or accumulation of DNA damage, the selective upregulation of TAGs and genes with nearby Helitron insertions in teb indicates that teb affects the chromatin state of these loci due to the accumulation of DNA damage in their vicinity. Taken together, we hypothesize that teb affects the chromatin state of regions around tandem and dispersed homologous genes or transposons through unsuccessful homologous recombination and the resulting DNA damage during DNA replication. Tandem and dispersed homologous sequences can be the targets of ectopic homologous recombination [58-60]. Helitron elements are abundant in the genome, typically large, and highly homologous to one another, all of which seem to increase the chance of ectopic homologous recombination between elements [51,61]. Indeed, AtREP3 and AtREP1, near the ETT and ARF4 genes, respectively, are two of the most abundant classes of nonautonomous Helitrons [51], and homology searches of each of AtREP3 and AtREP1 against the A. thaliana genome sequence identified more than a hundred homologous elements with more than 80% sequence identity over all or part of their length. Furthermore, genes having Helitron elements less than 300 bp long in their upstream regions did not show a tendency to be upregulated in teb and teb atr (data not shown), in contrast to genes with upstream Helitrons more than 300 bp long (Figure 5). The finding that the proximity and the homology between duplicated genes are critical factors for upregulation in teb and teb atr (Figure 6) also supports our hypothesis, because proximity and a high degree of homology between repeats increase the frequency of recombination between them [62-64]. What mechanism would lead to an altered chromatin state in these specific regions in teb? One possibility is that TEB is involved in homologous recombination between repeats, which is activated by a stalled replication fork. Aberrant recombination between repeats in teb mutants might then result in DNA damage and chromatin disorganization. If so, however, many cells should undergo recombination events between these repeats in wild-type plants, because the changes in expression of TAGs are generally large and thus a large population of cells should increase their expression in teb mutants.
This would mean that the DNA sequences of these regions would likely change rapidly even in a single generation, which is unlikely. Alternatively, TEB may repress homologous recombination between repeats by ensuring allelic recombination. teb did not show increased recombination between two tandemly arrayed overlapping parts of a GUS transgene [31]. Therefore, it is possible that the initiation of recombination between repeats is triggered by a failure of allelic recombination in teb, but that teb cannot normally complete recombination between repeats. It would be interesting to explore the possible involvement of specific epigenetic marks in the teb-mediated upregulation of Helitron-flanked and duplicated genes. At the A. thaliana recognition of Peronospora parasitica 5 (RPP5) locus, composed of seven duplicated genes, small RNA species corresponding to genic regions are detected [65], and a considerable amount of cytosine methylation was detected in a genome-wide mapping study [66]. Another cluster, composed of nine chitinase/glycosylase-18 genes, is associated with TERMINAL FLOWER 2/LIKE HETEROCHROMATIN PROTEIN 1 (TFL2/LHP1), indicating the association of this locus with histone H3 trimethylation at lysine 27 [67]. These epigenetic marks might regulate the coordinate expression of genes in a cluster. However, in the regions around the duplicated genes upregulated in teb, we did not find any significant amount of small RNA or cytosine methylation in public databases (http://asrp.cgrb.oregonstate.edu and http://epigenomics.mcdb.ucla.edu/DNAmeth/project.html). In addition, high levels of cytosine methylation and small RNAs were found in Helitron regions according to these databases. However, we did not find any difference in cytosine methylation levels in AtREP3 and AtREP1, upstream of the ETT and ARF4 genes, respectively, between wild-type and teb (data not shown).

Possible interplay between recombination and gene expression

Our knowledge about the interplay between recombination and gene expression is scarce. However, the findings that sni1 shows upregulation of the expression of many TAGs (Figure S8) and that the RAD51D protein is required for upregulation of PR genes in the sni1 mutant [53] suggest the occurrence of recombination-coupled regulation of gene expression. A large family of resistance (R) genes responsible for the recognition of specific pathogenic signals form clusters in the plant genome, and these R genes are subject to ectopic recombination within or between clusters [60,68]. Hence, SNI1 and RAD51D may antagonistically control the transcription of R genes in a recombination-coupled manner. PR genes themselves also have homologous genes nearby, and teb and teb atr also upregulate the expression of PR genes (Figure S9), suggesting the possibility of a direct role for TEB, SNI1, and RAD51D in regulating the expression of PR genes. In any case, our observation that mutations in recombination-related genes enhanced the phenotypes of teb supports our hypothesis that there is a genome-wide, recombination-coupled maintenance mechanism of chromatin around duplicated sequences. Identification of additional factors involved in the regulation of duplicated genes, analyses of their genetic and physical interactions, and their impact on the genetic and epigenetic contexts of the genome will help us understand the interplay between recombination and gene expression.
More generally, our results raise the possibility that (tandemly) duplicated genes and Helitron elements play a role in changing gene expression patterns, in addition to driving genetic change by recombination and transposition, in the evolutionary process. It has been shown that Helitrons are involved in the creation of new genes in maize by capturing parts or all of genes and transposing with them [69,70]. Tandemly duplicated genes are believed to have a role in genome evolution through homologous crossing over and gene conversion [58]. Our results point to a previously unrecognized potential of these genetic elements to produce expressional and developmental variation.

Materials and Methods

Histological analyses

Observation of developing embryos, sectioning of leaves and meristems embedded in Technovit resin, and histochemical staining of GUS activity were done as described previously [31]. For trypan blue staining, 15-day-old plants were incubated in 0.5 mg/ml trypan blue, dissolved in phenol/glycerol/lactic acid/water/ethanol (1:1:1:1:8), in a boiling water bath for 1 min. The tissues were left in staining solution at room temperature for 1 h, cleared in chloral hydrate solution, and examined with an Olympus SZX12 stereomicroscope. For scanning electron microscopy, samples were fixed overnight in Carnoy's solution (1:3 isoamyl acetate:ethanol), incubated in 1:1 and then 3:1 isoamyl acetate:ethanol for 15 min each, and finally immersed in isoamyl acetate. The materials were then critical-point dried in liquid CO2, coated with platinum and palladium, and examined with a Hitachi S-3000 scanning electron microscope. For observation of FILp:GFP, shoot apices were embedded in 6% agar with 0.05% Silwet L77, and transverse sections of 100-150 μm were obtained using a LinearSlicer Pro 10 (D.S.K.). Sections were mounted with a drop of water and examined using an Olympus FV500 confocal laser scanning microscope. Both GFP and chlorophyll were excited at 488 nm, and the emission was split using a 560 nm dichroic mirror and collected through a 505-525 nm band-pass filter and a 560 nm long-pass filter to observe GFP and chlorophyll, respectively.

Real-time RT-PCR

Total RNA was isolated from 12- to 14-day-old plants that were dissected to separate leaves from shoot apices. Leaves were defined as leaves with recognizable petioles, and shoot apices were defined as the remaining aerial parts. Total RNA was isolated using the RNeasy Plant Mini Kit (Qiagen) according to the manufacturer's instructions. Next, cDNA was synthesized from DNase I-treated total RNA using an oligo(dT) primer and SuperScript III Reverse Transcriptase (Invitrogen). Quantitative real-time PCR was carried out using an iCycler iQ system (Bio-Rad) with iQ SYBR Green Supermix (Bio-Rad) as described previously [31]. Primer pairs for each gene were designed to amplify specific fragments of approximately 100 bp. The threshold cycles at which the fluorescence of the PCR product::SYBR Green complex first exceeded a background level were determined, and the relative template concentrations compared to that of the control were determined based on a standard curve for each gene made using a cDNA dilution series. Relative levels of ACTIN2 mRNA were used as a reference. The real-time PCR assays were performed in duplicate for each cDNA.
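To illustrate the standard-curve quantification just described, the sketch below computes a relative expression level from threshold cycles; the gene, the Ct values, and the standard-curve parameters are hypothetical and chosen only for illustration, not taken from the published data.

```python
def relative_expression(ct_target, ct_reference, slope_target, slope_reference,
                        intercept_target, intercept_reference):
    """Estimate expression of a target gene relative to a reference gene
    (e.g. ACTIN2) from threshold cycles (Ct), using per-gene standard
    curves of the form Ct = slope * log10(template) + intercept."""
    log_target = (ct_target - intercept_target) / slope_target
    log_reference = (ct_reference - intercept_reference) / slope_reference
    return 10 ** (log_target - log_reference)

# Hypothetical standard-curve parameters from a cDNA dilution series
# (a slope near -3.32 corresponds to ~100% amplification efficiency).
ett_wt = relative_expression(24.1, 18.5, -3.35, -3.30, 20.0, 15.0)
ett_teb = relative_expression(22.8, 18.4, -3.35, -3.30, 20.0, 15.0)
print(ett_teb / ett_wt)  # fold change of ETT in teb relative to wild-type
```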
Microarrays

Microarray analysis was done using the Affymetrix GeneChip ATH1. Total RNA from shoot apices of 14-day-old plants was analyzed. Replicate experiments were done using different combinations of teb and atr alleles. In the first experiment, Col-0 (wild-type), teb-1, atr-2, and teb-1 atr-2 were used. In the other, Col-0, teb-2, atr-4, and teb-2 atr-4 were used. For each sample, 5 μg of total RNA was processed using the GeneChip One-Cycle cDNA Synthesis Kit and the IVT Labeling Kit (Affymetrix) according to the manufacturer's instructions (GeneChip Expression Analysis Technical Manual; Affymetrix) to produce biotin-labeled cRNA. Next, 20 μg of the resulting biotin-labeled cRNA was fragmented to an average strand length of 100 bases (range, 35-200 bases). Subsequently, 15 μg of fragmented cRNA was hybridized to an Affymetrix GeneChip ATH1, and the hybridized chip was washed, stained with streptavidin-phycoerythrin, and scanned. Basic data analysis to obtain values for signal intensity and detection calls, i.e., 'present (P)', 'marginal (M)', and 'absent (A)', was carried out using GeneChip Operating Software 1.2 (Affymetrix). Further data analysis, including normalization, was performed with GeneSpring GX 7.3 (Agilent Technologies). After values less than 0.01 were set to 0.01, data from each chip were normalized to the 50th percentile of values from that chip. For comparison, the values for each gene were normalized to those of Col-0 by setting the values of all genes in Col-0 to 1. Subsequently, we used only the set of genes for which the detection call was 'P' or 'M' in at least 2 of the 4 samples in each experiment. The raw and normalized data files and details of labeling and hybridization have been deposited in a public microarray database (http://www.ebi.ac.uk/arrayexpress) under accession number E-MEXP-1329. The list of duplicate genes and TAGs has been described previously [52]. Briefly, two datasets were defined as duplicate genes. The high stringency (H) set included protein pairs that share at least 50% identity over 90% of the protein length, whereas the low stringency (L) set included protein pairs with at least 30% identity over 70% of the protein length. TAGs were identified as subsets of duplicate genes. Genes were defined as TAGs if they belonged to the same family of duplicate genes and were physically adjacent. The list of γ-irradiation-induced genes has been described previously [38]. The data for the microarray analysis that compares the sni1 mutant with wild-type Col-0 [54] were obtained from the NASCArrays database (http://affymetrix.arabidopsis.info/). For the statistical analysis in Figure 5, Figure 6, Figure S6, and Figure S8, we first converted the ratios of expression from linear to logarithmic scale. Then the difference between the mean values of two different groups of genes was tested by Student's t-test.

Supporting Information

Figure S1 atm mutations do not affect the developmental phenotypes of teb. Shoot morphology of 2-week-old wild-type (WT), teb, atm, and teb atm double mutant plants. The morphology of teb atm cannot be distinguished from that of teb. Scale bar, 5 mm.

Figure S3 TEB represses ETT and ARF4 in a pathway different from the AGO7- and RDR6-mediated pathway. (A) Rosette phenotypes of 20-day-old wild-type (WT), ago7-1, rdr6-11, teb-1, teb-1 ago7-1, and teb-1 rdr6-11 plants. Scale bar, 5 mm. (B) The levels of ETT and ARF4 mRNAs in teb-1 (t), ago7-1 (a), teb-1 ago7-1 (ta), rdr6-11 (r), and teb-1 rdr6-11 (tr) relative to wild-type (w) as determined by quantitative real-time RT-PCR. The values represent means of 5 biological replicates ± S.E.

Figure S5 (A) After per-chip normalization, the values of each gene were normalized to the mean values for the wild-type (WT) in two experiments.
Colors represent normalized expression levels for teb-1 atr-2, as indicated in the color bar (right). (B) Venn diagram of differentially expressed genes. Magenta circles, genes with more than 1.5-fold higher or lower expression in teb than in wild-type; green circles, genes with more than 1.5-fold higher or lower expression in teb atr than in wild-type. Yellow, the overlap of these gene groups. Many genes that exhibited higher and lower expression in teb than in wild-type also showed higher and lower expression in teb atr than in wild-type, respectively. Furthermore, for about three-quarters of the genes in common, the difference in expression was more pronounced in teb atr than in teb, suggesting that the molecular phenotype of teb related to gene expression is enhanced by atr.
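Returning to the statistical analysis described in Materials and Methods, the following is a minimal sketch of the group-comparison statistic (log-transform the mutant/wild-type expression ratios for two groups of genes, then apply Student's t-test to the group means); the gene-group sizes and ratio values below are hypothetical placeholders, not the published data.

```python
import numpy as np
from scipy import stats

def compare_gene_groups(ratios_group1, ratios_group2):
    """Compare expression ratios (mutant/wild-type) between two groups of
    genes, e.g. TAGs versus all other expressed genes: log-transform the
    ratios, then apply Student's t-test to the group means."""
    log1 = np.log10(np.asarray(ratios_group1))
    log2 = np.log10(np.asarray(ratios_group2))
    return stats.ttest_ind(log1, log2)  # (t statistic, p value)

# Hypothetical ratios for illustration only:
rng = np.random.default_rng(1)
tags = 10 ** rng.normal(loc=0.15, scale=0.3, size=500)    # shifted upward in teb
others = 10 ** rng.normal(loc=0.0, scale=0.3, size=5000)
print(compare_gene_groups(tags, others))
```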
Eye-associated multiple cranial nerve palsies

Multiple cranial nerve paresis (MCNP) can occur due to some syndromes, systemic diseases, and extracranial and intracranial pathologies. Pareses involving cranial nerves III, IV, V, VI, and VII are eye-associated MCNP. The common causes of eye-associated MCNP often include cavernous sinus syndrome, superior orbital fissure syndrome, orbital apex syndrome, and cerebellopontine angle syndrome. The clinical approach to MCNP includes careful examination for generalized limitation in various gaze positions, proptosis, decreased corneal and facial sensation, conjunctival injection, ptosis, anisocoria, and cerebellar signs. In this review, we aim to briefly review the main causes of MCNP associated with the eye.

Introduction

Cranial nerve (CN) pareses are neuropathies that may be classified as isolated/single or combined/multiple, unilateral or bilateral, and painless or painful. Additionally, they may present in an acute, subacute, or recurrent manner. Combined or multiple cranial nerve paresis (MCNP) can occur due to a variety of different causes, such as some syndromes or systemic diseases and extracranial or intracranial pathologies (brain stem, meninges, and base of the skull (BoS)).1−6 CN pareses involving CN III (n.oculomotorius), IV (n.trochlearis), V (n.trigeminus), VI (n.abducens), and VII (n.facialis) are eye-associated MCNP.6−9 The common causes of eye-associated MCNP often include cavernous sinus syndrome (CSS), superior orbital fissure syndrome (SOFS), orbital apex syndrome (OAS), and cerebellopontine angle syndrome (CPAS).8−18 A recent report by Keane, including the largest series of 979 cases of MCNP, demonstrates that the most common causes of MCNP are tumors (30%).4 The other common causes, in order of frequency, are infarctions, trauma, infection, Guillain-Barré syndrome, Fisher syndrome, idiopathic cavernous sinusitis, surgery, multiple sclerosis, demyelinating encephalomyelitis, and diabetes mellitus. In this series, the most common tumor was schwannoma (17%). The other tumoral causes of MCNP are metastases, meningioma, lymphoma, pontine glioma, nasopharyngeal carcinoma, pituitary adenoma, chordoma, leukemia, epidermoid tumor, and glomus jugulare tumor, respectively. Keane reported that the common pathologic sites were the cavernous sinus (CS) (25%), clivus and BoS (13%), subarachnoid space (10%), and cerebellopontine angle (CPA) (8%). The most commonly involved CN is CN VI, followed by CN V and CN IV.4

Cavernous sinus syndrome (CSS)

The cavernous sinus (CS) is a venous plexus located between the periosteal and dural layers of the meninges at the central BoS, on both sides of the sella turcica (pituitary). Structures passing through the CS are the internal carotid artery and its sympathetic plexus, CNs III, IV, VI, V1 (n.ophthalmicus), and V2 (n.maxillaris), and the superior and inferior orbital veins (SOV and IOV).1−9,15−18 Cavernous sinus syndrome is characterized by MCNP manifesting with ophthalmoplegia, ptosis, facial sensory loss, proptosis, orbital (ocular and conjunctival) congestion, sympathetic disturbance, and Horner's syndrome, due to paresis of CNs III, IV, and VI, which are responsible for ocular movements and pupillary function, and of at least one branch of CN V. However, CSS does not involve the optic nerve.
The most common causes of CSS are neoplastic (metastatic, including head and neck tumors, and primary tumors such as lymphoma), traumatic, vascular (aneurysms, fistulas, and thrombosis), congenital, infectious (fungal infection), and inflammatory or granulomatous pathologies involving the CS, including Tolosa-Hunt syndrome (THS), an idiopathic granulomatous inflammation involving the CS.1−9,15−18 In pure CSS, there is involvement of CNs III, IV, and VI, of V1, V2, and V3 (n.mandibularis), and Horner's syndrome. Anterior CSS involves only the CN V1 branch. Middle CSS is caused by involvement of CN V1 and V2. Posterior CSS occurs with involvement of the whole of CN V.14 In CSS associated with Horner's syndrome, pain is present, in contrast to SOFS.15 Involvement of all four CNs has been reported in about 78% of cases.19

Superior orbital fissure syndrome (SOFS)

The superior orbital fissure (SOF) lies at the back of the orbit between the lesser and greater wings of the sphenoid bone. It contains the SOV and IOV, the superior and inferior branches of CN III, CN IV, CN VI, and CN V1 with its branches, including the lacrimal, frontal, supraorbital, supratrochlear, and nasociliary nerves. SOFS, which is also known as Rochon-Duvigneaud syndrome, is characterized by retro-orbital paralysis of the extraocular muscles (EOMs) and impairment of CN V1 without optic neuropathy. SOFS is a symptom complex caused by compression of structures within the SOF just anterior to the orbital apex.1−4,8−14 The main difference between SOFS and OAS is the absence of optic nerve involvement in SOFS.11 In SOFS, CNs III, IV, VI, and V1 are involved. The most common cause of SOFS is trauma (craniomaxillofacial injury), including motorcycle accidents and zygomatic and orbital fractures. Other causes of SOFS are tumors, including lymphoma and rhabdomyosarcoma; infectious diseases, including syphilis (syphilitic periostitis), meningitis, sinusitis, and herpes zoster; ischemic, vasculitic, or inflammatory diseases, including THS, sarcoidosis, systemic lupus erythematosus (SLE), Wegener's granulomatosis, polyarteritis nodosa, and temporal arteritis; and vascular events, including carotid-cavernous fistulas (CCF), retro-orbital haematoma, and carotid aneurysm. However, it may also be idiopathic.1−4,8−14 In SOFS, the compression and inflammation of neural structures responsible for the main clinical symptoms are caused by a bony fragment from facial fractures, or by edema, bleeding, or a mass in the SOF. CN VI is most commonly damaged because of its location within the central SOF, close to the greater wing. CN IV is the least commonly involved CN because of its protection by the common tendinous ring.1−4,8−14 The main clinical features of SOFS include ophthalmoplegia due to damage to CNs III, IV, and VI; ptosis due to CN III injury and loss of sympathetic input; proptosis due to decreased tension in the EOMs with loss of innervation; a fixed, dilated pupil due to loss of parasympathetic innervation of the pupil by CN III; lacrimal hyposecretion, eyelid or forehead anaesthesia, and decreased corneal sensation due to damage to CN V1; chemosis and bruits caused by vascular congestion; and occasionally visual loss due to mechanical compression of CN II (n.opticus). Proptosis, eyelid swelling, and chemosis indicate significant orbital masses. Additionally, patients with SOFS caused by facial trauma may show findings such as subconjunctival hemorrhage, periorbital ecchymosis, and soft tissue contusion attributable to the trauma.
Partial SOFS is involvement of the central sector, associated with isolated involvement of CN III, CN VI, and the nasociliary nerve.1

Tolosa-Hunt syndrome (THS)

Tolosa-Hunt syndrome is a cause of painful ophthalmoplegia of unknown etiology, typically occurring in the fifth decade and located in the SOF and anterior CS. The cause of the pain is granulomatous inflammation due to infiltration of the septa or walls of the SOF or CS by lymphocytes and macrophages. A good response of the pain to steroid treatment within 72 hours is critical for the diagnosis of THS. Common diagnostic criteria for THS are severe unilateral peri- or retro-orbital pain of a perforating quality; ipsilateral ophthalmoplegia, with or without periarterial sympathetic fiber and optic nerve involvement, in the absence of any local or systemic pathology; exclusion of traumatic, infective, vascular, neoplastic, metabolic, and inflammatory causes; and episodes with spontaneous remissions. Paralysis of one or more of CNs III, IV, and VI occurs, and CN V1 and V2 may also be affected. The most commonly affected CN is CN III. If the inflammation reaches the OA, CN II dysfunction may develop. Horner's syndrome may sometimes be associated with THS due to the involvement of carotid sympathetic fibers. Pupillary dysfunction may occur in some cases because of the involvement of the parasympathetic fibers of CN III.20−21

Orbital apex syndrome (OAS)

The orbital apex (OA) is the most posterior part of the pyramidal-shaped orbit at the craniofacial junction. The OA includes the region (tendinous ring) where the rectus EOMs originate, and also the entry of neurovascular structures transmitted from the intracranial compartment into the orbit through bony apertures (the optic canal, SOF, and inferior orbital fissure).1−4,8−11,17,22 Orbital apex syndrome is also known as Jacod syndrome. OAS is characterized by the involvement of CNs II, III, IV, and VI. The clinical characteristics of OAS are vision loss (if CN II involvement is present), optic neuropathy, and ophthalmoplegia. The etiologies of OAS are neoplastic pathologies (nasopharyngeal carcinoma, hematological tumors, neural tumors, lymphoma, metastasis, or tumors of the middle cranial fossa near the apex of the orbit); inflammatory causes such as idiopathic orbital inflammation, collagen vascular disease, sarcoidosis, systemic lupus erythematosus, granulomatosis with polyangiitis, giant cell arteritis, and thyroid disease; iatrogenic causes (following sinonasal surgery); orbital apex fracture; vascular events like carotid aneurysm; trauma; mucocele; and fungal infections such as Aspergillus and Mucormycosis. OAS is distinguished from SOFS by the location of the lesion, which in SOFS lies immediately anterior to the orbital apex.1−4,8−11,17,22

Cerebellopontine angle syndrome (CPAS)

The cerebellopontine angle (CPA) is a triangular area situated inferior to the tentorium, lateral to the pons, and ventral to the cerebellum. CPAS is characterized by MCNP involving CNs V, VII, and VIII, together with cerebellar signs. Its etiology includes vascular causes (intracavernous carotid aneurysm or posterior cerebral aneurysm, cavernous sinus thrombosis, migraine, giant cell arteritis), inflammatory causes (THS, meningitis, syphilis, tuberculosis and herpes zoster infection, Wegener's granulomatosis, sarcoidosis), tumors (primary: pituitary adenoma, meningioma, craniopharyngioma; secondary: nasopharyngeal carcinoma, lymphoma, and distant metastasis), traumatic pathologies, and CCF. CPAS arises from the closeness of the CPA to specific CNs.1−4,8−10,12,13
The presentation includes unilateral hearing loss (most common), speech impediments, imbalance, vertigo, tinnitus, disequilibrium, tremors, slowly progressive deafness, and other loss of motor control. CPAS is usually due to an acoustic schwannoma arising from the vestibular nerve. Other tumoral causes of CPAS are pontine glioma, meningioma, epidermoid tumor, and cerebellar astrocytoma. Diplopia due to CN VI palsy; papilledema due to raised intracranial pressure; fine, rapid gaze-evoked nystagmus to the side opposite the lesion and ipsilateral low-frequency, large-amplitude vertical nystagmus; ipsilateral corneal reflex loss; and facial weakness may be associated with the above-mentioned signs.1−4,8−10,12,13

Lateral medullary syndrome (LMS)

Lateral medullary syndrome (LMS) is a neurological symptom complex due to ischemia from atherothrombotic occlusion, most commonly of the vertebral artery or the posterior inferior cerebellar artery (PICA), in the brainstem. Its etiology includes stroke and multiple sclerosis.8,23 LMS is also called Wallenberg's syndrome, PICA syndrome, and vertebral artery syndrome. In LMS, CNs V, VIII, IX, and X are involved. It is characterized by contralateral sensory deficits affecting the trunk and extremities, and ipsilateral sensory deficits affecting the face and CNs.8,23 Clinical features include impaired pain and temperature sensation, ipsilaterally in the face and contralaterally in the arms and legs (due to the involvement of CN V and the spinothalamic tract, respectively); ipsilateral impaired taste sensation due to the involvement of the tractus and nucleus solitarius; nystagmus caused by the involvement of CN VIII; dysarthria and dysphonia; nausea/vomiting; dizziness, with nystagmus commonly associated with vertigo spells; falling caused by the involvement of Deiters' nucleus; imbalance in walking or maintaining posture (ataxia) caused by damage to the cerebellum or the inferior cerebellar peduncle; and dysphagia or difficulty swallowing caused by the involvement of the nucleus ambiguus.
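The syndrome-specific cranial nerve combinations reviewed above lend themselves to a simple lookup. The sketch below encodes them for teaching purposes only; the mapping follows the text above in simplified form (e.g. CN V2/V3 involvement in CSS varies with the anterior/middle/posterior subtype), and it is an illustrative aid, not a validated diagnostic tool.

```python
# Typical cranial nerve (CN) sets per syndrome, as summarized in this review.
SYNDROME_CNS = {
    "CSS":  {"III", "IV", "VI", "V1"},   # V2/V3 involvement varies by subtype
    "SOFS": {"III", "IV", "VI", "V1"},
    "OAS":  {"II", "III", "IV", "VI"},
    "CPAS": {"V", "VII", "VIII"},
    "LMS":  {"V", "VIII", "IX", "X"},
}

def candidate_syndromes(involved_cns):
    """Return syndromes whose typical CN set contains all involved CNs."""
    involved = set(involved_cns)
    return [name for name, cns in SYNDROME_CNS.items() if involved <= cns]

print(candidate_syndromes({"III", "VI", "V1"}))  # ['CSS', 'SOFS']
print(candidate_syndromes({"II", "III"}))        # ['OAS']
```

As the first example shows, CSS and SOFS share the same ophthalmic CN combination; per the text above, they are separated clinically by features such as optic nerve sparing, pain, and Horner's syndrome rather than by the CN set alone.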
The High-Dimensional Geometry of Binary Neural Networks

Recent research has shown that one can train a neural network with binary weights and activations at train time by augmenting the weights with a high-precision continuous latent variable that accumulates small changes from stochastic gradient descent. However, there is a dearth of theoretical analysis to explain why we can effectively capture the features in our data with binary weights and activations. Our main result is that the neural networks with binary weights and activations trained using the method of Courbariaux, Hubara et al. (2016) work because of the high-dimensional geometry of binary vectors. In particular, the ideal continuous vectors that extract out features in the intermediate representations of these BNNs are well-approximated by binary vectors in the sense that dot products are approximately preserved. Compared to previous research that demonstrated the viability of such BNNs, our work explains why these BNNs work in terms of the high-dimensional (HD) geometry. Our theory serves as a foundation for understanding not only BNNs but a variety of methods that seek to compress traditional neural networks. Furthermore, a better understanding of multilayer binary neural networks serves as a starting point for generalizing BNNs to other neural network architectures such as recurrent neural networks.

Introduction

The rapidly decreasing cost of computation has driven many successes in the field of deep learning in recent years. Given these successes, researchers are now considering applications of deep learning in resource-limited hardware such as neuromorphic chips, embedded devices, and smart phones (e.g. Neftci et al. (2016), Andri et al. (2017)). A recent success for both theoretical researchers and industry practitioners is that traditional neural networks can be compressed because they are highly over-parameterized. While there has been a large amount of experimental work dedicated to compressing neural networks (Sec. 2), we focus on the particular approach that replaces costly 32-bit floating point multiplications with cheap binary operations. Our analysis reveals a simple geometric picture based on the geometry of high-dimensional binary vectors that allows us to understand the successes of the recent efforts to compress neural networks.

Figure 1: a. Each oval corresponds to a tensor and the derivative of the cost with respect to that tensor. Rectangles correspond to transformers that specify forward and backward propagation functions. Associated with each binary weight, w_b, is a continuous weight, w_c, that is used to accumulate gradients. k denotes the kth layer of the network. b. Each binarize transformer has a forward function and a backward function. The forward function simply binarizes the inputs. In the backward propagation step, one normally computes the derivative of the cost with respect to the input of a transformer via the Jacobian of the forward function and the derivative of the cost with respect to the output of that transformer (δu ≡ dC/du, where C is the cost function used to train the network). Since the binarize function is non-differentiable, they use a smoothed version of the forward function for the backward function (in particular, the straight-through estimator of Bengio et al. (2013)).

Courbariaux, Hubara et al. (2016) showed how to train networks with binary weights and activations with stochastic gradient descent.
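To make the binarize transformer concrete, the following is a minimal numpy sketch of the forward/backward pair described in the Figure 1 caption. It is an illustrative reading of the scheme, not the authors' code; the function names, the gradient-clipping window on the straight-through estimator, and the placeholder upstream gradient are our own assumptions.

```python
import numpy as np

def binarize_forward(w_c):
    # Forward pass: deterministic binarization, theta(w_c) in {-1, +1}.
    return np.where(w_c >= 0, 1.0, -1.0)

def binarize_backward(w_c, delta_out, clip=1.0):
    # Backward pass: straight-through estimator (Bengio et al., 2013).
    # Propagate the gradient as if the forward map were a clipped identity.
    return delta_out * (np.abs(w_c) <= clip)

# One SGD step on the continuous latent weights w_c:
rng = np.random.default_rng(0)
w_c = 0.1 * rng.normal(size=512)      # continuous weights accumulate updates
w_b = binarize_forward(w_c)           # binary weights used on the forward pass
delta_wb = rng.normal(size=512)       # placeholder for dC/dw_b from backprop
w_c -= 0.01 * binarize_backward(w_c, delta_wb)
```

In a full training loop, w_b is recomputed from w_c at every iteration, so the continuous weights act purely as an accumulator for the small SGD updates.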
While their work has shown how to train such networks, the existence of neural networks with binary weights and activations needs to be reconciled with previous work that has sought to understand weight matrices as extracting out continuous features in data (e.g. Zeiler & Fergus (2014)).

Summary of contributions:

1. Angle Preservation Property: We demonstrate that binarization approximately preserves the direction of high-dimensional vectors. In particular, we show that the angle between a random vector (from a standard normal distribution) and its binarized version converges to arccos √(2/π) ≈ 37° as the dimension of the vector goes to infinity. This is an exceedingly small angle in high dimensions. Furthermore, we show that this property is present in the weight vectors of a network trained using the method of Courbariaux, Hubara et al. (2016).

2. Dot Product Preservation Property: We show that the batch-normalized weight-activation dot products, an important intermediate quantity in these BNNs, are approximately preserved under the binarization of the weight vectors. Relatedly, we argue that the continuous weights in this method aren't just a learning artifact: they correspond to continuous weights trained using an estimator of the true gradient. Finally, we argue that this dot product preservation property is a sufficient condition for the modified learning dynamics to approximate the true learning dynamics that would train the continuous weights.

3. Generalized Binarization Transformation (GBT): We show that the computations done by the first layer of the network on CIFAR10 are fundamentally different from the computations being done in the rest of the network, because the high-variance principal components are not randomly oriented relative to the binarization. Thus we recommend an architecture that uses a continuous convolution for the first layer to embed the image in a high-dimensional binary space, after which it can be manipulated with cheap binary operations. Furthermore, we hypothesize that a GBT (rotate, binarize, rotate back) will be useful for dealing with low-dimensional data embedded in a HD space that is not randomly oriented relative to the axes of binarization.

Related Work

Neural networks that achieve good performance on tasks such as IMAGENET object recognition are highly computationally intensive. For instance, AlexNet has 61 million parameters and executes 1.5 billion operations to classify one 224 by 224 image (30 thousand operations/pixel) (Rastegari et al. (2016)). There has been a substantial amount of work to reduce this computational cost for embedded applications. First, there are a variety of approaches that seek to compress pre-trained networks. Kim et al. (2015) use a Tucker decomposition of the kernel tensor and fine-tune the network afterwards. Han et al. (2015b) train a network, prune low-magnitude connections, and retrain. Han et al. (2015a) extend their previous work to additionally include a weight-sharing quantization step and Huffman coding of the weights. Third, there are numerous works on training networks that have quantized weights and/or activations. Classically, Bengio et al. (2013) looked at a variety of estimators for the gradient through a stochastic binary unit. Courbariaux et al. (2015) train networks with binary weights, and then later with binary weights and activations (Courbariaux, Hubara et al. (2016)). Rastegari et al. (2016) replace a continuous weight matrix with a scalar times a binary matrix (and have a similar approximation for weight-activation dot products).
Kim & Smaragdis (2016) also train networks in which both the weights and activations are binary. Beyond merely seeking to compress neural networks, there is a variety of papers that seek to analyze the internal representations of neural networks. Agrawal et al. (2014) found that feature magnitudes in higher layers do not matter (e.g. binarizing features barely changes classification performance). Others analyze the robustness of neural network representations to a collection of different distortions. Dosovitskiy & Brox (2016) observe that binarizing features in intermediate layers of a CNN and then using backpropagation to find an image with those features leads to relatively little distortion of the image compared to dropping out features. These works naturally lead into our work, where we seek to better understand the representations in neural networks based on the geometry of HD binary vectors.

Theory and Experiments

In this section, we outline two theoretical predictions and then verify them experimentally. We train a binary neural network on CIFAR-10 (same learning algorithm and architecture as in Courbariaux, Hubara et al. (2016)). This convolutional neural network has six layers of convolutions, all of which have a 3 by 3 spatial kernel. The numbers of feature maps in each layer are 128, 128, 256, 256, 512, and 512. After the second, fourth, and sixth convolutions, we do a 2 by 2 max pooling operation. Then we have two fully connected layers with 1024 units each. Each layer is followed by a batch norm layer. We note that the dimension of the weight vector in consideration (i.e. with the convolution converted to a matrix multiply) is the patch size (= 3 * 3 = 9) times the number of channels. We also carried out experiments using MNIST and got similar results.

Preservation of Direction During Binarization

In the hyperdimensional computing theory of Kanerva (2009), one of the key ideas is that two random, high-dimensional vectors of dimension d whose entries are chosen uniformly from the set {−1, 1} are approximately orthogonal (by the central limit theorem, the cosine of the angle between two such random vectors is normally distributed with µ = 0 and σ ∼ 1/√d, so cos θ ≈ 0 → θ ≈ π/2). We apply this approach of analyzing the geometry (i.e. angle distributions) of high-dimensional vectors to binary vectors.

Figure 2: Binarization of Random Vectors Approximately Preserves their Direction: (a) Distribution of angles between two random vectors, and between a vector and its binarized version, for a rotationally invariant distribution. Looking at the angle between a random vector of dimension d and its binarized version, we get the red curves. We note that the distribution is peaked near the d → ∞ limit of arccos √(2/π) ≈ 37° (SI, Sec. 1). Is 37° a small angle or a large angle? In order to think about that, consider the distribution of angles between two random vectors (blue curves). We see that for low dimensions, the angle between a vector and its binarized version has a high probability of being similar to a pair of two random vectors. However, when we get to a moderately high dimension, we see that the red and blue curves are well separated. (b) Angle distribution between continuous and binary weight vectors by layer for a binary CNN trained on CIFAR-10. For the higher layers, we see a relatively close correspondence with the theory, but with a systematic deviation towards slightly larger angles. d is the dimension of the filters at each layer. (c) Standard deviations of the angle distributions from (b) by layer.
In order to test the applicability of our theory of Gaussian random vectors to real neural networks, we train a multilayer binary CNN on CIFAR-10 (using the Courbariaux et al. (2016) method) and look at the weight vectors in that trained network. We see a close correspondence between the experimental results and our theory for the angles between the binary and continuous weights (Fig. 2). We note that there is a small but systematic deviation from the theory towards larger angles for the higher layers of the network.

Dot Product Preservation as a Sufficient Condition for Sensible Learning Dynamics

One reasonable question to ask is: are these so-called 'continuous weights' just a learning artifact without a clear correspondence to the binary weights? While we know that w_b = θ(w_c), there are many continuous weights that map onto a particular binary weight vector. Which one do we find when we apply the method of Courbariaux et al. (2016)? As we discuss below, we get the continuous weight that preserves the dot products with the activations.

The key to our analysis is to focus on the transformers in our network whose forward and backward propagation functions are not related in the way that they would normally be related in typical gradient descent. We show that the modified gradient that we are using can be viewed as an estimator of the true gradient that would be used to train the continuous weights in traditional backpropagation. Furthermore, we show that a sufficient property for this estimator to be a good one is that the dot products of the activations with the pre-binarized and post-binarized weights are proportional.

Suppose we have a neural network where we allocate two tensors, u and v (and the associated derivatives of the cost with respect to those tensors, δu and δv). Suppose that the loss as a function of v is L(x)|_{x=v}. Further, there are two potential forward propagation functions, f and g. If we trained our network under normal conditions using g as the forward propagation function, then we would compute v = g(u) on the forward pass and δu = g′(u) ⊙ δv = g′(u) ⊙ L′(g(u)) on the backward pass. In the modified backpropagation scheme, we compute v = f(u) on the forward pass but keep the backward pass of g, so that δu = g′(u) ⊙ L′(f(u)). A sufficient condition for the updates δu to be the same (up to an overall scale) is L′(f(u)) ∼ L′(g(u)), where a ∼ b means that the vector a is a scalar times the vector b. Now we specialize this argument to the binarize block that binarizes the weights in our networks. Here, u is the continuous weight, w_c, f(u) is the pointwise binarize function, g(u) is the identity function (for the weights, g is as in Fig. 1), and L is the loss of the network as a function of the weights in a particular layer.
Given our architecture, we can write L(x) = M(a · x), where a are the activations corresponding to that layer (a is binary for all except the first layer) and M is the loss as a function of the weight-activation dot products. Then L′(x) = M′(a · x) ⊙ a, where ⊙ denotes a pointwise multiply. Thus the sufficient condition is M′(a · w_b) ∼ M′(a · w_c). Since the dot products are followed by a batch normalization, which removes any overall scale, it suffices to have a · w_b ∼ a · w_c. We call this final relation the Dot Product Preservation (DPP) property. In summary, the learning dynamics where we use g for the forward and backward passes (i.e. training the network with continuous weights) is approximately equivalent to the modified learning dynamics (f on the forward pass, and g on the backward pass) when we have the DPP property.

We also come at this problem from another direction. In SI, Sec. 2 we work out the learning dynamics of the modified backprop scheme in the case of a one layer neural network that seeks to do regression (this ends up being linear regression with binary weights). In this case, the learning dynamics for the weights end up being Δw_c ∼ C_yx − θ(w_c) C_xx, where C_yx is the input-output correlation matrix and C_xx is the input covariance matrix. Since θ forces the weight matrix to be binary, this equation cannot be satisfied exactly in general conditions. Specializing to the case of an identity input covariance matrix, we show that E(θ(w_c)) = C_yx. Intuitively, the entries of the weight matrix oscillate between +1 and −1 in the correct proportion in order to get the weight matrix correct in expectation. In high dimensions, these oscillations are likely to be out of phase, leading to a low variance estimator.

Indeed, in our numerical experiments on CIFAR-10, we see that the dot products of the activations with the pre-binarization and post-binarization weights are highly correlated (Fig. 3). Likewise, we verify a second relation that corresponds to ablating the other instance of the binarize transformer in the network, the transformer that binarizes the activations: w_b · a_c ∼ w_b · a_b, where a_c denotes the pre-binarized (post-batch norm) activations (Fig. 4).

Figure 3: Binarization Preserves Dot Products: In this figure, we verify our hypothesis that binarization approximately preserves the dot products that the network uses for computations. We train a convolutional neural network on CIFAR-10. Each panel shows a 2d histogram of the dot products between the binarized weights and the activations (horizontal axis) and the dot products between the continuous weights and the activations (vertical axis); the panel titles give the layer and the Pearson correlation coefficient r (e.g. Layer 6: r = 0.98). Note the logarithmic color scaling. We see that these dot products are highly correlated, verifying our theory. Note they may differ up to a scaling constant due to the subsequent batch norm layer. The top left panel (labeled as Layer 1) corresponds to the input and the first convolution. Note that the correlation is weaker in the first layer. The shaded quadrants correspond to dot products where the sign changes when replacing the binary weights with the continuous weights. Notice that for all but the first layer, a very small fraction of the dot products lie in these off-diagonal quadrants.

For the practitioner, we recommend checking the DPP property in order to assess the areas in which the network's performance is being degraded by the compression of the weights or activations.
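As a concrete illustration of how such a check might look (a sketch of our own, not code from the paper; the function name and argument layout are hypothetical), the per-layer diagnostic reduces to a single correlation coefficient:

import numpy as np

def dpp_correlation(w_c, acts):
    # Pearson correlation between the activation dot products taken with
    # the continuous weights and with their binarized version.
    #   w_c:  (out_features, in_features) continuous weight matrix
    #   acts: (n_samples, in_features) activations feeding this layer
    w_b = np.sign(w_c)            # pointwise binarization theta(w_c)
    dots_c = acts @ w_c.T         # a . w_c
    dots_b = acts @ w_b.T         # a . w_b
    return np.corrcoef(dots_c.ravel(), dots_b.ravel())[0, 1]

# r close to 1 suggests binarization is not corrupting what this layer
# computes; a noticeably lower r flags a layer whose performance is
# likely being degraded by weight compression.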
Impact on Classification: As we've argued, the quantity that the network cares about, the batch normalized weight-activation dot products, is preserved under binarization of the weights. It is also natural to ask to what extent the classification performance depends on the binarization of the weights. In our experiments on CIFAR-10, if we remove the binarization of the weights on all of the convolutional layers, the classification performance drops by only 3 percent relative to the original network. Looking at each layer individually, we see that removing the weight binarization for the first layer accounts for this entire percentage, and removing the binarization of the weights for each other layer causes no degradation in performance. We note that removing the binarization of the activations unsurprisingly has a substantial impact on the classification performance, because that removes the main non-linearity of the network.

Permutation of Activations Reveals Fundamental Difference Between First Layer and Subsequent Layers

Looking at the correlations in Fig. 3, we see that the first layer has a much smaller dot product correlation than the other layers. In order to understand this observation better, we investigate the different factors that lead to the dot product correlation. For instance, it could be the case that the correlation between the two dot products is due to the two weight vectors being closely aligned. Another explanation is that the weight vectors are well-aligned with the informative directions in the data. To study this, we apply a random permutation to the activations in order to generate a distribution with the same marginal statistics as the original data but independent joint statistics. Such a transformation gives us a distribution with a correlation equal to the normalized dot product of the weight vectors (SI, Sec. 3).

Figure 4: Left: Permuting the activations shows that the correlations observed in Fig. 3 are not merely due to correlations between the binary and continuous weight vectors. The correlations are due to these weight vectors corresponding to important directions in the data. Right: Activation Binarization Preserves Dot Products: Each panel shows a 2d histogram of the dot products of the binarized weights with the binarized activations (vertical axis) and with the post-batch norm (but pre activation binarization) activations (horizontal axis); the panel titles give the layer and the Pearson correlation coefficient (e.g. Layer 5: r = 0.97). Again, we see that the binarization transformer does little to corrupt the dot products between weights and activations.

As we can see in Fig. 4, the correlations for the higher layers decrease substantially but the correlation in the first layer increases (for the first layer, the shuffling operation randomly permutes the pixels in the image). Thus we demonstrate that the binary weight vectors in the first layer are not well-aligned with the continuous weight vectors relative to the input data. We hypothesize that the core issue at play is that the input data is not randomly oriented relative to the axes of binarization. In order to be clear on what we mean by the axes of binarization, first consider the Generalized Binarization Transformation (GBT), θ_R(x) = R^T θ(R x), where x is a column vector, R is a rotation matrix, and θ is the pointwise binarization function from before. We call the rows of R the axes of binarization.
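A minimal sketch of the GBT ("rotate, binarize, rotate back"); drawing R from a QR decomposition to obtain a generic orthonormal basis is our illustrative choice, not the paper's:

import numpy as np

def gbt(x, R):
    # Generalized Binarization Transformation: rotate into the basis given
    # by the rows of R, binarize pointwise, then rotate back.
    return R.T @ np.sign(R @ x)

rng = np.random.default_rng(0)
d = 27                                             # e.g. a 3x3 patch with 3 color channels
R, _ = np.linalg.qr(rng.standard_normal((d, d)))   # random orthonormal matrix
x = rng.standard_normal(d)

# With R = I the GBT reduces to ordinary pointwise binarization.
assert np.allclose(gbt(x, np.eye(d)), np.sign(x))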
If R is the identity matrix, then we reduce back to our original binarization function and the axes of binarization are just the canonical basis vectors (..., 0, 1, 0, ...). Consider the 27 dimensional input to the first set of convolutions in our network: 3 color channels of a 3 by 3 patch of an image from CIFAR-10 with the mean removed. 3 PCs capture 90 percent of the variance of this data and 4 PCs capture 94.5 percent of the variance. Furthermore, these PCs aren't randomly oriented relative to the binarization axes. For instance, the first two PCs are spatially uniform colors. More generally, natural images (such as those in IMAGENET) will have the same issue. Translation invariance of the pixel covariance matrix implies that the principal components are the filters of the 2D Fourier transform. Scale invariance implies a 1/f^2 power spectrum, which results in the largest PCs corresponding to low frequencies. Stepping back, this control gives us important insight: the first layer is fundamentally different from the other layers due to the non-random orientation of the data relative to the axes of binarization.

Practically speaking, we have two recommendations. First, we recommend an architecture that uses a set of continuous convolutional weights to embed images in a high-dimensional binary space, after which they can be manipulated efficiently with binary operations. While there isn't a large accuracy degradation on CIFAR-10, these observations are going to be more important on datasets with larger images such as IMAGENET. We note that this theoretically grounded recommendation is consistent with previous empirical work. Han et al. (2015b) find that compressing the weights of a given layer by a fixed fraction has the highest impact on performance when done on the first layer. Zhou et al. (2016) find that accuracy degrades by about 0.5 to 1 percent on SVHN when quantizing the first layer weights. Second, we recommend experimenting with a GBT where the rotation is chosen so that it can be computed efficiently. This solves the problem of low-dimensional data embedded in a high dimensional space that is not randomly oriented relative to the binarization function.

Conclusion

Neural networks with binary weights and activations have similar performance to their continuous counterparts with substantially reduced execution time and power usage. We provide an experimentally verified theory for understanding how one can get away with such a massive reduction in precision based on the geometry of HD vectors. First, we show that binarization of high-dimensional vectors preserves their direction in the sense that the angle between a random vector and its binarized version is much smaller than the angle between two random vectors (Angle Preservation Property). Second, we take the perspective of the network and show that binarization approximately preserves weight-activation dot products (Dot Product Preservation Property). More generally, when using a network compression technique, we recommend looking at the weight-activation dot product histograms as a heuristic to help localize the layers that are most responsible for performance degradation. Third, we discuss the impacts of the low effective dimensionality on the first layer of the network and recommend either using continuous weights for the first layer or a Generalized Binarization Transformation.
Such a transformation may be useful for architectures like LSTMs, where the update for the hidden state declares a particular set of axes to be important (e.g. by taking the pointwise multiply of the forget gates with the cell state). More broadly speaking, our theory is useful for analyzing a variety of neural network compression techniques that transform the weights, activations or both to reduce the execution cost without degrading performance.

Expected Angles

We draw random n dimensional vectors from a rotationally invariant distribution and compare the angle between two random vectors with the angle between a vector and its binarized version. We note that a rotationally invariant distribution can be factorized into a pdf for the magnitude of the vector times a distribution on angles. In the expectations that we are calculating, the magnitude cancels out and there is only one rotationally invariant distribution on angles. Thus it suffices to compute these expectations using a Gaussian.

Consider a vector, v, chosen from a standard normal distribution of dimension n.

- Distribution of angles between two random vectors. Since a Gaussian is a rotationally invariant distribution, we can say without loss of generality that one of the vectors is (1, 0, 0, ..., 0). Then the cosine of the angle between the two vectors is ρ = v_1/||v||. While we have the exact distribution, we note that E(ρ) = 0 due to the symmetry of the distribution.

- Distribution of angles between a vector and its binarized version. Here the binarized vector is θ(v) with ||θ(v)|| = √n, so the cosine of the angle is (Σ_i |v_i|)/(√n ||v||). By the law of large numbers, the numerator is approximately n E(|v_1|) = n √(2/π) and ||v|| is approximately √n, so the cosine converges to √(2/π).

Roughly speaking, we can see that the angle between a vector and a binarized version of that vector converges to arccos √(2/π) ≈ 37°, which is a very small angle in high dimensions.

An Explicit Example of Learning Dynamics

In this subsection, we look at the learning dynamics for the BNN training algorithm in a simple case and gain some insight about the learning algorithm. Consider the case of regression where we try to predict y from x with a binary linear predictor. Using a squared error loss, we have L = (y − ŷ)^2 = (y − w_b x)^2 = (y − θ(w_c) x)^2. (In this notation, x is a column vector.) Taking the derivative of this loss with respect to the continuous weights and using the rule for backpropagating through the binarize function, we get Δw_c ∼ (y − θ(w_c) x) x^T. Finally, averaging over the training data, we get

Δw_c ∼ C_yx − θ(w_c) C_xx    (1)

It is worthwhile to compare this equation to the corresponding equation from typical linear regression: Δw_c ∼ C_yx − w_c C_xx. For simplicity, let's consider the case where C_xx is the identity matrix. In this case, all of the components of w become independent and we get the equation δw = ε (α − θ(w)), where ε is the learning rate and α is the entry of C_yx corresponding to a particular element, w. If we were doing regular linear regression, it is clear that the stable point of these equations is w = α. Since we binarize the weight, that equation cannot be satisfied. However, it can be shown (see the rough proof below) that in this special case of binary weight linear regression, E(θ(w_c)) = α. Intuitively, if we consider a high dimensional vector, the fluctuations of each component are likely to be out of phase, so w_b · x ≈ w_c · x is going to be correct in expectation with a variance that scales as 1/n. During the actual learning process, we anneal the learning rate to a very small number, so the particular state of a fluctuating component of the vector is frozen in. Relatedly, the equation C_yx ≈ w C_xx is easier to satisfy in high dimensions, whereas in low dimensions, we only satisfy it in expectation.

Rough proof that E(θ(w_c)) = α: Suppose that |α| ≤ 1.
The basic idea of these dynamics is that you are taking steps of size proportional to ε whose direction depends on whether w > 0 or w < 0. In particular, if w > 0, then we take a step −ε (1 − α), and if w < 0, we take a step ε (α + 1). It is evident that after a sufficient burn-in period, |w| ≤ ε max(1 − α, 1 + α) ≤ 2ε. Suppose w > 0 occurs for a fraction p of the steps and w < 0 for a fraction 1 − p. In order for w to be in equilibrium, oscillating about zero, we must have that these steps balance out on average: p (1 − α) = (1 − p)(1 + α) → p = (1 + α)/2. Then the expected value of θ(w) is (+1) p + (−1)(1 − p) = 2p − 1 = α. When |α| > 1, the dynamics diverge because α − θ(w) always has the same sign. This divergence demonstrates the importance of some normalization technique, such as batch normalization, or of attempting to represent w with a constant times a binary matrix.

Dot Product Correlations After Activation Permutation

Suppose that we look at A = w · a and B = v · a, where a are now the randomly permuted activations. What does the joint distribution of A and B look like? To answer this, we look at the correlation between A and B and show that it equals the correlation between w and v. First, let us assume that p(a) = Π_i f(a_i), with E(a_i) = 0 and E(a_i^2) = σ^2. Then E(A) = E(B) = 0. Now we compute E(AB) = Σ_{i,j} w_i v_j E(a_i a_j) = σ^2 (w · v). Likewise, E(A^2) = σ^2 (w · w) and E(B^2) = σ^2 (v · v). Thus the correlation coefficient is (w · v)/(|w| |v|), as desired.
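As a quick numerical sanity check of the oscillation argument above (our own sketch; the learning rate, target α and step count are arbitrary choices):

import numpy as np

eps, alpha, steps = 1e-2, 0.3, 200_000
w, signs = 1e-3, []
for _ in range(steps):
    s = 1.0 if w > 0 else -1.0      # theta(w), breaking the tie at 0 arbitrarily
    w += eps * (alpha - s)          # the scalar dynamics delta_w = eps * (alpha - theta(w))
    signs.append(s)

# After burn-in, w oscillates about zero and the time-average of theta(w)
# approaches alpha, matching E(theta(w_c)) = alpha.
print(np.mean(signs[steps // 2:]))  # ~0.3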
Skin manifestations of neuroendocrine neoplasms: review of the literature

Neuroendocrine neoplasms (NENs) are a heterogeneous group of rare tumours derived from peptidergic neurons and specialized neuroendocrine cells capable of secreting various peptides or amines. These cells may be present in endocrine tissue or diffused in the tissues of the digestive or respiratory system. The article reviews the characteristic features of NENs, with particular emphasis on skin manifestations, such as necrolytic migratory erythema (NME), tongue inflammation, angular cheilitis, venous thrombosis and alopecia in glucagonoma, and "flushing", "lion face", pellagra skin symptoms and "scleroderma-like features without Raynaud's phenomenon" in carcinoid tumours. The paper also presents the clinical picture of the neuroendocrine tumour of the skin - Merkel cell carcinoma. The aim of this study was to draw attention to the need for precise and comprehensive diagnosis of the patients, with particular emphasis on skin lesions as a revelator of neuroendocrine tumours. This management allows for the early implementation of appropriate treatment.

Introduction

Neuroendocrine neoplasms (NENs) are a heterogeneous group of rare tumours derived from peptidergic neurons and specialized neuroendocrine cells capable of secreting various peptides or amines [1]. A characteristic feature is the expression of neuroendocrine markers in the cells of these tumours, including an increased amount of somatostatin receptor protein (SSTR) [2]. These cells may be present in endocrine tissue (e.g. pituitary, parathyroid, adrenal), in glandular tissue (e.g. thyroid, pancreas) or diffused in the tissues of the digestive or respiratory system [3]. Among the neuroendocrine neoplasms, we distinguish tumours with different degrees of histological malignancy ("G" grading), ranging from highly differentiated NENs (from G1 to G3) to poorly differentiated neuroendocrine cancers (NECs) with a high degree of malignancy [4]. Gastrointestinal NENs (GEP-NENs) are mostly malignant tumours. The clinical picture of an active tumour is usually dominated by symptoms caused by excessive hormone secretion, while the size of the tumour is small, which makes it difficult to locate. The neuroendocrine characteristics of clinically inactive tumours can often be demonstrated only by immunohistochemical examination [4]. Rarely occurring mixed neoplasms - MiNENs (mixed neuroendocrine non-neuroendocrine neoplasms) - are most often located in the pancreas, containing components derived from the exocrine part of the pancreas and from neuroendocrine cells; treatment is the same as for other pancreatic cancers [6]. NECs represent about 0.5% of all cancers. In the last three decades, the incidence of these cancers has increased. The crude incidence is about 0.2/100,000. The risk of the disease increases with age and reaches a peak between 50 and 70 years of age. Based on available data, the incidence was estimated at 35/100,000/year. Unfortunately, most NENs are diagnosed at advanced stages of development, which makes effective treatment difficult. In patients from Western countries, primary tumours are mainly located in the small intestine, rectum and pancreas [8][9][10]. Different types of neuroendocrine tumours cause different symptoms, depending on the location of the tumour and whether the NET is active or inactive. Active NETs are defined on the basis of the presence of clinical symptoms resulting from excessive hormone secretion by the tumour. Inactive NETs do not secrete hormones.
They may produce symptoms caused by tumour growth [11]. The common feature is the secretion of hormones or bioactive substances, which cause both extensive multisystem effects and often various skin symptoms [10]. This article reviews the main dermatological manifestations of NENs. The aim of this study was to draw attention to the need for precise and comprehensive diagnosis of the patient, with particular emphasis on skin lesions as a revelator of NENs.

Skin manifestations of neuroendocrine tumours

Glucagonoma

Glucagonomas are rare tumours derived from the α cells of the pancreas. These neoplasms exhibit typical features of islet cell tumours; they are usually 2 to 25 cm in size and are most often located in the tail of the pancreas. According to the World Health Organization (WHO) classification of gastrointestinal tumours, glucagonoma is a type of active pancreatic neuroendocrine neoplasm (pNEN) [14]. Despite its mild histological appearance, most of these tumours are malignant and prone to metastases, which are often already present at the time of cancer diagnosis. Glucagonoma syndrome is an extremely rare disease with an estimated prevalence of 1/20,000,000/year. The peak incidence is in the fifth decade of life. Rarely, glucagonoma may be associated with multiple endocrine neoplasia (MEN) type 1 [15][16][17]. Glucagonoma is a neuroendocrine tumour of the pancreas (pNET); it secretes glucagon and causes symptoms called glucagonoma syndrome [18]. The clinical syndrome that is classically associated with glucagonoma includes necrolytic migratory erythema (NME), abdominal pain, diarrhoea, constipation, weight loss, diabetes, anaemia, lip inflammation, venous thrombosis and neuropsychiatric symptoms. NME and weight loss occur in approximately 65% to 70% of patients at the time of diagnosis [18]. Diabetes mellitus affects 75% to 95% of patients with glucagonoma. Hyperglycaemia is usually mild and easily controlled by diet and oral hypoglycaemic agents, and is not associated with diabetic ketoacidosis because β-cell function is preserved [19]. Surgical excision is currently the only fully effective method of glucagonoma treatment. An infusion of somatostatin analogues (SSA) and/or amino acid solution may cause rapid resolution of symptoms [20]. Transdermal chemotherapy, radiotherapy and peptide receptor radioligand therapy can also be useful [18,21].

Skin changes in the course of glucagonoma

NME initially manifests itself as erythematous papules or plaques covering the face, perineum and limbs. Skin lesions usually occur in the periorificial, flexural and acral regions; they resemble changes associated with zinc deficiency [22,23]. Over the following 7 to 14 days the lesions enlarge and then subside, leaving brown, indurated areas in the central part. On the periphery there are blisters (with a tendency to epidermal exfoliation) and erosions covered with crusts. The affected areas are often itchy and painful [19,[21][22][23]. The disorders in the structure of the epidermis causing the skin changes observed in NME are probably the result of several interdependent factors, including hypoaminoacidemia, zinc and essential fatty acid (EFA) deficiency and the induction of inflammatory mediators in the epidermis [24]. Histology of NME reveals parakeratosis, loss of the granular layer, necrosis and separation of the upper epidermis with vacuolization of the keratinocytes, dyskeratotic keratinocytes, and neutrophils in the upper epidermis [25].
In addition, patients with glucagonoma may develop tongue inflammation (a red, shiny/smooth tongue), angular cheilitis, venous thrombosis and alopecia. Additionally, like other ulcerative dermatoses, NME may be complicated by secondary skin infections, most commonly caused by Candida and Staphylococcus aureus [26,27].

Carcinoid tumours

Carcinoid tumours are rare, slow-growing tumours derived from the middle part of the gastrointestinal tract. The incidence rate is estimated at 5.25/100,000 [8,28]. According to the new classification, carcinoid refers only to tumours that secrete serotonin. Histopathologically, these are highly differentiated serotonin-producing neuroendocrine tumours. Depending on the location of the tumour and the presence of metastases, in addition to serotonin, carcinoid tumours may also secrete histamine, corticotropin, dopamine, substance P, neurotensin, prostaglandins, kallikrein, bradykinin and tachykinin. Most of the tumours are located in the gastrointestinal tract, the main bronchi and the lungs. They constitute about 50% of gastro-entero-pancreatic neuroendocrine tumours (GEP-NETs) [4,29,30]. Single cases of primary cutaneous carcinoid foci and of carcinoid metastases to the skin have also been described. The primary foci took the form of single, hard, non-inflammatory, dome-shaped nodules, and the metastatic lesions the form of pink, fast-growing cutaneous or subcutaneous nodules [28,[30][31][32][33]. About 10% of patients with a carcinoid tumour develop a syndrome of symptoms called carcinoid syndrome. It is usually caused by the spread of the neuroendocrine neoplasm. The symptoms mainly concern the gastrointestinal tract, respiratory system, cardiovascular system and skin. The criteria for diagnosing carcinoid syndrome include: 1) the presence of metastases of a neuroendocrine neoplasm to the liver or a primary tumour in the lungs; 2) peripheral vasomotor symptoms (flushing - a few-minute redness of the face and neck) with tachycardia, dizziness, sometimes with swelling and excessive sweating, gradually leading to permanent telangiectasias; 3) gastrointestinal symptoms - watery diarrhoea (occurring in 30-80% of patients) with concomitant colic-like pain; 4) bronchospasm (rarely observed). Other symptoms include a drop in blood pressure, headaches, heart palpitations, weakness, weight loss and arthritis. In the course of the carcinoid syndrome, serotonin-induced fibrosis of the right endocardium may occur, and some patients develop tricuspid valve and pulmonary trunk defects [4,29,34]. Skin symptoms of the carcinoid syndrome include characteristic "flushing" - a sudden paroxysmal redness of the skin of the face, neck and anterior surface of the chest. As a result of repeated, prolonged relapses, the skin lesions become fixed, and a bluish erythema with telangiectasias develops. Aggravating factors include alcohol, stress and some foods. If the primary tumour is located in the stomach, where ECL cells primarily produce histamine, the face becomes blue in the course of the flushing and sometimes the skin thickens, taking on the characteristics of a "lion face" [4,30,35]. Patients may also develop pellagra skin symptoms - erythema, xerosis, scaling, hyperkeratosis and pigmentation - caused by a deficiency of tryptophan, whose consumption for serotonin synthesis is high. Cases of "scleroderma-like features without Raynaud's phenomenon" have also been described. Some patients have dry skin and itching [4,30,35].
Neuroendocrine tumours of the skin - Merkel cell carcinoma

Merkel cell carcinoma (MCC), otherwise known as neuroendocrine or trabecular carcinoma, is a rare neuroendocrine cancer of the skin, characterized by an aggressive course and prone to local recurrence and metastases to regional lymph nodes and distant organs. The aetiology is not fully understood, and the risk factors include UV exposure, immunosuppression and polyomavirus infection (Merkel cell polyomavirus - MCPyV). It is usually diagnosed in elderly people over 50 years of age (mean age about 75 years) [36,37]. The most common locations of Merkel cell cancer include areas exposed to chronic UV exposure - mainly the head and neck and, less often, the limbs or trunk. It is usually a painless, dome-shaped, purple-blue or cherry-red nodule with a firm consistency. Sometimes the skin lesions take on an erythematous and infiltrated form. Rare clinical manifestations include the "giant variant", a mucosal form, ulcerative tumours and numerous nodular lesions. Merkel cell carcinoma is characterized by rapid growth [30,36,37]. The AEIOU acronym (asymptomatic, expanding rapidly, immune suppression, older than 50 years, and ultraviolet-exposed site) is used to describe the most common symptoms. On histopathological examination, the tumour is made up of small blue cells with blurred boundaries, evenly distributed or sometimes arranged in a trabecular pattern. The gold standard in treatment is surgical removal of the lesion [30,36,37].

Neuroendocrine tumours treatment

Patients diagnosed with metastatic disease are usually not eligible for curative surgery, since the disease has spread to other parts of the body. Systemic treatment can be administered to individuals who are not candidates for surgery, which can help alleviate symptoms as well as slow the growth of tumours. Evidence-based cancer treatment options include somatostatin analogues, mTOR inhibitors, tyrosine kinase (TK) inhibitors, peptide receptor radionuclide therapy (PRRT), chemotherapy, as well as cytoreductive techniques. There is, nevertheless, a growing demand for new treatments.

Somatostatin

Somatostatin (SST) is a neuropeptide that is released by paracrine cells located throughout the gastrointestinal tract and the brain. It works by binding to five G-protein-coupled receptors (SST receptors 1-5, SSTR1-5) [38]. It suppresses the release of numerous hormones, works as an immunological regulator, and acts as a neurotransmitter [39]. It also has cytotoxic and cytostatic properties and, under some conditions, may trigger apoptosis [40]. SSAs are usually well tolerated, with limited side effects, the most frequent being pain at the injection site and gastrointestinal side effects (abdominal pain, diarrhoea, nausea) [39]. Although many patients treated with SSAs have symptomatic improvement and tumour growth stabilization for varied periods, tumour regression is uncommon, and hence multimodal therapy techniques are required to further improve the clinical care of patients with advanced NETs [38,39].

Interferon (IFN) α

In the 1980s, IFN-α was approved for the treatment of NETs [41]. It acts through a variety of mechanisms on cell proliferation and differentiation [42]. IFN is reserved for patients who are resistant to, or are unable to tolerate, SSAs and other systemic medications; it is used in addition to SSAs for improved symptom control, or as a bridge therapy before initiating other treatments [43].
Telotristat ethyl

Telotristat ethyl is a new oral inhibitor of tryptophan hydroxylase, which is required for serotonin production. Based on the results of clinical trials [44,45], the United States Food and Drug Administration (FDA) recently approved telotristat ethyl (Xermelo, Lexicon Pharmaceuticals, Inc.) as the first and only oral treatment, in combination with SSAs, for adult patients with carcinoid syndrome-related diarrhoea who are not adequately controlled with SSA therapy alone.

Targeted therapies - mammalian (mechanistic) Target of Rapamycin (mTOR) inhibitors

mTOR is a protein kinase that controls cell growth, proliferation, and survival [46]. Many cancer models, including NETs, show abnormal over-activation of mTOR, and inhibition of mTOR by rapamycin and its analogues, such as everolimus (Afinitor, Novartis Oncology), has been shown to arrest tumour cell proliferation and slow tumour growth [46,47]. Everolimus was approved by the FDA for the treatment of pNETs and of non-functional progressive intestinal and lung NETs. Importantly, the combination of everolimus and SSAs is thought to have a synergistic effect and should be used in patients with progressive NETs [48]. Sunitinib maleate (Sutent®, Pfizer, Inc.) is a tyrosine kinase inhibitor (TKI) with anti-tumoral and antiangiogenic properties against several solid tumours. Sunitinib has been shown to be effective in both preclinical and clinical studies of pNETs [49].

Vascular endothelial growth factor inhibitor

Bevacizumab is a vascular endothelial growth factor (VEGF) inhibitor. In the treatment of GI-NETs, the combination of bevacizumab and capecitabine demonstrated clinical activity and a manageable safety profile that warrants validation in a randomised phase III trial [50].

Summary

In our article we wanted to draw attention to the characteristic skin symptoms occurring among patients suffering from neuroendocrine tumours. Precise examination of the patients, with particular emphasis on dermatological examination, may significantly accelerate the diagnosis of neuroendocrine tumours, allowing for early implementation of appropriate treatment.
Predicting the Short Term Outcome in Acute Organophosphorus Compound (OPC) Poisoning with Poison Severity Scale on Hospital Admission in Dhaka Medical College Hospital

Introduction: Organophosphorus compound (OPC) pesticide intoxication is estimated at 3 million cases per year worldwide, with approximately 300,000 deaths, mostly in the Asia-Pacific region. Severe organophosphorus pesticide poisoning is a major clinical problem in Bangladesh.

Objectives: To assess the short term outcome of OPC poisoning based on the Peradeniya Organophosphorus Poisoning (POP) scale soon after hospital admission to DMCH.

Study design: We evaluated the usefulness of the severity scale and the Glasgow coma score for predicting the outcome in an observational study.

Place of study: Medicine Ward of Dhaka Medical College Hospital, Dhaka.

Duration of study: July 2010 to December 2010.

Study population: All patients with acute OPC poisoning admitted to the Medicine ward of Dhaka Medical College Hospital, Dhaka.

Study methods: Suspected cases of OPC poisoning were enrolled and observed for at least 96 hours. Detailed histories and the clinical manifestations of all enrolled cases were recorded on a pre-designed case record form. All admitted cases were treated with the traditional antidotes atropine and pralidoxime, and other supportive treatment was also given.

Results: Fifty patients with OPC poisoning who attended the medicine ward and fulfilled the inclusion criteria were enrolled in the study. 88% of the study patients had used OPC for deliberate self-harm and 12% of cases were accidental. Common clinical presentations were miosis (90%), vomiting (80%), bradycardia (52%), abdominal cramp (42%), tachypnoea (34%), salivation (32%), altered consciousness (30%) and fasciculation (8%). 64% of patients recovered completely, 12% recovered with minor symptoms, 16% died and 8% remained in a life-threatening condition. In our study, according to the POP scale, 62.50% of fatal cases but only 11.76% of survivors were in the severe grade. In the moderate grade, non-fatal cases were 64.70% and fatal cases 37.50%. No fatal case was detected in the mild grade of the POP scale.

Conclusion: In our study 16% of patients died. OPC poisoning is one of the commonest causes of death in the medicine ward. The POP scale appears useful in assessing the severity of poisoning in terms of the management plan. Patients with evidence of moderate and severe degrees of poisoning need to be monitored closely.

Citation: Ahasan HAMN, Faruk AA, Bala CS, Minnat B (2017) Predicting the Short Term Outcome in Acute Organophosphorus Compound (OPC) Poisoning with Poison Severity Scale on Hospital Admission in Dhaka Medical College Hospital. J Toxicol Cur Res 1: 002.
Introduction

Poisoning is a common medical emergency. Self-inflicted violence accounts for around half of the 1.6 million deaths that occur every year worldwide [1]. Currently, self-poisoning with pesticides has become a major clinical problem in developing countries [2,3]. Bangladesh is a developing country of South Asia. The rural population of this country is mostly dependent on agricultural cultivation. With the passage of time, the limited availability of cultivable land has resulted in widespread use of insecticides such as organophosphorus compounds (OPCs) [4], and these are readily available as over-the-counter (OTC) products in village shops and act as a common agent for suicidal purposes after trivial family problems [5]. Industrialized countries are also affected, where a significant proportion of suicidal deaths are caused by pesticide ingestion [6,7].

Organophosphorus compound (OPC) intoxication is estimated at 3 million cases per year worldwide, with approximately 300,000 deaths, largely in the Asia-Pacific region [8]. The fatality rate following deliberate ingestion of OP pesticides in developing countries in Asia is approximately 20% and may reach 70% during certain seasons and at rural hospitals [9]. According to the 2009 annual report of the Department of Medicine, DMCH, Bangladesh, about 18% of all poisoning patients have organophosphorus pesticide poisoning [10].
The basic mechanism of the toxic effect of organophosphorus compounds is the inhibition of acetylcholinesterase at the nerve ending, resulting in accumulation of excess acetylcholine [4]. Most patients die from cardiorespiratory failure [11].

Assessment of Severity

A number of systems have been proposed for predicting outcome in organophosphorus compound (OPC) poisoning. Many are reliant on laboratory tests [12][13][14][15][16] and are, therefore, less useful in resource-poor locations. Others that use clinical parameters have only been validated using small numbers of patients.

The Peradeniya Organophosphorus Poisoning (POP) scale assesses the severity of the poisoning based on the symptoms at presentation and is simple to use. In a study by Senanayake et al., patients with a high score on the POP scale had a high rate of morbidity and mortality [17].

This study investigated whether it is possible to predict inpatient mortality in organophosphorus poisoning using a scoring system based on simple clinical parameters recorded only at admission. This might enable clinicians to identify patients at high risk of dying soon after presentation, allowing more intensive monitoring and treatment. A simple system based on clinical features is likely to be most useful in low income countries, where the majority of organophosphorus poisoning occurs [18].

There is no universal consensus regarding the diagnosis, grading of severity and management of this serious, life-threatening poisoning [19]. Different individualized spectra of management are in use around the world. Supportive care and specific antidotes are given and tuned to the individual patient. Psychological assessment and prevention strategies are lacking, and ICU facilities and logistic support for proper management are also not available in all health facilities.

The purpose of this hospital-based study was to see whether the POP scale could predict mortality in organophosphorus pesticide poisoning, using data collected prospectively on patients' admission to Dhaka Medical College Hospital in Bangladesh.

The degree of severity is dependent on the degree of inhibition of synaptic cholinesterase, which can be indirectly assessed by serum cholinesterase activity [20]. The most serious manifestation and the usual cause of death is respiratory failure, which results from weakness of the respiratory muscles and depression of the respiratory centre, aggravated by excessive bronchial secretion and bronchospasm.

Miosis is one of the most characteristic signs and is found in almost all moderately severely and severely poisoned patients. Miosis may persist even after death. Transient hyperglycaemia and glycosuria are found in severe OPC poisoning.

The Peradeniya Organophosphorus Poisoning (POP) scale [17], which is based on five cardinal manifestations of OPC poisoning (miosis, fasciculation, respiratory difficulty, bradycardia and impairment of consciousness), can be used to assess the severity of OPC poisoning at the bedside immediately after admission (Table 1).
Materials and Methods

The study was carried out in the Medicine Ward of Dhaka Medical College Hospital, Dhaka, from July 2010 to December 2010. Our objectives were to predict the outcome of OPC poisoning based on clinical parameters soon after hospital admission, to assess the severity of poisoning, and to describe the outcome of OPC poisoning in hospital-admitted patients.

All patients had a history of organophosphorus pesticide ingestion as stated by the patient or relatives, the transferring doctor, or the pesticide bottle. Any case of suspected OPC poisoning with clinical manifestations was included. Patients who refused to give consent voluntarily and patients who had taken more than one poison were excluded. A purposive, non-probability sampling technique was used in the study.

All data were collected using a pre-designed data sheet, through a detailed history from the patient's relatives and a complete physical examination. The short term outcome of all the cases was recorded. Statistical analysis was done with SPSS. Results are presented using the relevant variables in the form of tables, graphs, percentages, charts, etc. The frequency rates of the various findings were described and compared using statistical methods.

Results

Out of the 50 patients in our study, 36 (72%) were male and 14 (28%) were female, giving a male-female ratio of 2.57:1. The age distribution of the study patients was 15 to 50 years. The maximum number of patients was in the age group of 15 to 20 years (32%). Overall, 76% of patients were < 30 years and 24% were 30 years or above. Cultivators and students were the most common groups of patients in our study (28% and 25%, respectively), followed by housewives (16%), service holders (15%), businessmen (12%) and others (4%).

The GCS at admission was > 10 in 28 (82.35%) of the recovered patients, whereas among the fatal cases the GCS was < 10 in 12 (75%) patients. The POP grade was severe in 10 (62.50%) of the fatal patients but in only 4 (11.76%) of the recovered patients.

Relationship of Glasgow Coma Scale (GCS) with fatality rate

In this study, the GCS at admission was between 5 and 10 in 75% of fatal patients, compared with 17.64% of non-fatal patients. The GCS was > 10 in 25% of fatal patients and 82.35% of non-fatal patients (Figure 2).

Relationship of POP scale with outcome

Regarding the POP scale, in our study 10 (62.50%) of the fatal patients and only 4 (11.76%) of the non-fatal patients were in the severe grade. In the moderate grade of the POP scale, fatal cases numbered 6 (37.50%) and non-fatal 22 (64.70%). None of the fatal patients, but 8 (23.52%) of the non-fatal patients, were in the mild grade (Figure 3).

Outcome of the OPC poisoning

In this study, 64% were completely recovered, 12% recovered with minor symptoms, 8% were in a life-threatening condition and 16% died (Figure 4).

Discussion

The objective of the present study was to find out the severity of OPC poisoning and the short term outcome of OPC poisoning. We enrolled 50 diagnosed cases of OPC poisoning admitted to the medicine ward of Dhaka Medical College Hospital who fulfilled the inclusion criteria.

Our study shows that the ages of the patients with OPC poisoning were as follows: 32% were between 15-20 years, 28% between 20-25 years, 16% between 25-30 years and 24% above 30 years. This reveals that 76% of patients with OPC poisoning were below 30 years of age, reflecting that the incidence of OPC poisoning is more common among the young. Ahmed et al. [21] reported that the maximum incidence of OPC poisoning (88.3%) was in the 10-30 years age group. Faiz MA et al.
[22] reported that 76% of OPC poisonings were in the 11-30 years age group.

In this study, out of 50 patients, 36 (72%) were male and 14 (28%) were female, giving a male-female ratio of 2.57:1. Faiz MA et al. [22] reported a male-female ratio of 2.21:1, which is consistent with other studies done in Bangladesh. This may be because males are the main users of OP insecticides during cultivation. Out of 50 patients, 14 (28%) were cultivators, 10 (20%) were students and 10 (20%) were service holders; the other occupations were housewives, 8 (16%), and businessmen, 6 (12%). This is consistent with the study done at the National Academy of Medical Sciences, Kathmandu, Nepal, by Rehiman S et al. [23]. This may be due to the lack of awareness among farmers about OP intoxication, unemployment, conflicting relationships of young couples, and stressful life events in a developing country like Bangladesh. 64% of patients obtained the substance from stocks previously purchased for household use, and 28% purchased it themselves. The data suggest that readily available and widely used OPCs are among the common agents used for suicide. Out of 50 patients, 44 (88%) used OPC for deliberate self-harm and 12% of cases were accidental; no case was identified as homicidal. The study by Eddleston M et al. also showed similar data [18]. Out of the 50 patients, only 14% were known psychiatric patients.

Regarding the outcome of patients with OPC poisoning, 64% completely recovered, 8% were in a life-threatening condition and 16% died. In Bangladesh, Ahmed R et al. [21] reported a fatality rate of about 58.3%, and another study, by Faiz MA et al. [22], found a fatality rate of 16.7%. In Nepal, the study by Rehiman S et al. [23] showed that 14% of patients died. So OPC poisoning is still one of the commonest causes of death in the medicine ward. This may be due to delays in transporting patients across long distances to hospital, the paucity of health care workers compared with the large number of patients, the lack of training in the management of OPC poisoning, the high toxicity of locally available poisons, and the lack of logistic support, efficient manpower, ICU availability and poisoning centres.

In this study, 64.70% of patients stayed in hospital between 2-5 days, 23.52% < 2 days and 11.76% > 5 days. Among the fatal patients, 37.5% died on the 1st day of admission, 25% on the 4th day, and 12.5% each on the 2nd, 3rd and 5th days. Most of the fatal patients (37.5%) died on the 1st day of admission, most likely due to respiratory failure with severe grading at presentation and the lack of ICU availability. In this study, 43.75% of fatal patients were admitted to hospital more than 4 hours after ingestion of the poison, compared with only 11.76% of recovered patients; 31.25% of fatal patients and 17.64% of non-fatal patients were admitted between 2-4 hours; 25% of fatal patients and 41.17% of non-fatal patients were admitted between 1-2 hours. No fatal patient was admitted within 1 hour of poisoning, in comparison with 29.4% of non-fatal patients who were admitted within 1 hour of ingestion.

In our study, the GCS at admission was between 5-10 in 75.50% of fatal patients, compared with 17.64% of non-fatal patients, and > 10 in 25% of fatal patients and 82.35% of non-fatal patients. This study shows that the GCS on admission is able to predict fatality in patients with organophosphorus poisoning and is easy to apply clinically. In the study by Eddleston M [11], the GCS was used as a good indicator for predicting the outcome.
Regarding the POP scale in our study, 10 (62.50%) of the fatal patients and only 4 (11.76%) of the non-fatal patients were in the severe grade. In the moderate grade of the POP scale, fatal cases numbered 6 (37.50%) and non-fatal 22 (64.70%). None of the fatal patients, but 8 (23.52%) of the non-fatal patients, were in the mild grade of the POP scale. The study by Senanayake N et al. [17] showed that patients graded as severely intoxicated had an unfavourable outcome when compared to those graded as mildly or moderately intoxicated, indicating that the POP scale is useful for grading the severity of organophosphorus compound intoxication at first contact and helps in predicting the possible outcome. In our study, 62.50% of fatal cases were in the severe grade and 37.50% in the moderate grade, indicating that even patients with a moderate grade of poisoning died. So patients with evidence of moderate and severe degrees of poisoning need close monitoring.

Conclusion

Organophosphorus compounds are widely used in agricultural and industrial areas and as domestic insecticide agents. They are a leading cause of morbidity and mortality due to poisoning, especially in an agriculture-based developing country like Bangladesh. In our study, 16% of patients died. Early diagnosis and treatment are essential to reduce the mortality and morbidity from this lethal compound. Resources for laboratory estimation of blood cholinesterase and organophosphorus levels are not available in most areas of developing countries. The POP scale appeared to be very useful for assessing the severity of OPC poisoning in our study, with the severe grade showing higher fatality than the mild grade. The moderate grade of the POP scale also shows significant fatality. So patients with moderate and severe grades on the POP scale need close monitoring to reduce mortality.

Figure 1: Bar chart showing clinical presentations of OPC poisoning.
Figure 3: Relationship of POP scale grading (Y axis) with fatality rate (X axis).
Figure 4: Bar chart showing the outcome of OPC poisoning patients.
CD73 blockade enhances the local and abscopal effects of radiotherapy in a murine rectal cancer model

The anti-tumor effects of radiation therapy (RT) largely depend on host immune function. Adenosine, with its strong immunosuppressive properties, is an important immune checkpoint molecule. We examined how intra-tumoral adenosine levels modify the anti-tumor effects of RT in a murine model using an anti-CD73 antibody, which blocks the rate-limiting enzyme in the production of extracellular adenosine. We also evaluated CD73 expression in irradiated human rectal cancer tissue. LuM-1, a highly metastatic murine colon cancer, expresses CD73, with significantly enhanced expression after RT. Subcutaneous (sc) transfer of LuM-1 into Balb/c mice developed macroscopic sc tumors and microscopic pulmonary metastases within 2 weeks. Adenosine levels in the sc tumor were increased after RT. Selective RT (4 Gy × 3) suppressed the growth of the irradiated sc tumor but did not affect the growth of lung metastases, which were shielded from RT. Intraperitoneal administration of anti-CD73 antibody (200 μg × 6) alone did not produce anti-tumor effects. However, when combined with RT in the same protocol, the anti-CD73 antibody further delayed the growth of sc tumors and suppressed the development of lung metastases, presumably through abscopal effects. Splenocytes derived from RT + anti-CD73 antibody treated mice showed enhanced IFN-γ production and cytotoxicity against LuM-1 compared to controls. Immunohistochemical studies of irradiated human rectal cancer showed that high expression of CD73 in remnant tumor cells and/or stroma is significantly associated with worse outcome. These results suggest that adenosine plays an important role in the anti-tumor effects mediated by RT and that CD73/adenosine axis blockade may enhance the anti-tumor effect of RT and improve the outcomes of patients with locally advanced rectal cancer.

Background

Neoadjuvant radiation therapy (RT) can down-stage locally advanced rectal cancer (RC), which results in a lower rate of postoperative local recurrence [1,2], and is now considered standard treatment for locally advanced RC worldwide. Recent studies have shown that combining RT with fluorouracil-based chemotherapy results in further improved locoregional control without a significant increase in side effects [3,4]. More recently, other radiosensitizers have been used in clinical trials to improve the efficacy and tolerability of RT. Although direct cytotoxicity via DNA double-strand breaks or the induction of apoptosis has been considered the main mechanism, a reduction in tumor size is also strongly dependent on host immune responses [5,6]. In general, it is believed that RT induces transient immunosuppression. However, multiple reports have suggested that tumor cells which are dead or dying due to RT can present tumor-associated antigens to host immune cells and thereby evoke innate and adaptive immune responses [7,8]. This not only increases the cytotoxic effect on tumor cells directly exposed to RT but can also cause regression of tumors outside the irradiated field, the so-called "abscopal effect" [9,10]. With the recent remarkable progress in the understanding of immune checkpoint molecules, many studies have been performed to evaluate the efficacy of combined RT and immunotherapy. Pre-clinical studies have demonstrated that the anti-tumor effects of RT are further enhanced by the concurrent administration of antibodies to CTLA-4 and PD-1 [10][11][12].
Clinical trials have suggested synergistic effects between RT and recently approved antibody preparations against PD-1 and CTLA-4 [13,14]. In other clinical studies, however, benefits from combined modality therapy have not been confirmed [15,16]. Therefore, the optimal dose and fractionation of RT, as well as the nature of the agents used to optimize the response to RT, remain to be elucidated. Adenosine is an important endogenous regulator of the innate and adaptive immune systems. Adenosine strongly suppresses immune cells, mainly through the A2A receptor, and plays a critical role in the maintenance of homeostasis in various tissues [17,18]. Adenosine is either released from stressed or injured cells or generated from extracellular adenine nucleotides (ATP (adenosine triphosphate), ADP (adenosine diphosphate) and AMP (adenosine monophosphate)) by the concerted action of the ectoenzymes ectoapyrase (CD39) and 5′-ectonucleotidase (CD73). CD39 catalyzes the hydrolysis of ATP/ADP to AMP and CD73 converts AMP to adenosine; the CD73-mediated conversion is considered the rate-limiting step in adenosine production [19,20]. ATP is one of the damage-associated molecular patterns (DAMPs) that function as immunostimulatory signals [21]. Since adenosine, in contrast, exerts strong immunosuppressive functions, the balance between ATP and adenosine is believed to be crucial for the local immune response [18,22]. Malignant cells often express CD73, and high CD73 expression in tumor tissue has been linked to poor clinical outcomes [23][24,25], suggesting that adenosine produced by the enzymatic activity of CD73 promotes metastases and survival of tumor cells through immunosuppression. In fact, many pre-clinical studies have shown that inhibition of the CD73/adenosine axis can inhibit tumor progression [26][27][28]. These results suggest that modulation of adenosine levels in the tumor microenvironment can be a novel therapeutic strategy to suppress tumor growth [29,30]. In this study, we examined the role of the CD73/adenosine axis in the tumor response to local RT using a murine model of spontaneous lung metastases and tissue samples from patients with RC.

Cell culture and animal experiments
LuM-1, a highly metastatic sub-clone of the murine colon cancer colon26 [31], was kindly provided by Dr. Oguri, Aichi Cancer Center, Japan, and maintained in DMEM supplemented with 10% FCS, 100 U/mL penicillin and 100 μg/mL streptomycin (Sigma-Aldrich, St. Louis, MO, USA). After achieving > 80% confluence, cells were detached by treatment with 0.25% (w/v) trypsin solution containing 0.04% (w/v) EDTA and then used. The cultured cells were tested with a Mycoplasma Detection Kit (R&D Systems, Minneapolis, MN, USA) every 3 months, and cells at passages 3 to 5 were used for experiments. Female Balb/c mice aged 7-8 weeks were purchased from CLEA Japan (Shizuoka, Japan) and housed under specific pathogen-free (SPF) conditions. LuM-1 cells (1 × 10^6) were subcutaneously injected into the right flank of 8-9-week-old female Balb/c mice. When the primary tumors reached a volume of 100 to 150 mm^3 at day 12, the mice were divided into groups of 5-8 mice each to enable statistical evaluation. Local RT was delivered using an MX-160 Labo (mediXtec, Chiba, Japan), as described previously [32]. In short, anesthetized mice were held in the decubitus position, and X-ray irradiation was delivered only to the subcutaneous (sc) tumor, with the remainder of the body of the mouse, including the lung, covered with a 5 mm lead plate.
We confirmed the effectiveness of shielding by this method. Mice received 3 fractions of 4 Gy every other day (days 12, 14, 16). For immunotherapy, mice received intraperitoneal injections of 200 μg anti-CD73 mAb or rat IgG2a isotype control on days 12, 14, 16, 19, 22 and 25. All of the mice were sacrificed by cervical dislocation on day 28, and the weight of the sc tumor and the number of macroscopic metastatic nodules in the lung were evaluated. All procedures were approved by the Animal Care Committee of Jichi Medical University (No 17005-02) and performed according to the Japanese Guidelines for Animal Research.

Flow cytometry
LuM-1 cells were cultured at a density of 1 × 10^6 cells/10 cm dish, given 10 Gy RT with the MX-160 Labo, and incubated for an additional 24 h. The cells were harvested, incubated with 10 μl FcR blocking reagent for 10 min at 4°C, and then incubated with PE-conjugated anti-CD39 and APC-conjugated anti-CD73 mAbs for 30 min at a final concentration of 2.5 μg/mL. After washing twice with staining buffer, the cells were incubated with 7-AAD for 15 min on ice, and staining intensity was analyzed in the 7-AAD(−) live cell population using a FACSCalibur (BD Bioscience, Franklin Lakes, NJ, USA). For in vivo experiments, LuM-1 cells (1 × 10^6) were subcutaneously injected into Balb/c mice and treated with 2 fractions of 4 Gy RT as described above. Two days later, tumors were excised and digested using the Tumor Dissociation Kit, mouse (Miltenyi Biotec) with gentleMACS Dissociators (Miltenyi Biotec). After lysis of red blood cells (RBC) with RBC lysis buffer (BioLegend), cells were passed through a 40-μm filter, single cell suspensions were stained with APC-conjugated anti-CD73 mAb and PE-conjugated anti-CD45 mAb, and the expression level of CD73 was analyzed in the live tumor cell population defined by the 7-AAD(−) CD45(−) gate.

Quantification of adenosine levels in tumor tissue
Quantitative analysis of adenosine, AMP and inosine was performed using an LC-MS system consisting of a Nexera X2, an LCMS-8060 and an LC/MS/MS Method Package for Primary Metabolite Version 2 (Shimadzu Corp, Kyoto, Japan), as described previously [33]. In brief, sc tumors irradiated as described above (4 Gy × 2) were resected at 12, 24 and 48 h after treatment and dissociated. Chromatographic separation was performed at 40°C on a Discovery® HS F5-3 column (150 × 2.1 mm, 3 μm; Sigma-Aldrich) at a flow rate of 0.25 mL/min. Gradient elution was performed with mobile phase A consisting of 0.1% formic acid in water and mobile phase B consisting of 0.1% formic acid in acetonitrile. The mobile phase B concentration was programmed as follows: 0% (0 min) - 0% (2.0 min) - 25% (5.0 min) - 35% (11 min) - 95% (15 min) - 95% (20 min) - 0% (20.1 min). Nitrogen was used as the nebulizer and drying gas at flow rates of 3.0 and 10 L/min, respectively. Dry air for the heating gas was delivered at 10 L/min. Collision-induced dissociation (CID) was conducted with argon gas (purity > 99.9995%). Interface, heat block, and desolvation-line temperatures were set at 300, 400, and 250°C, respectively. Multiple reaction monitoring (MRM) was used for quantification.

Immunohistochemistry of patient samples
Between 2008 and 2015, 64 patients with locally advanced RC received neoadjuvant chemoradiotherapy (CRT) in the Department of Surgery, Division of Gastroenterological, General and Transplant Surgery, Jichi Medical University Hospital. Patients were treated with long-course RT (a dose of 50.4 Gy in 25 fractions) using 4-field box techniques.
Some patients received concurrent chemotherapy with oral UFT or S-1. Radical resections were performed 8-10 weeks after the end of CRT. The excised tumors were immediately fixed in 10% buffered formalin, and consecutive formalin-fixed, paraffin-embedded 4-μm sections were prepared for immunohistochemical evaluation. After treatment with xylene and ethanol and washing with phosphate-buffered saline (PBS), tumor specimens were subjected to heat-induced antigen retrieval in citrate buffer (Muto Pure Chemicals Co., Ltd., Tokyo, Japan), followed by endogenous peroxidase blocking with Peroxidase-Blocking Solution (DAKO, Santa Clara, CA, USA). The tissues were washed with PBS and incubated with 5% bovine serum albumin for 30 min to block nonspecific antibody binding. The slides were then incubated with a monoclonal antibody against CD73 (D7F9A, rabbit IgG, Cell Signaling Technology, Danvers, MA, USA) at a dilution of 1:200 in humid chambers overnight at 4°C. After three 5-min washes with PBS, sections were incubated with peroxidase-conjugated anti-rabbit secondary antibody for 30 min at room temperature. After washing, the enzyme substrate 3,3′-diaminobenzidine (Dako REAL EnVision Detection System, DAKO) was used for visualization, and sections were counterstained with Mayer's hematoxylin. Staining intensities in remnant tumor cells or stroma were independently scored from 0 to 3 (Fig. S6) by two different evaluators who were unaware of the clinical findings, and the cases were divided into high (mean score ≥ 2) and low (mean score < 2) expression groups based on the mean score of the two evaluators. The study protocol was approved by the institutional review board of Jichi Medical University (Rin A17-164) and conducted in accordance with the guiding principles of the Declaration of Helsinki. Written informed consent was obtained from all participants.

[Fig. 1 legend, panels b and c: (b) The cells were stained with mAbs to CD73 and CD45, and the MFI for CD73 was analyzed in live tumor cells defined by the 7-AAD(−) CD45(−) gated area. P values were calculated with one-way ANOVA followed by Tukey's test. (c) Two fractions of 4 Gy RT were delivered to sc tumors as described above, which were removed at 12, 24 and 48 h after RT. Levels of AMP (adenosine monophosphate), adenosine, and inosine in those samples were measured with the LC-MS system (Shimadzu Corp) as described in Materials and Methods. The Y axis shows the height ratio between 2-morpholino-ethanesulfonic acid (2-ME), used as an internal standard, and the target molecules. P values were calculated with one-way ANOVA followed by Dunnett's test; * indicates p < 0.05.]

Statistical analysis
Data are presented as means ± SEM or median (min-max). Statistical differences were analyzed by Student's t-test, the Mann-Whitney test, one-way ANOVA with post hoc Tukey's or Dunnett's procedure, the χ2-test or Fisher's exact test, and p values less than 0.05 were considered significant. Recurrence-free survival (RFS) and overall survival (OS) rates were calculated using the Kaplan-Meier method, and differences were evaluated using the log-rank test. Uni- and multivariate analyses were performed using the Cox proportional hazards model to evaluate predictors of prognosis. Statistical analysis was performed using GraphPad Prism 7 software (GraphPad Software Inc., San Diego, CA, USA) or IBM SPSS Statistics 21 (IBM, Chicago, IL, USA).
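The statistical workflow above maps onto standard open-source tooling. As an illustration only (the authors used GraphPad Prism and SPSS), a minimal Python sketch of the two-group and multi-group comparisons might look like the following; the variable names and example data are hypothetical:

```python
import numpy as np
from scipy.stats import mannwhitneyu, f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical tumor weights (g) for two treatment arms
rt_cd73 = np.array([1.1, 1.4, 0.9, 1.6, 1.2])
rt_iso = np.array([2.3, 2.8, 2.1, 3.0, 2.6])

# Two-group comparison: Mann-Whitney U test (as used for Fig. 3)
u, p = mannwhitneyu(rt_cd73, rt_iso, alternative="two-sided")
print(f"Mann-Whitney U = {u}, p = {p:.3f}")

# Multi-group comparison: one-way ANOVA followed by Tukey's post hoc test
control = np.array([4.5, 4.9, 4.2, 5.1, 4.7])
f, p_anova = f_oneway(control, rt_iso, rt_cd73)
print(f"ANOVA F = {f:.2f}, p = {p_anova:.3f}")

values = np.concatenate([control, rt_iso, rt_cd73])
groups = ["control"] * 5 + ["RT+isotype"] * 5 + ["RT+antiCD73"] * 5
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```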
RT enhances the expression of CD73 by LuM-1 cells and increases adenosine levels in subcutaneous tumors
The expression of CD39 and CD73 in cultured LuM-1 cells was examined by flow cytometry. CD39 was scarcely expressed on LuM-1 cells and was not changed after RT (Fig. 1a, left panel). In comparison, LuM-1 cells expressed CD73, and its expression level was further enhanced 24 h after treatment with 10 Gy RT (Fig. 1a, right panel and Fig. S1A). RT (4 Gy × 2) was given to sc LuM-1 tumors implanted in syngeneic mice, and CD73 expression in the LuM-1 cells recovered from resected tumors was evaluated by mean fluorescence intensity (MFI) in CD45(−) tumor cells. Consistent with the in vitro results, the CD73 expression level in LuM-1 cells was significantly enhanced compared with non-irradiated controls (Fig. 1b). MFI in CD45(+) cells did not show a significant difference (Fig. S1B). The levels of adenosine, as well as its precursor AMP and its metabolite inosine, in irradiated sc tumors were examined using an LC-MS system. As evaluated by the peak height ratio against the internal standard, adenosine levels in tumors were significantly increased at 24 h after 2 cycles of RT, and inosine levels were significantly increased at 48 h after RT (Fig. 1c).

Anti-CD73 mAb combined with RT suppresses non-irradiated lung metastases as well as the irradiated tumor
In a preliminary experiment, we confirmed that all mice developed sc tumors with micrometastases in both lungs at 12 days after subcutaneous injection of LuM-1 cells, although all mice were healthy apart from the apparent sc tumor. When local RT (4 Gy × 3) was delivered selectively to sc tumors after 12 days, the weight of the sc tumor at day 28 was significantly reduced (2.5 ± 0.61 g vs 4.8 ± 0.61 g, p < 0.05, n = 5), while the number of lung metastases was not altered. Treatment with anti-CD73 mAb alone did not show a significant difference from isotype control for the sc tumor or the lung metastases (Fig. 2). However, when RT was delivered to the sc tumor together with administration of anti-CD73 mAb or isotype control, the growth of the sc tumor was significantly delayed in mice treated with anti-CD73 mAb (p < 0.05 at day 18 and later), and tumor volume at day 28 was reduced to about 50% (Fig. 3a, b). Moreover, the number of lung metastases was significantly reduced in anti-CD73 mAb-treated mice (median 1, range 0-30 vs median 12, range 1-70; p = 0.04, n = 8). No metastases were observed in 4/8 mice treated with RT + anti-CD73 mAb, although metastases developed in all mice in the control group (Fig. 3c, d). The same trend was observed in 2 different experiments with 2 cycles of RT, although the differences were not statistically significant (Fig. S2).

[Fig. 2 legend: Anti-tumor effects of radiation therapy (RT) or anti-CD73 antibody used alone. Tumor-bearing mice received local RT to subcutaneous (sc) tumors (3 fractions of 4 Gy) as described above on days 12, 14, 16 and intraperitoneal injections of 200 μg anti-CD73 mAb or rat IgG2a isotype control on days 12, 14, 16, 19, 22 and 25. All of the mice were sacrificed on day 28, and the weight of sc tumors and the number of macroscopic metastatic nodules in the lungs were evaluated. P values were calculated with one-way ANOVA followed by Tukey's test; * indicates p < 0.05.]

Anti-CD73 mAb combined with RT enhances the systemic immune response
We then examined lymphocyte populations in the spleens and infiltrating lymphocytes in sc tumors of tumor-bearing mice.
The ratios of CD4(+) or CD8a(+) T cells and CD11b(+) Gr-1(+) myeloid-derived suppressor cells were not altered between the anti-CD73 mAb-treated and isotype control groups (Fig. S3, S4). However, as shown in Fig. 4a and b, intracellular staining showed that IFN-γ-producing cells were significantly increased among CD4(+) and CD8a(+) T cells in anti-CD73 mAb-treated mice (CD4: 10.8 ± 1.2% vs 4.7 ± 1.6%, p < 0.05, n = 6; CD8a: 16.2 ± 1.7% vs 6.9 ± 2.3%, p < 0.05, n = 6). Moreover, infiltrating lymphocytes in the sc tumor showed the same trend, with statistical significance in the CD4(+) population (Fig. S5).

[Fig. 3 legend: Anti-tumor effects of anti-CD73 antibody combined with radiation therapy (RT). Tumor-bearing mice received local RT together with immunotherapy using the same protocol shown in Fig. 2. The growth of subcutaneous (sc) tumors was evaluated by their volume, calculated as length × width^2 / 2 (b). All mice were sacrificed on day 28, and the volume of the sc tumor (a) as well as the number of macroscopic metastatic nodules in the lungs (c, d) were counted. P values were calculated with the Mann-Whitney test; * indicates p < 0.05.]

[Fig. 4 legend: Effects of anti-CD73 antibody on splenocytes of irradiated tumor-bearing mice. Anti-CD73 mAb enhances IFN-γ production and cytotoxicity of splenocytes from irradiated mice. (a, b) Tumor-bearing mice received 3 fractions of 4 Gy local radiation therapy (RT) together with intraperitoneal injections of 200 μg anti-CD73 mAb or rat IgG2a isotype control on days 12, 14, 16, and were sacrificed on day 18. The splenocytes were cultured in RPMI-1640 + 10% FCS in the presence of brefeldin A and then fixed, permeabilized and stained with PE-conjugated anti-IFN-γ or isotype control, APC-conjugated anti-CD3, BV421-conjugated anti-CD4 and FITC-conjugated anti-CD8 mAbs, as well as FVS780 for dead cell exclusion. The ratios of IFN-γ-positive cells were calculated in the CD3(+) CD4(+) or CD3(+) CD8a(+) gated areas. (c) The splenocytes were cultured with irradiated LuM-1 in 2 ml 10% FCS + RPMI-1640 medium supplemented with 20 ng/ml mouse recombinant IL-2 for 12 days, and then incubated with LuM-1 cells at an E/T ratio of 20:1. After 4 h incubation, all cells were stained with FITC-conjugated Annexin V, 7-AAD and APC-conjugated anti-CD45 mAb, and the ratios of 7-AAD-positive dead cells were calculated in the LuM-1 population defined by the FSC/SSC and CD45(−) gated area. P values were calculated with the Mann-Whitney test; * indicates p < 0.05.]

Expression of CD73 in tumor cells or stroma correlates with the outcomes of patients who received neoadjuvant RT
The expression of CD73 in 64 surgically resected specimens from patients with RC who had received neoadjuvant CRT was evaluated immunohistochemically, and the outcomes of these patients were assessed with regard to CD73 expression. As shown in Fig. 5, remnant cancer cells and stroma stained positive for CD73, and the staining pattern was highly variable among patients. Therefore, we separately evaluated the staining intensity in remnant tumor cells and stroma (Fig. S6) and divided the cases into high and low expression groups (Fig. 5 and Table 1). The CD73 expression level did not show a significant correlation with clinical or pathological findings, including pathological response (Table 1). However, recurrence at distant sites tended to be observed more frequently in patients with tumors expressing higher CD73 (Table 1).
Accordingly, patients with tumors showing high CD73 expression in either remnant tumor cells or stroma tended to have shorter RFS and OS compared to patients with low CD73 expression (Fig. 6). In particular, the 13 patients whose tumors highly expressed CD73 in both remnant tumor cells and stroma showed markedly worse outcomes than the other 51 patients (p = 0.0059), with a mean RFS of 22 months (Fig. 6, right panels). In the univariate analysis, high CD73 expression in both remnant tumor cells and stroma was significantly associated with worse prognosis (Table S1). In the multivariate analysis, high CD73 expression in both remnant tumor cells and stroma remained an independent predictor of RFS and OS (Table S1).

Discussion
RT has been widely used for the treatment of solid tumors, either with curative intent or as palliative treatment. Recent clinical [13,14] as well as pre-clinical [10][11][12] studies have suggested that tumor responses to RT are significantly enhanced by combination with immune checkpoint blockade. Adenosine has strong immunosuppressive properties and is now considered an important "metabolic immune checkpoint molecule" [22,34]. Inhibition of the CD73/adenosine axis has attracted attention as a novel form of immunotherapy that could be combined with RT [35,36]. However, it is unclear how the modulation of adenosine levels affects the outcome of RT.

[Table 1 footnote: CD73 expression in tumor cells could not be appropriately evaluated in 1 patient with a grade 2 response, due to few remaining tumor cells, or in 3 patients with grade 3 responses (pathological complete response). Statistical significance of the differences was evaluated by Student's t-test, the Mann-Whitney test, the χ2-test and Fisher's exact test.]

In this study, we found that CD73 is significantly expressed on a highly metastatic clone of colon26, LuM-1, and was further upregulated by irradiation both in vitro and in vivo. Previous studies have shown that CD73 gene expression is enhanced by hypoxia [37] and proinflammatory cytokines [38], which are often associated with RT. RT has been shown to upregulate CD73 expression in esophageal [39] and bladder cancer [40] cells as well as in immune cells [41], which is consistent with the present results. Since large amounts of adenosine precursors are expected to be released into the extracellular space from damaged cells after RT, it is possible that upregulation of CD73 causes large amounts of adenosine to accumulate in irradiated tumor tissue. Accurate quantification of tissue adenosine levels is challenging because of its low molecular weight, high polarity and short half-life due to enzymatic degradation [42]. Previous studies using reversed-phase high pressure liquid chromatography showed that extracellular adenosine levels in solid tumors were 50-100 μM, higher than those in normal tissue and sufficient to suppress local anti-tumor immune responses [43,44]. In this study, we used the LC-MS method, which has superior sensitivity and selectivity compared with conventional liquid chromatography [45], and found that adenosine levels in sc LuM-1 tumors are significantly elevated 24 h after RT. To the best of our knowledge, this is the first report to directly evaluate changes in adenosine levels in irradiated tumors. Levels of inosine, a stable metabolite of adenosine, were increased at a later time.
These results suggest that adenosine levels in the microenvironment of irradiated tumors are maintained at considerably high levels, at least for hours, which may attenuate the anti-tumor immune response elicited by RT. In this study, RT (4 Gy × 3) delayed the growth of sc LuM-1 tumors, while anti-CD73 antibody did not show anti-tumor effects when used alone. However, when combined with RT, antibody administration further suppressed the growth of irradiated tumors compared with tumor growth in isotype control-treated mice. Anti-CD73 antibody significantly reduced the number of metastases in the lungs, which had not been irradiated. No metastases were observed in the lungs of 50% of mice treated with anti-CD73 together with RT. Since microscopic metastases already existed in the lungs at the time of treatment, this suggests that the combination of RT and anti-CD73 antibody evokes a systemic immune response that eliminated tumor cells in the lung. Splenocytes from mice treated with RT and anti-CD73 antibody had an increased ability to produce IFN-γ and enhanced cytotoxicity against autologous LuM-1 in vitro. These results suggest that anti-CD73 antibody can induce abscopal effects of RT, which might be partially attributed to T cells stimulated by RT-induced tumor-associated antigens.

[Fig. 6 legend: Impact of CD73 expression on the outcome of 64 patients who received preoperative radiation therapy for locally advanced rectal cancer. Patients were divided into CD73 high and low expression groups based on remnant tumor cells (left panels) or stroma (middle panels), as well as high in both areas versus others (right panels), and recurrence-free survival (RFS; upper panels) and overall survival (OS; lower panels) were compared with the Kaplan-Meier method. P values were calculated by the log-rank test; * indicates p < 0.05.]

CD73 is a multifunctional molecule expressed on various cells. Previous studies have shown that CD73 on tumor cells can mediate proliferation and migration apart from its enzymatic activity, and that blocking CD73 can suppress tumor growth [46,47]. In other studies, CD73 has been shown to contribute to the process of angiogenesis via both its enzymatic and non-enzymatic functions [48,49]. These results suggest that CD73 blockade may suppress the growth of lung metastases through mechanisms unrelated to immunity. In this study, however, this seems unlikely, because anti-CD73 mAb used alone did not significantly inhibit lung metastases in vivo. In fact, in vitro proliferation and migration of LuM-1 cells were not affected by CD73 mAb treatment (data not shown). Immunostaining experiments showed that CD73 was expressed in remnant tumor cells and/or stroma in surgically resected human RC after CRT. Although the expression pattern differed among patients, high expression of CD73 was associated with poor prognosis and a higher incidence of distant recurrence, which is consistent with previous studies of non-irradiated tumors [23][24,25]. This might be partially caused by the concurrent chemotherapy, since chemotherapy induced CD73 expression in epithelial ovarian cancer and CD73 blockade improved the therapeutic efficacy [50]. However, together with the results of the murine experiments, these findings suggest that increased adenosine levels, driven by enhanced CD73 in irradiated tumor tissue, may impair systemic immune responses, which might be causally related to the growth of micrometastases in distant organs in humans.
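The Kaplan-Meier and log-rank comparisons described above can be reproduced with common open-source libraries. As an illustration only (not the authors' code), here is a minimal Python sketch using the lifelines library; the data frame, its column names and the numbers in it are hypothetical:

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical per-patient data: follow-up (months), recurrence event flag,
# and a CD73 grouping analogous to "high in both tumor cells and stroma".
df = pd.DataFrame({
    "months":   [22, 35, 12, 48, 60, 18, 54, 40, 9, 30],
    "relapsed": [1, 0, 1, 0, 0, 1, 0, 0, 1, 1],
    "cd73_high_both": [True, False, True, False, False,
                       True, False, False, True, True],
})

high = df[df["cd73_high_both"]]
low = df[~df["cd73_high_both"]]

# Kaplan-Meier estimate of recurrence-free survival for one group
kmf = KaplanMeierFitter()
kmf.fit(high["months"], event_observed=high["relapsed"], label="CD73 high (both)")
print(kmf.survival_function_)

# Log-rank test between the two groups
result = logrank_test(high["months"], low["months"],
                      event_observed_A=high["relapsed"],
                      event_observed_B=low["relapsed"])
print(f"log-rank p = {result.p_value:.4f}")
```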
There is growing evidence that RT can result in in situ tumor vaccination by exposing tumor-specific neoantigens to the host innate immune system, and thus radio-immunotherapy may prove an effective novel therapy for patients with advanced cancer. However, there are still major challenges in understanding the dual face of RT-induced effects on the immune system. This is the first report to suggest that the anti-tumor response may be reduced by adenosine in the irradiated tumor and that it can be restored by functional blockade of CD73. An anti-CD73 mAb has already been used in a phase 1 clinical trial (NCT02503774) [51]. The results of the present study encourage the clinical evaluation of anti-CD73 mAb combined with RT as a promising preoperative treatment for patients with locally advanced RC.

Conclusion
After local RT, adenosine levels in the irradiated tumor are considerably elevated, which may reduce the anti-tumor effects mediated by RT through the induction of immunosuppression. Combination with CD73/adenosine axis blockade may enhance the local and abscopal effects of RT and improve the outcomes of patients with locally advanced rectal cancer.

Additional file 1:
Figure S1. (A) Cultured LuM-1 cells were treated with or without 10 Gy RT using the MX-160 Labo (mediXtec) and incubated for an additional 24 h. The cells were stained with anti-CD73 mAb, and the MFI in the 7-AAD(−) live cell population was examined by FACS. Data from 5 different experiments are shown. (B) Two fractions of 4 Gy RT were delivered selectively to sc tumors of LuM-1, with the remainder of the Balb/c mice shielded by a lead plate. Two days later, tumors were resected and single cell suspensions were obtained using a Tumor Dissociation Kit. The cells were stained with mAbs to CD73 and CD45, and the MFI for CD73 was analyzed in live tumor cells defined by the 7-AAD(−) CD45(−) gated area. P values were calculated with one-way ANOVA followed by Tukey's test.
Figure S2. Tumor-bearing mice received local RT to sc tumors (2 fractions of 4 Gy) on days 14, 16 and/or intraperitoneal injections of 200 μg anti-CD73 mAb or rat IgG2a isotype control on days 16, 19, 22 and 25. The growth of sc tumors was evaluated by volume, calculated as length × width^2 / 2, and the number of lung metastases was counted. P values were calculated with ANOVA with Tukey's test.
Figure S3. Tumor-bearing mice were treated as in Fig. 3 and sacrificed on day 18. Their splenocytes were stained with mAbs to CD3, CD4, CD8a, CD11b and Ly-6G/Gr-1 together with FVS780, and positive cells were calculated in the FVS780(−) live cell population.
Figure S4. Tumor-bearing mice were treated as in Fig. 3 and sacrificed on day 18; the sc tumors were dissociated with a cell dissociation kit, and the cells recovered from each tumor were stained with mAbs to CD45, CD3, CD4, CD8a, CD11b and Ly-6G/Gr-1 together with FVS780, and positive cells were calculated in the FVS780(−) CD45(+) live cell population.
Figure S5. Tumor-infiltrating cells were cultured in RPMI-1640 + 10% FCS in the presence of brefeldin A and then fixed, permeabilized and stained with PE-conjugated anti-IFN-γ or isotype control, APC-conjugated anti-CD3, BV421-conjugated anti-CD4 and FITC-conjugated anti-CD8a mAbs, as well as FVS780 for dead cell exclusion. The ratios of IFN-γ-positive cells were calculated in the CD3(+) CD4(+) or CD3(+) CD8a(+) gated areas. P values were calculated with the Mann-Whitney test.
Figure S6. Classification of high and low expression of CD73.
Staining intensities of CD73 were evaluated separately in remnant tumor cells and stroma, scored on a 0-3 scale (0, 1 [+], 2 [++], 3 [+++]).
Table S1. Univariate and multivariate analyses of the correlation between clinicopathological variables and outcomes.
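The high/low grouping rule described in the Methods (the mean of two blinded evaluators' 0-3 scores, with a cutoff of 2) is simple enough to express directly. This is an illustrative sketch only, not the authors' code; the function name and signature are hypothetical:

```python
def classify_cd73(score_eval1: int, score_eval2: int, cutoff: float = 2.0) -> str:
    """Classify CD73 staining as 'high' or 'low'.

    Each evaluator scores intensity on a 0-3 scale; the case is called
    'high' when the mean of the two scores reaches the cutoff (2 by
    default), mirroring the grouping rule described in the Methods.
    """
    for s in (score_eval1, score_eval2):
        if not 0 <= s <= 3:
            raise ValueError("scores must lie on the 0-3 scale")
    mean_score = (score_eval1 + score_eval2) / 2
    return "high" if mean_score >= cutoff else "low"

# Example: evaluators disagree (2 vs 1) -> mean 1.5 -> 'low'
print(classify_cd73(2, 1))
```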
Amenability and Weak Amenability of the Semigroup Algebra

Introduction
Let S be a semigroup and T a left multiplier on S. We present a general method of defining a new product on S which makes S a semigroup. Let S_T denote S with the new product. These two semigroups are sometimes different, and we try to find conditions on S and T such that the semigroups S and S_T have the same properties. This idea was started by Birtel in [1] for Banach algebras and continued by Larsen in [11]. Recently, this notion has been developed by several authors; for more details see [10], [12] and [15]. One of the best results in this direction states that the induced algebra is Arens regular if and only if the underlying group is compact [10]. We continue this direction for the regularity of S and S_T and for the amenability of their semigroup algebras. The term semigroup will mean a non-empty set S endowed with an associative binary operation. If S is also a Hausdorff topological space and the binary operation is jointly continuous, then S is called a topological semigroup.

The Semigroup S_T
Let s, t ∈ S. Then we define a new binary operation "∘" on S in terms of the left multiplier T (see the sketch at the end of this article). The set S equipped with the new operation "∘" is denoted by S_T and is sometimes called the "induced semigroup of S". Now we have the following results. Many properties are carried over from S to the semigroup S_T, but sometimes the two differ.

Theorem 2.2. Let S be a Hausdorff topological semigroup and T a continuous left multiplier on S. If S is commutative then so is S_T. The converse is true if T has dense range. Proof. Suppose S is commutative and take s, t ∈ S; a direct computation with the defining formula gives s ∘ t = t ∘ s, so S_T is commutative. Conversely, let S_T be commutative and take s, t ∈ S. Then there exist nets (x_α) and (y_β) in S such that T(x_α) → s and T(y_β) → t, and passing to the limit in the commutativity relation for "∘" gives st = ts. Thus S is commutative.

In the sequel, we investigate some relations between the two semigroups S and S_T according to the role of the left multiplier T. We show that T is a homomorphism from S_T to S: take s, t ∈ S; then T(s ∘ t) = T(s)T(t), so T is a homomorphism. Then, by Proposition 2.1, the corresponding semigroup algebra is weakly amenable. In the case that S is a group, it is easy to see that the amenability of the semigroup algebra of S implies that of S_T. Indeed, when S is a group, by the theorem above: (ii) S_T is regular and its semigroup algebra is unital; (iii) S_T is regular and its semigroup algebra is semisimple. Proof. Refer to the works cited above.

4.4. There are semigroups S and left multipliers T such that S and its semigroup algebra are amenable, but S_T is not regular and its semigroup algebra is not amenable; an inequality argument also shows that it is not weakly amenable. In the next example we show that, in Theorem 3.2 (iii), the condition of injectivity of T cannot be omitted. 4.5. There are a semigroup S and a left multiplier T such that T is not injective and the corresponding map is not an isometry. Suppose that S is the semigroup of Example 4.4 and T is given by a fixed element of S; a short computation then shows that the induced map fails to be an isometry.
Hence the corresponding map is not an isometry. 4.6. There are a semigroup S and a left multiplier T such that the semigroup algebra of S is semisimple but that of S_T is not semisimple. This example reminds us that, in Theorem 3.1, the multiplier T must be injective.
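The defining formula for the induced product "∘" is lost to extraction damage in Section 2 above. For orientation, here is the standard Larsen-type construction that such papers typically follow; the symbols T, ∘ and the left-multiplier identity are assumptions drawn from that standard treatment, not recovered from this source:

```latex
% Standard induced product from a left multiplier (assumed construction):
% T is a left multiplier on S, i.e. T(st) = T(s)\,t for all s, t in S.
\[
  s \circ t \;=\; s\,T(t), \qquad s, t \in S .
\]
% Associativity of \circ follows from the multiplier identity:
\[
  (s \circ t) \circ u = \bigl(s\,T(t)\bigr)\,T(u)
                      = s\,T\bigl(t\,T(u)\bigr)
                      = s \circ (t \circ u),
\]
% using T(t\,T(u)) = T(t)\,T(u). The same identity shows that
% T is a homomorphism from S_T to S:
\[
  T(s \circ t) = T\bigl(s\,T(t)\bigr) = T(s)\,T(t).
\]
```

Under this assumed definition, the homomorphism claim and the commutativity argument in Theorem 2.2 go through exactly as sketched in the text.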
Analysis of earnings management practices using the modified Jones model on the industry company index

* Corresponding author, email address: pujiono@unesa.ac.id.

ABSTRACT
This study aimed to examine differences in earnings management patterns among companies listed on the Indonesia Stock Exchange (IDX). Earnings management can occur because company management wants to take advantage of accounting discretion/policies suited to the character of the assets existing in each industry. This study used a Modified Jones Model approach to determine earnings management proxies. In addition, it used analysis of variance (ANOVA) to test whether there were differences in earnings management patterns. The data consisted of 450 observations of companies from 8 industrial sectors in the Kompas 100 Stock Index during 2015-2019: various industries; basic and chemical industry; consumer goods; services; mining, oil and natural gas; plantation; property and real estate; and banking. The result shows that there are differences in earnings management patterns between industrial sectors. Therefore, company management practices earnings management following the characteristics of each industry. The research also suggests that future studies compare earnings management with other models to determine the consistency of the results.

INTRODUCTION
Earnings management is one of the factors that can reduce the credibility of financial reports; it is carried out through management intervention in external financial reporting to benefit management itself. Company managers use various means to determine the size of reported profits, namely recognizing and recording income too quickly, recognizing and recording false income, or recognizing and recording expenses sooner or later than they should. In doing so, they may fail to disclose their obligations, simply substituting one set of accounting methods and procedures for another in the components of the financial statements, which are then arranged according to the company manager's wishes. APB Statement No. 4 explains the principles of the nature and basic elements of accounting, under which accruals determine income and expenses from the position of assets and liabilities: recognition is based on when transactions occur, regardless of whether the corresponding cash payment or receipt has been made. Discretionary accruals are the accrual component resulting from managerial engineering, which exploits freedom and flexibility in making estimates and applying accounting standards (Sulistiyanto, 2008). Several studies also support the observation that earnings manipulation is often carried out by management. Earnings determined on an accrual basis give management opportunities to maximize its utility through accrual policy, because managers are free to choose the accounting methods used to treat the company's business transactions; this freedom may be exercised for reasons considered opportunistic (Dechow, Sloan, and Sweeney, 1995). This is supported by the results of studies by Healy (1985) and Watts & Zimmerman (1986), who found that managers manipulate profits using discretionary accrual strategies. The greater the discretionary component of a company's accruals, the greater the earnings management practice, which can be identified by large reported profits of low quality.
There are various models or detection methods, proposed by accounting researchers, that can be used to identify and measure earnings management, such as the Healy Model (1985), the DeAngelo Model (1986), the Jones Model (1991), the Industry Model (1991), and the Modified Jones Model (1995). Of these five models, this study uses the Jones model as modified by Dechow et al. (1995). The modified Jones model is a refinement of the Jones Model (1991), designed by Dechow et al. (1995) to reduce bias in testing, specifically in calculating total accruals of earnings, separating nondiscretionary accruals from discretionary accruals. It is a time-series model and is statistically the best compared to other models; it also has a fairly good level of accuracy in detecting earnings management (Abdurrahim, 2015). Thus, the Modified Jones Model is the measurement tool used in this research. Based on the background above, the problem formulated in this study is whether each industry in the Kompas 100 Index has a pattern of earnings management different from other industries.

THEORETICAL FRAMEWORK AND HYPOTHESES
Positive Accounting Theory
Positive accounting theory provides understanding and predictions of the accounting policy choices to be made by companies (Watts and Zimmerman, 1990). In positive accounting theory, three hypotheses are put forward by Watts and Zimmerman as the basis for selecting accounting procedures and for testing a person's ethical behavior in recording and compiling financial reports. These hypotheses are: (1) The bonus plan hypothesis: company managers tend to use accounting procedures that raise reported profits in the company's financial statements in order to obtain bonuses or rewards. (2) The debt covenant hypothesis: managers of companies with a large leverage (debt/equity) ratio will prefer accounting procedures that shift the recognition of earnings from future periods to the present period. (3) The political cost hypothesis: the greater the political costs of the company, the more likely company managers are to choose accounting procedures that defer reported earnings from current to future periods, on the assumption that large companies will tend to choose accounting procedures that reduce reported earnings compared to smaller companies.

Agency Theory
Agency theory states that there is a working relationship, in the form of a contract for running the company's business, between the party granting authority (the principal), namely the investor, and the party receiving authority (the agent), namely the manager (Jensen & Meckling, 1976). According to Scott (2015: 137), there are two types of information asymmetry, arising when the parties directly related to company business transactions have more information than other parties.

Signaling Theory
Signaling theory suggests how a company should provide signals to the users of its financial statements, in the form of information intended to show greater value or competitive advantage over other companies. Companies or managers have more knowledge about the company's condition than external parties do.

Earnings Management
Earnings management is an action of reporting the income level by increasing or decreasing earnings at a certain time for the benefit of management and stakeholders (Belkaoui, 2006).
a. Earnings Management Patterns
Scott (2015: 447) stated that earnings management patterns can take four forms: 1) Taking a bath, a pattern that occurs during reorganization, including the appointment of a new CEO, by reporting large losses; this action is expected to increase future profits. 2) Income minimization, an action carried out by quickly recognizing costs such as marketing, research, and advertising expenses when the company earns a large enough profit. 3) Income maximization, an action that aims to report high net income for a larger bonus (bonus plan hypothesis) through the selection of accounting methods and the timing of transaction recognition, such as accelerating the recording of revenue and delaying expenses; likewise, in companies approaching a violation of long-term debt contracts (debt covenant hypothesis), managers will tend to maximize profits. 4) Income smoothing, the most frequently used and most popular pattern, carried out by leveling reported profit to reduce overly large fluctuations in earnings, because investors generally prefer relatively stable profits and because the more volatile the reported net income is, the more likely it is that debt covenant violations will occur; this pattern is not done to generate bonuses.

b. Earnings Management Motivation
According to Scott (2015: 454), there are several motivations that drive managers to manage earnings.

c. The Jones Model (1991)
The Jones (1991) model tries to control the impact of a company's economic changes on nondiscretionary accruals, which are assumed to be constant from one period to another. Thus, changes that occur in discretionary accruals will also result in changes in total accruals. Changes in accruals can be caused by the considerations of company management and by changes in economic conditions. Therefore, this model adds revenue changes and PPE to the earnings management estimation model.

d. Industry Model
The industry model is a model for measuring earnings management created by Dechow and Sloan (1991). This model assumes that the variations in the determinants of nondiscretionary accruals are common to firms in the same industry.

e. Modified Jones Model
The difference between the Modified Jones Model and the Jones Model lies in the determination of nondiscretionary accruals: to measure nondiscretionary accruals, the Modified Jones Model includes the change in accounts receivable in the estimate.
This model is widely used in accounting research because it is considered the best model for detecting earnings management, gives the strongest results, and yields the smallest standard error of ε_it (the error term) in the regression estimating total accruals compared to other models (Sulistiyanto, 2008: 225).

Discretionary Accruals
Accruals are all operational events in a year that affect earnings without a corresponding cash flow, such as changes in accounts receivable and payable and changes in inventories; depreciation expense, meanwhile, is a negative accrual. Accountants use accruals to match costs to income through the treatment of transactions related to net income as expected. According to Jones (1991), total accruals are separated into two components, discretionary accruals and nondiscretionary accruals, as a tool to determine whether earnings management practices occur. Total accruals are used to measure earnings management at an early stage, and discretionary accruals are then isolated as the measure of earnings management.

The Relationship between Earnings Management and Discretionary Accruals
Companies with high discretionary accrual values show low-quality profits; likewise, companies with low discretionary accrual values show high-quality profits. According to Chan in Siallagan (2009: 63), there are three possible explanations for why accruals can be used to predict indications of earnings management: (a) the conventional interpretation is that high accruals indicate earnings manipulation by managers; (b) accruals can be a leading indicator of changes in the company's prospects without manipulation by managers; (c) accruals can predict returns if the market views accruals as a reflection of past growth.

Types of Industry and Earnings Management
A type of industry is a characteristic of a company related to its business, business risks, employees, and environment. Sari (2012) stated that there are two types of industries: (1) high-profile industries, namely companies with a high level of consumer (environmental) visibility, tight competition, or high political risk, for example companies engaged in chemicals, plastics, paper, automotive, food and beverage, cigarettes, pharmaceuticals, cosmetics, and utensils/furniture; and (2) low-profile industries, namely companies with low levels of consumer visibility, political risk, and competition.

Research Framework
Each type of industry produces a different level of earnings management action. The focus of this research is to test the extent of the earnings management pattern through the calculation of the Modified Jones Model in the Kompas100 Index company industries listed on the Indonesia Stock Exchange. The hypothesis in this study is therefore that there are differences in earnings management patterns between the industrial sectors in the Kompas100 Index.

RESEARCH METHOD
This study used a quantitative approach to examine certain populations or samples. This approach focuses on numerical data processed by statistical methods to test predetermined hypotheses. The secondary data were collected in the form of audited annual financial reports of companies in the Kompas100 Index for the period 2015-2019 listed on the Indonesia Stock Exchange, covering eight traded industrial sectors: various industries; basic and chemical industry; consumer goods; services; mining, oil and natural gas; plantation; property and real estate; and banking. This study used a purposive sampling approach, yielding a sample of 450 observations. The criteria for selecting the sample are described in Table 1.
The data were analyzed using the modified Jones model developed by Dechow et al. (1995). This model is used to determine the value of discretionary accruals through the following stages (the standard forms of these formulas are sketched after the descriptive results below): a. Calculate total accruals as the difference between net income and operating cash flows. b. Determine the values of parameters α1, α2, and α3 by regression, scaling the data by the previous year's total assets. c. Using the parameter values α1, α2, and α3, calculate the nondiscretionary accrual value. d. Since total accruals are the sum of discretionary and nondiscretionary accruals, calculate the discretionary accrual value, which is the indicator of earnings management, by subtracting nondiscretionary accruals from total accruals. To test the hypothesis developed in the study, dummy variable regression analysis was used, grouping the data into several parts based on category so that the data can be ordered from smallest to largest. Where the measure of earnings management is negative, the majority of companies are indicated to have practiced earnings management through income minimization; where the measure is positive, the majority of companies are indicated to have practiced income maximization. 0 = discretionary accruals are negative; 1 = discretionary accruals are positive.

Table 1 Sampling Criteria
- Companies listed in the Kompas100 Index on the Indonesia Stock Exchange (BEI), August 2015 - January 2019: 100
- Companies without complete financial reports based on consecutive discretionary accrual data for 2015-2019: (10)
- Number of companies qualifying as samples: 90
- Total samples (90 × 5-year observation period): 450
Source: Kompas100 Index

DATA ANALYSIS AND DISCUSSION
This study uses the formula from the modified Jones model to find the value of discretionary accruals, which is used to measure whether there are differences in pattern across industrial sectors related to earnings management actions, whether income smoothing, income minimization, or income maximization.

Descriptive Statistics
Based on Table 2, the number of samples used in this study was (N) 450, drawn from 90 companies across 8 sectors of the Kompas100 Index listed on the Indonesia Stock Exchange during the 2015-2019 period. The selection of the 8 sector samples is based on differences in the performance indicators of each entity and in industrial profit levels, which aim to report the level of profitability. The explanation of the results in Table 2 is as follows: a. The AI (various industries) sector shows a minimum value of -5 and a maximum of 8, with an average of 0.11 and a standard deviation of 2.985. This shows that, on average, the 25 sample observations used have discretionary accruals that are both positive and negative, which means that the companies, over five consecutive years, carried out earnings management in two ways, namely income minimization and income maximization. b. The IDK (basic and chemical industry) sector shows a minimum value of -2 and a maximum of 4, with an average of 0.70 and a standard deviation of 1.150.
This shows that, on average, the 65 sample observations used have discretionary accruals that are both negative and positive, which means the companies, over five consecutive years, practiced earnings management using both income minimization and income maximization. c. The consumption industry sector shows a minimum value of -7 and a maximum of 4, with an average of -0.31 and a standard deviation of 1.391. This shows that, on average, the 85 sample observations have discretionary accruals with high negative values compared to positive discretionary accruals, which means that the companies, over five consecutive years, were more likely to practice earnings management using income minimization. d. The service industry sector shows a minimum value of -7 and a maximum of 6, with an average of -0.44 and a standard deviation of 1.683. This shows that, on average, the 80 sample observations have discretionary accruals that tend to be both negative and positive, which means the companies, over five consecutive years, practiced earnings management using income minimization and income maximization. e. The mining industry sector shows a minimum value of -3 and a maximum of 4, with an average of -0.016 and a standard deviation of 1.311. This shows that, on average, the 35 sample observations have discretionary accruals that tend to have high positive values compared to negative discretionary accruals, indicating that the companies, over five consecutive years, practiced earnings management using income maximization. f. The plantation industry sector shows a minimum value of -11 and a maximum of 10, with an average of -0.57 and a standard deviation of 4.397. This shows that, on average, the 25 sample observations have discretionary accruals with the highest negative and positive values, which means that the companies, over five consecutive years, carried out earnings management in two ways, both income minimization and income maximization. g. The property and real estate industry sector shows a minimum value of -9 and a maximum of 7, with an average of -0.85 and a standard deviation of 2.592. This shows that, on average, the 85 sample observations have negative and positive discretionary accruals and that the companies, over five consecutive years, were more likely to practice income minimization than income maximization. h. The banking industry sector shows a minimum value of -5 and a maximum of 6, with an average of -0.46 and a standard deviation of 2.096. This shows that, on average, the 50 sample observations have negative and positive discretionary accruals, which means the companies, over five consecutive years, practiced earnings management using income minimization and income maximization. Based on the output in Table 3, the Levene statistic is 11.878 with a significance value of 0.000. Because the significance value of 0.000 < 0.05, it can be concluded that the averages across the Kompas100 industrial sectors differ significantly.
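The stage descriptions in the Research Method omit the formulas themselves. For reference, the standard Modified Jones Model equations from Dechow et al. (1995), which the elided formulas presumably correspond to, are as follows (TA = total accruals, NI = net income, CFO = operating cash flow, A = total assets, ΔREV = change in revenue, ΔREC = change in receivables, PPE = gross property, plant and equipment):

```latex
\begin{align*}
TA_{it} &= NI_{it} - CFO_{it} \\
\frac{TA_{it}}{A_{it-1}} &= \alpha_1\frac{1}{A_{it-1}}
  + \alpha_2\frac{\Delta REV_{it}}{A_{it-1}}
  + \alpha_3\frac{PPE_{it}}{A_{it-1}} + \varepsilon_{it} \\
NDA_{it} &= \hat\alpha_1\frac{1}{A_{it-1}}
  + \hat\alpha_2\frac{\Delta REV_{it} - \Delta REC_{it}}{A_{it-1}}
  + \hat\alpha_3\frac{PPE_{it}}{A_{it-1}} \\
DA_{it} &= \frac{TA_{it}}{A_{it-1}} - NDA_{it}
\end{align*}
```

A minimal numpy sketch of the same estimation, with hypothetical variable names and illustrative only (not the authors' code), might look like this:

```python
import numpy as np

def discretionary_accruals(ni, cfo, assets_lag, d_rev, d_rec, ppe):
    """Estimate Modified Jones discretionary accruals (Dechow et al., 1995).

    All inputs are 1-D arrays over firm-year observations; assets_lag is
    total assets of the previous year, used as the scaling deflator.
    """
    ta = (ni - cfo) / assets_lag                    # scaled total accruals
    X = np.column_stack([1 / assets_lag,            # alpha_1 regressor
                         d_rev / assets_lag,        # alpha_2 regressor
                         ppe / assets_lag])         # alpha_3 regressor
    alpha, *_ = np.linalg.lstsq(X, ta, rcond=None)  # OLS parameter estimates
    # Nondiscretionary accruals: revenue change adjusted for receivables
    nda = (alpha[0] / assets_lag
           + alpha[1] * (d_rev - d_rec) / assets_lag
           + alpha[2] * ppe / assets_lag)
    return ta - nda                                 # discretionary accruals
```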
Based on the results of the test analysis and the calculation of discretionary accruals using the modified Jones model, the Kompas100 Index industries as a whole show differences in earnings management patterns, so the research hypothesis is accepted. This is indicated by discretionary accruals that are positive and negative, with different motives for management action in each industry; overall, the industrial companies are more likely to adopt earnings management patterns using income maximization. The highest average discretionary accrual value (0.70) belongs to the IDK industry, while the lowest highlighted value (-0.57) belongs to the plantation industry. This indicates that companies have taken many earnings management actions in the form of increasing accrual earnings by reporting the maximum possible profit to obtain private gains: the higher the reported profit, the greater its power to attract investors to invest. The results of this study are consistent with the positive accounting theory developed by Watts & Zimmerman (1990): in pursuing gains, managers will try their best to choose accounting policies, manipulating the reported earnings to cover unfavorable conditions of the company in accordance with their personal interests.

CONCLUSION, IMPLICATION, SUGGESTION AND LIMITATION
Based on the results of this analysis, it can be concluded that there are differences in earnings management patterns across the Kompas100 Index companies listed on the Indonesia Stock Exchange, as measured using the Modified Jones Model. Company management practices earnings management with different patterns, both income minimization and income maximization, according to the character of the industry. Thus, it is possible that earnings in published financial statements are manipulated in the interests of management. The implication is that this research can serve as input to users of financial reports, whether investors or accounting standard setters. Investors should be careful in making investment decisions, because published profit figures are the result of management's processing, adjusted to its interests; with this in mind, the decisions taken later will not be misguided. Meanwhile, standard setters should continually revise accounting standards to minimize earnings management that can harm investors and other stakeholders. Future researchers can make observations on the LQ45 Index or use other sampling techniques so that the results will be better. One limitation of this study is that it uses data only from the Kompas 100 Index.
THE CONCISE GUIDE TO PHARMACOLOGY 2017/18: G protein-coupled receptors

The Concise Guide to PHARMACOLOGY 2017/18 provides concise overviews of the key properties of nearly 1800 human drug targets with an emphasis on selective pharmacology (where available), plus links to an open access knowledgebase of drug targets and their ligands (www.guidetopharmacology.org), which provides more detailed views of target and ligand properties. Although the Concise Guide represents approximately 400 pages, the material presented is substantially reduced compared to the information and links presented on the website. It provides a permanent, citable, point-in-time record that will survive database updates. The full contents of this section can be found at http://onlinelibrary.wiley.com/doi/10.1111/bph.13878/full. G protein-coupled receptors are one of the eight major pharmacological targets into which the Guide is divided, the others being: ligand-gated ion channels, voltage-gated ion channels, other ion channels, nuclear hormone receptors, catalytic receptors, enzymes and transporters. These are presented with nomenclature guidance and summary information on the best available pharmacological tools, alongside key references and suggestions for further reading. The landscape format of the Concise Guide is designed to facilitate comparison of related targets from material contemporary to mid-2017, and supersedes data presented in the 2015/16 and 2013/14 Concise Guides and previous Guides to Receptors and Channels. It is produced in close conjunction with the Nomenclature Committee of the Union of Basic and Clinical Pharmacology (NC-IUPHAR), therefore providing official IUPHAR classification and nomenclature for human drug targets, where appropriate.

[Table fragment: Orphans 87 (54), 33, 11. a Numbers in brackets refer to orphan receptors for which an endogenous ligand has been proposed in at least one publication, see [414]; b [1511]; c [1362]; d [1941].]

Much of our current understanding of the structure and function of GPCRs is the result of pioneering work on the visual pigment rhodopsin and on the β2-adrenoceptor, the latter culminating in the award of the 2012 Nobel Prize in Chemistry to Robert Lefkowitz and Brian Kobilka [1021, 1137].

G protein-coupled receptors → Orphan and other 7TM receptors → Class A Orphans
Overview: Table 1 lists a number of putative GPCRs identified by NC-IUPHAR [557], for which preliminary evidence for an endogenous ligand has been published, or for which there exists a potential link to a disease or disorder. These GPCRs have recently been reviewed in detail [414]. The GPCRs in Table 1 are all Class A, rhodopsin-like GPCRs. Class A orphan GPCRs not listed in Table 1 are putative GPCRs with as-yet unidentified endogenous ligands.

[Table 1 receptor list: GPR1, GPR3, GPR4, GPR6, GPR12, GPR15, GPR17, GPR20, GPR22, GPR26, GPR31, GPR34, GPR35, GPR37, GPR39, GPR50, GPR63, GPR65, GPR68, GPR75, GPR84, GPR87, GPR88, GPR132, GPR149, GPR161, GPR183, LGR4, LGR5, LGR6, MAS1, MRGPRD, MRGPRX1, MRGPRX2, P2RY10, TAAR2.]

In addition, the orphan receptors GPR18, GPR55 and GPR119, which are reported to respond to endogenous agents analogous to the endogenous cannabinoid ligands, have been grouped together (GPR18, GPR55 and GPR119). Gpr21 knockout mice were resistant to diet-induced obesity, exhibiting an increase in glucose tolerance and insulin sensitivity, as well as a modest lean phenotype [1516]. Gene disruption results in increased severity of functional decompensation following aortic banding [10]. Identified as a susceptibility locus for osteoarthritis [520,975,2011].
Has been reported to activate adenylyl cyclase constitutively through Gs [923]. Gpr26 knockout mice show increased levels of anxiety- and depression-like behaviours [2209]. Knockdown of Gpr27 reduces endogenous mouse insulin promoter activity and glucose-stimulated insulin secretion [1059]. Resolvin D1 has been demonstrated to activate GPR32 in two publications [331,1052]; the pairing was not replicated in a recent study based on arrestin recruitment [1854]. GPR32 is a pseudogene in mice and rats. See reviews [258] and [414]. Reported to associate with and regulate the dopamine transporter [1269] and to be a substrate for parkin [1267]; gene disruption results in altered striatal signalling [1268]. The peptides prosaptide and prosaposin are proposed as endogenous ligands for GPR37 and GPR37L1 [1324]. Zn2+ has been reported to be a potent and efficacious agonist of human, mouse and rat GPR39 [2176]. Obestatin (GHRL, Q9UBU3), a fragment from the ghrelin precursor, was reported initially as an endogenous ligand, but subsequent studies failed to reproduce these findings. GPR39 has been reported to be down-regulated in adipose tissue in obesity-related diabetes [285]; gene disruption results in obesity and altered adipocyte metabolism [1567]. Reviewed in [414]. GPR50 is structurally related to the MT1 and MT2 melatonin receptors, with which it heterodimerises constitutively and specifically [1155]. Gpr50 knockout mice display abnormal thermoregulation and are much more likely than wild-type mice to enter fasting-induced torpor [117].

Comments: First small-molecule agonist reported [1774]. Sphingosine 1-phosphate and dioleoylphosphatidic acid have been reported to be low-affinity agonists for GPR63 [1459], but this finding was not replicated in an arrestin-based assay [2182]. GPR4, GPR65, GPR68 and GPR132 are now thought to function as proton-sensing receptors detecting acidic pH [414,1775]. Reported to activate adenylyl cyclase; gene disruption leads to reduced eosinophilia in models of allergic airway disease [1044]. CCL5 (CCL5, P13501) was reported to be an agonist of GPR75 [856], but the pairing could not be repeated in an arrestin assay [1854]. GPR78 has been reported to be constitutively active, coupled to elevated cAMP production [923]. Mice with Gpr82 knockout have a lower body weight and body fat content, associated with reduced food intake, decreased serum triglyceride levels, as well as higher insulin sensitivity and glucose tolerance [507]. Proposed to regulate hippocampal neurogenesis in the adult, as well as neurogenesis-dependent learning and memory [319]. Mutations in GPR101 have been linked to gigantism and acromegaly [1982]. Yosten et al. demonstrated inhibition of proinsulin C-peptide (INS, P01308)-induced stimulation of cFos expression following knockdown of GPR146 in KATO III cells, suggesting proinsulin C-peptide as an endogenous ligand of the receptor [2193]. GPR160, Q9UJ42.

Comments: Gpr149 knockout mice displayed increased fertility and enhanced ovulation, with increased FSH receptor and cyclin D2 mRNA levels [491]. A C-terminal truncation (deletion) mutation in Gpr161 causes congenital cataracts and neural tube defects in the vacuolated lens (vl) mouse mutant [1289]. The mutated receptor is associated with cataract, spina bifida and white belly spot phenotypes in mice [1039].
Gene disruption is associated with a failure of asymmetric embryonic development in zebrafish [1151]. GPR171 has been shown to be activated by the endogenous peptide BigLEN {Mouse}; this receptor-peptide interaction is believed to be involved in regulating feeding and metabolism responses [654]. See [859], which discusses characterization of agonists at this receptor. Rat GPR182 was first proposed as the adrenomedullin receptor [947]. However, it was later reported that rat and human GPR182 did not respond to adrenomedullin [973], and GPR182 is not currently considered to be a genuine adrenomedullin receptor [756]. LGR4 does not couple to heterotrimeric G proteins or recruit arrestins when stimulated by the R-spondins, indicating a unique mechanism of action. R-spondins bind to LGR4, which specifically associates with Frizzled and LDL receptor-related proteins (LRPs) that are activated by the extracellular Wnt molecules and then trigger canonical Wnt signalling to increase gene expression [277,426,1686]. Gene disruption leads to multiple developmental disorders [911,1219,1849,2092].

Comments: An endogenous peptide with a high degree of sequence similarity to angiotensin-(1-7) (AGT, P01019), alamandine (AGT), was shown to promote NO release in MRGPRD-transfected cells. The binding of alamandine to MRGPRD was shown to be blocked by D-Pro7-angiotensin-(1-7), β-alanine and PD123319 [1102]. Genetic ablation of MRGPRD+ neurons of adult mice decreased behavioural sensitivity to mechanical stimuli but not to thermal stimuli [292]. See reviews [414] and [1847]. A diverse range of substances has been reported to be agonists of MRGPRX2, with cortistatin 14 the highest-potency agonist in assays of calcium mobilisation [1667], also confirmed in an independent study using an arrestin recruitment assay [1854]. See reviews [414] and [1847]. TAAR3 is thought to be a pseudogene in man though functional in rodents [414]. TAAR9 appears to be functional in most individuals but has a polymorphic premature stop codon at amino acid 61 (rs2842899) with an allele frequency of 10-30% in different populations [2023].

Class C Orphans

Comments: GPRC6 is a related Gq-coupled receptor which responds to basic amino acids [2090].

Taste 1 receptors

Overview: Whilst the taste of acid and salty foods appears to be sensed by regulation of ion channel activity, bitter, sweet and umami tastes are sensed by specialised GPCRs. Two classes of taste GPCR have been identified, T1R and T2R, which are similar in sequence and structure to Class C and Class A GPCRs, respectively. Activation of taste receptors appears to involve gustducin- (Gαt3) and Gα14-mediated signalling, although the precise mechanisms remain obscure. Gene disruption studies suggest the involvement of PLCβ2 [2215], TRPM5 [2215] and IP3 receptors [802] in post-receptor signalling of taste receptors. Although predominantly associated with the oral cavity, taste receptors are also located elsewhere, including further down the gastrointestinal system, in the lungs and in the brain.

Sweet/Umami: T1R3 acts as an obligate partner in T1R1/T1R3 and T1R2/T1R3 heterodimers, which sense umami or sweet, respectively. T1R1/T1R3 heterodimers respond to L-glutamic acid and may be positively allosterically modulated by 5'-nucleoside monophosphates, such as 5'-GMP [1162].
T1R2/T1R3 heterodimers respond to sugars, such as sucrose, and artificial sweeteners, such as saccharin [1440].

Taste 2 receptors

Overview: The composition and stoichiometry of bitter taste receptors is not yet established. Bitter receptors appear to separate into two groups, with either very restricted ligand specificity or much broader responsiveness. For example, T2R5 responded to cycloheximide but not 10 other bitter compounds [302], while T2R14 responded to at least eight different bitter tastants, including (-)-α-thujone and picrotoxinin [124]. The specialist database BitterDB contains additional information on bitter compounds and receptors [2113].

Further reading on Adenosine receptors: Fredholm BB et al. (2011).

Adhesion Class GPCRs

Overview: Adhesion GPCRs are structurally identified on the basis of a large extracellular region, similar to the Class B GPCRs, but which is linked to the 7TM region by a GPCR autoproteolysis-inducing (GAIN) domain [56] containing a GPCR proteolytic site. The N-terminus often shares structural homology with proteins such as lectins and immunoglobulins, leading to the term adhesion GPCR [571,2187]. The nomenclature of these receptors was revised in 2015 as recommended by NC-IUPHAR and the Adhesion GPCR Consortium [718].

Comments: A mutation destabilizing the GAIN domain sensitizes mast cells to IgE-independent vibration-induced degranulation [202].

Nomenclature: ADGRG6, ADGRG7, ADGRL1, ADGRL2, ADGRL3, ADGRL4, ADGRV1 (HGNC, UniProt). Loss-of-function mutations (in ADGRV1) are associated with Usher syndrome, a sensory deficit disorder [885].

Adrenoceptors, α1

Comments: The agonists indicated have less than two orders of magnitude selectivity [85]. Recombinant α1D-adrenoceptors have been shown in some heterologous systems to be mainly located intracellularly, but cell-surface localization is encouraged by truncation of the N-terminus, or by co-expression of α1B- or β2-adrenoceptors [706,1993]. In blood vessels, all three α1-adrenoceptor subtypes are located on the surface and intracellularly [1320,1321]. Signalling is predominantly via Gq/11, but α1-adrenoceptors also couple to Gi/o, Gs and G12/13. Several α1A-adrenoceptor agonists display ligand-directed signalling bias relative to noradrenaline [521]. There are also differences between subtypes in coupling efficiency to different pathways. In vascular smooth muscle, the potency of agonists is related to the predominant subtype, α1D-adrenoceptors conveying greater agonist sensitivity than α1A-adrenoceptors [553].

Adrenoceptors, α2

ARC-239 and prazosin show selectivity for α2B- and α2C-adrenoceptors over α2A-adrenoceptors. Oxymetazoline is a reduced-efficacy imidazoline agonist but also binds to non-GPCR binding sites for imidazolines, classified as I1, I2 and I3 sites [406]; catecholamines have a low affinity, while rilmenidine and moxonidine are selective ligands evoking hypotensive effects in vivo. I1-imidazoline receptors cause central inhibition of sympathetic tone, I2-imidazoline receptors are an allosteric binding site on monoamine oxidase B, and I3-imidazoline receptors regulate insulin secretion from pancreatic β-cells. α2A-adrenoceptor stimulation reduces insulin secretion from β-islets [2171], with a polymorphism in the 5'-UTR of the ADRA2A gene being associated with increased receptor expression in β-islets and heightened susceptibility to diabetes [1673]. α2A- and α2C-adrenoceptors form homodimers [1829].
Heterodimers between the α2A- and either the α2C-adrenoceptor or the μ opioid peptide receptor exhibit altered signalling and trafficking properties compared to the individual receptors [1829,1931,2036]. Signalling by α2-adrenoceptors is primarily via Gi/o, although the α2A-adrenoceptor also couples to Gs [487]. Imidazoline compounds display bias relative to each other at the α2A-adrenoceptor [1544]. The noradrenaline reuptake inhibitor desipramine acts directly on the α2A-adrenoceptor to promote internalisation via recruitment of arrestin [385].

Adrenoceptors, β

[125I]ICYP can be used to define β1- or β2-adrenoceptors when binding is conducted in the presence of a β1- or β2-adrenoceptor-selective antagonist. A fluorescent analogue of CGP 12177 can be used to study β2-adrenoceptors in living cells [88]. [125I]ICYP at higher (nM) concentrations can be used to label β3-adrenoceptors in systems with few if any other β-adrenoceptor subtypes. The β3-adrenoceptor has an intron in the coding region, but splice variants have only been described for the mouse [522], where the isoforms display different signalling characteristics [850]. There are three β-adrenoceptors in turkey (termed the tβ, tβ3c and tβ4c) that have a pharmacology that differs from the human β-adrenoceptors [86]. Numerous polymorphisms have been described for the β-adrenoceptors; some are associated with altered signalling and trafficking, altered susceptibility to disease and/or altered responses to pharmacotherapy [1169]. All β-adrenoceptors couple to Gs (activating adenylyl cyclase and elevating cAMP levels), but also activate Gi and β-arrestin-mediated signalling. Many β1- and β2-adrenoceptor antagonists are agonists at β3-adrenoceptors (CL316243, CGP 12177 and carazolol). Many 'antagonists' of cAMP accumulation, for example carvedilol and bucindolol, weakly activate MAP kinase pathways [89,523,589,590,1721,1722] and thus display 'protean agonism'. Bupranolol acts as a neutral antagonist in most systems so far examined. Agonists also display biased signalling at the β2-adrenoceptor via Gs or arrestins [470]. X-ray crystal structures have been described of the agonist-bound [2075] and antagonist-bound [2076] forms of the β1-adrenoceptor, the agonist-bound [328] and antagonist-bound [1632,1672] forms of the β2-adrenoceptor, as well as a fully active, agonist-bound, Gs protein-coupled β2-adrenoceptor [1633]. Carvedilol and bucindolol bind to a site on the β1-adrenoceptor involving contacts in TM2, 3 and 7 and extracellular loop 2 that may facilitate coupling to arrestins [2076]. Compounds displaying arrestin-biased signalling at the β2-adrenoceptor have a greater effect on the conformation of TM7, whereas full agonists for Gs coupling promote movement of TM5 and TM6 [1192]. Recent studies using NMR spectroscopy demonstrate significant conformational flexibility in the β2-adrenoceptor that is stabilized by both agonists and G proteins, highlighting the dynamic nature of interactions with both ligand and downstream signalling partners [992,1260,1479]. Such flexibility likely has consequences for our understanding of biased agonism, and for the future therapeutic exploitation of this phenomenon.

Further reading on Adrenoceptors: Baker JG et al. (2011).
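Affinities and potencies throughout the Guide are quoted on negative logarithmic (p) scales, as in the pKi values cited below for angiotensin, dopamine and calcitonin receptor ligands. As a worked conversion (standard definitions; the numerical example is illustrative, not taken from any one table):

\[ \mathrm{p}K_i = -\log_{10} K_i \quad\Longleftrightarrow\quad K_i = 10^{-\mathrm{p}K_i}\ \mathrm{M} \]

\[ \mathrm{p}K_i = 10.5 \;\Rightarrow\; K_i = 10^{-10.5}\,\mathrm{M} \approx 3.2\times10^{-11}\,\mathrm{M} \approx 32\ \mathrm{pM} \]

so the pKi of ~10.5 quoted later for olcegepant corresponds to roughly 30 pM affinity, whereas a pKi of 7 corresponds to 100 nM. The same convention applies to pEC50 (pD2), pIC50, pA2 and pKB values.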
Angiotensin receptors

(Table fragment) Endogenous agonists (AT1): angiotensin II (AGT, P01019) [425,2021], angiotensin III (AGT, P01019) [425]; (AT2): angiotensin III (AGT, P01019) [394,425,2105], angiotensin II (AGT, P01019) [425,1838,2105], angiotensin-(1-7) (AGT, P01019) [194]. Selective agonists: L-162,313 (AT1) [1559]; CGP42112 (AT2) [194].

Comments: AT1 receptors are predominantly coupled to Gq/11; however, they are also linked to arrestin recruitment and stimulate G protein-independent arrestin signalling [1221]. Most species express a single AGTR1 gene, but two related agtr1a and agtr1b receptor genes are expressed in rodents. The AT2 receptor counteracts several of the growth responses initiated by the AT1 receptors. The AT2 receptor is much less abundant than the AT1 receptor in adult tissues and is upregulated in pathological conditions. AT1 receptor antagonists bearing substituted 4-phenylquinoline moieties have been synthesized, which bind to AT1 receptors with nanomolar affinity and are slightly more potent than losartan in functional studies [275]. Antagonist activity of CGP42112 at the AT2 receptor has also been reported [1469]. The AT1 and bradykinin B2 receptors have been proposed to form a heterodimeric complex [3]. There is also evidence for an AT4 receptor that specifically binds angiotensin IV (AGT, P01019) and is located in the brain and kidney. An additional putative endogenous ligand for the AT4 receptor has been described (LVV-hemorphin (HBB, P68871), a globin decapeptide) [1351].

Further reading on Angiotensin receptors: de Gasparo M et al. (2000) [1938].

Apelin receptor

A second family of peptides, discovered independently and named Elabela [338] or Toddler, which has little sequence similarity to apelin, has been proposed as a second endogenous apelin receptor ligand [1542].

Comments: Potency order determined for the heterologously expressed human apelin receptor (pD2 values range from 9.5 to 8.6). The apelin receptor may also act as a co-receptor with CD4 for isolates of human immunodeficiency virus, with apelin blocking this function [293]. A modified apelin-13 peptide, apelin-13(F13A), was reported to block the hypotensive response to apelin in rat in vivo [1132]; however, this peptide exhibits agonist activity in HEK293 cells stably expressing the recombinant apelin receptor [529].

Further reading on Apelin receptor: Cheng B et al. (2012) Neuroprotection of apelin and its signaling pathway.

Comments: The triterpenoid natural product betulinic acid has also been reported to inhibit inflammatory signalling through the NFκB pathway [1916]. Disruption of GPBA expression is reported to protect from cholesterol gallstone formation [2031]. A new series of 5-phenoxy-1,3-dimethyl-1H-pyrazole-4-carboxamides has been reported as highly potent agonists [1204]. A physiological role for the BB3 receptor has yet to be fully defined, although recent studies using receptor knockout mice and newly described agonists/antagonists suggest an important role in glucose and insulin regulation, metabolic homeostasis, feeding, regulation of body temperature and other CNS behaviours, obesity, diabetes mellitus and growth of normal/neoplastic tissues [659,1249,1496,2145].

Calcitonin receptors

Overview: This receptor family comprises a group of receptors for the calcitonin/CGRP family of peptides.
The calcitonin (CT), amylin (AMY), calcitonin gene-related peptide (CGRP) and adrenomedullin (AM) receptors (nomenclature as agreed by the NC-IUPHAR Subcommittee on CGRP, AM, AMY and CT receptors [755,1600]) are generated by the genes CALCR (which codes for the CT receptor) and CALCRL (which codes for the calcitonin receptor-like receptor, CLR, previously known as CRLR). Their function and pharmacology are altered in the presence of RAMPs (receptor activity-modifying proteins), which are single-TM domain proteins of ca. 130 amino acids, identified as a family of three members: RAMP1, RAMP2 and RAMP3. There are splice variants of the CT receptor; these in turn produce variants of the AMY receptor [1600], some of which can be potently activated by CGRP. The endogenous agonists are the peptides calcitonin (CALCA, P01258), α-CGRP (CALCA, P06881) (formerly known as CGRP-I), β-CGRP (CALCB, P10092) (formerly known as CGRP-II), amylin (IAPP, P10997) (occasionally called islet-amyloid polypeptide or diabetes-associated polypeptide), adrenomedullin (ADM, P35318) and adrenomedullin 2/intermedin (ADM2, Q7Z4H4). There are species differences in peptide sequences, particularly for the CTs. CTR-stimulating peptide {Pig} (CRSP) is another member of the family with selectivity for the CT receptor, but it is not expressed in humans [952]. Olcegepant (also known as BIBN4096BS, pKi ~10.5) and telcagepant (also known as MK0974, pKi ~9) are the most selective antagonists available, showing selectivity for CGRP receptors, with a particular preference for those of primate origin. CLR (calcitonin receptor-like receptor) by itself binds no known endogenous ligand, but in the presence of RAMPs it gives receptors for CGRP, adrenomedullin and adrenomedullin 2/intermedin. The ligands described have limited selectivity: adrenomedullin has appreciable affinity for CGRP receptors, CGRP can show significant cross-reactivity at AMY and AM2 receptors, and adrenomedullin 2/intermedin also has high affinity for the AM2 receptor [818]. CGRP-(8-37) acts as an antagonist of CGRP (pKi ~8) and inhibits some AM and AMY responses (pKi ~6-7); it is weak at CT receptors. Human AM-(22-52) has some selectivity towards AM receptors, but with modest potency (pKi ~7), limiting its use [754]. Olcegepant shows the greatest selectivity between receptors but still has significant affinity for AMY1 receptors [2057]. Gs is a prominent route of effector coupling for CLR and CTR, but other pathways (e.g. Ca2+, ERK, Akt) and G proteins can be activated [2056]. There is evidence that CGRP-RCP (a 148 amino-acid hydrophilic protein; ASL, P04424) is important for the coupling of CLR to adenylyl cyclase [524]. [125I]-salmon CT is the most common radioligand for CT receptors, but it has high affinity for AMY receptors and is also poorly reversible. [125I]-Tyr0-CGRP is widely used as a radioligand for CGRP receptors.

Calcium-sensing receptor

Overview: The calcium-sensing receptor (CaS, provisional nomenclature as recommended by NC-IUPHAR [557]) responds to multiple endogenous ligands, including extracellular calcium and other divalent/trivalent cations, polyamines and polycationic peptides, L-amino acids (particularly L-Trp and L-Phe), glutathione and various peptide analogues, ionic strength and extracellular pH (reviewed in [1122]).
While divalent/trivalent cations, polyamines and polycations are CaS receptor agonists [234,1618], L-amino acids, glutamyl peptides, ionic strength and pH are allosteric modulators of agonist function [375,557,803,1616,1617]. Indeed, L-amino acids have been identified as "co-agonists", with both concomitant calcium and L-amino acid binding required for full receptor activation [623,2205]. The sensitivity of the CaS receptor to primary agonists is increased by elevated extracellular pH [270] or decreased extracellular ionic strength [1617]. This receptor bears no sequence or structural relation to the plant calcium receptor, also called CaS.

(Table fragment) Nomenclature: CaS receptor. Amino-acid rank order of potency: L-phenylalanine, L-tryptophan, L-histidine > L-alanine > L-serine, L-proline, L-glutamic acid > L-aspartic acid (not L-lysine, L-arginine, L-leucine and L-isoleucine) [375]. Cation rank order of potency: [234]. Glutamyl peptide rank order of potency: (entry not recovered).

Comments: The CaS receptor has a number of physiological functions, but it is best known for its central role in parathyroid and renal regulation of extracellular calcium homeostasis [728]. This is seen most clearly in patients with loss-of-function CaS receptor mutations, who develop familial hypocalciuric hypercalcaemia (heterozygous mutations) or neonatal severe hyperparathyroidism (heterozygous, compound heterozygous or homozygous mutations) [728], and in Casr null mice [307,803], which exhibit similar increases in PTH secretion and blood calcium levels. Gain-of-function CaS mutations are associated with autosomal dominant hypocalcaemia and Bartter syndrome type V [728]. The CaS receptor primarily couples to Gq/11, G12/13 and Gi/o [418,634,836,1954], but in some cell types can couple to Gs [1258]. However, the CaS receptor can form heteromers with Class C GABAB [308,327] and mGlu1/5 receptors [595], which may introduce further complexity in its signalling capabilities. Multiple other small-molecule chemotypes are positive and negative allosteric modulators of the CaS receptor [980,1441]. Further, etelcalcetide is a novel peptide agonist of the receptor [2059]. Agonists and positive allosteric modulators of the CaS receptor are termed Type I and Type II calcimimetics, respectively, and can suppress parathyroid hormone (PTH, P01270) secretion [1443]. Negative allosteric modulators are called calcilytics and can act to increase PTH (PTH, P01270) secretion [1442]. Where functional pKB values are provided for allosteric modulators, this refers to ligand affinity determined in an assay that measures a functional readout of receptor activity (i.e. a receptor signalling assay), as opposed to affinity determined in a radioligand binding assay. The functional pKB may differ depending on the signalling pathway studied. Consult the 'More detailed page' for the assay description, as well as other functional readouts.

Cannabinoid receptors

Overview: Cannabinoid receptors (nomenclature as agreed by the NC-IUPHAR Subcommittee on Cannabinoid Receptors [1564]) are activated by endogenous ligands that include N-arachidonoylethanolamine (anandamide), N-homo-γ-linolenoylethanolamine, N-docosatetra-7,10,13,16-enoylethanolamine and 2-arachidonoylglycerol. Potency determinations of endogenous agonists at these receptors are complicated by the possibility of differential susceptibility of endogenous ligands to enzymatic conversion [35]. There are currently three licensed cannabinoid medicines, each of which contains a compound that can activate CB1 and CB2 receptors [1562]. Two of these medicines were developed to suppress nausea and vomiting produced by chemotherapy.
These are nabilone (Cesamet®), a synthetic CB1/CB2 receptor agonist, and synthetic Δ9-tetrahydrocannabinol (Marinol®; dronabinol), which can also be used as an appetite stimulant. The third medicine, Sativex®, contains mainly Δ9-tetrahydrocannabinol and cannabidiol, both extracted from cannabis, and is used to treat multiple sclerosis and cancer pain [1852]. Anandamide is also an agonist at vanilloid receptors (TRPV1) and PPARs [1484]. There is evidence for an allosteric site on the CB1 receptor [1603]. All of the compounds listed as antagonists behave as inverse agonists in some bioassay systems [1564]. For some cannabinoid receptor ligands, additional pharmacological targets that include GPR55 and GPR119 have been identified [1564]. Moreover, GPR18, GPR55 and GPR119, although showing little structural similarity to CB1 and CB2 receptors, respond to endogenous agents that are structurally similar to the endogenous cannabinoid ligands [1564].

Chemerin receptor

Overview: The chemerin receptor (nomenclature as recommended by NC-IUPHAR [414]) is activated by the lipid-derived, anti-inflammatory ligand resolvin E1 (RvE1), which is the result of sequential metabolism of EPA by aspirin-modified cyclooxygenase and lipoxygenase [60,61]. In addition, two GPCRs for resolvin D1 (RvD1) have been identified: FPR2/ALX, the lipoxin A4 receptor, and GPR32, an orphan receptor [1052].

Chemokine receptors

Overview: Chemokine receptors (nomenclature as agreed by the NC-IUPHAR Subcommittee on Chemokine Receptors) comprise a large subfamily of 7TM proteins that bind one or more chemokines, a large family of small cytokines typically possessing chemotactic activity for leukocytes. Chemokine receptors can be divided by function into two main groups: G protein-coupled chemokine receptors, which mediate leukocyte trafficking, and "atypical chemokine receptors", which may signal through non-G protein-coupled mechanisms and act as chemokine scavengers to downregulate inflammation or shape chemokine gradients [81]. Chemokines in turn can be divided by structure into four subclasses by the number and arrangement of conserved cysteines. CC (also known as β-chemokines; n = 28), CXC (also known as α-chemokines; n = 17) and CX3C (n = 1) chemokines all have four conserved cysteines, with zero, one and three amino acids separating the first two cysteines, respectively. C chemokines (n = 2) have only the second and fourth cysteines found in other chemokines. Chemokines can also be classified by function into homeostatic and inflammatory subgroups. Most chemokine receptors are able to bind multiple high-affinity chemokine ligands, but the ligands for a given receptor are almost always restricted to the same structural subclass. Most chemokines bind to more than one receptor subtype. Receptors for inflammatory chemokines are typically highly promiscuous with regard to ligand specificity, and may lack a selective endogenous ligand. G protein-coupled chemokine receptors are named according to the class of chemokines bound, whereas ACKR is the root acronym for atypical chemokine receptors [82]. Listed are those human agonists with EC50 values < 50 nM in either Ca2+ flux or chemotaxis assays at human recombinant G protein-coupled chemokine receptors expressed in mammalian cell lines. There can be substantial cross-species differences in the sequences of both chemokines and chemokine receptors, and in the pharmacology and biology of chemokine receptors. Endogenous and microbial non-chemokine ligands have also been identified for chemokine receptors.
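The cysteine-spacing rule set out above (zero, one or three residues between the first two conserved cysteines for CC, CXC and CX3C chemokines, respectively) can be made concrete with a toy classifier. This is a sketch for illustration only: real chemokine assignment uses the full conserved motif, C chemokines (which lack the first and third cysteines) cannot be distinguished by this simple test, and the function name is hypothetical.

import re

def chemokine_subclass(mature_sequence):
    # Count the residues between the first two cysteines of the mature peptide.
    match = re.search(r"C([^C]*)C", mature_sequence)
    if match is None:
        return "unclassified"
    spacing = len(match.group(1))
    return {0: "CC", 1: "CXC", 3: "CX3C"}.get(spacing, "unclassified")

print(chemokine_subclass("SAKELRCQCIKTY"))  # CXCL8 N-terminal fragment -> "CXC"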
Many chemokine receptors function as HIV co-receptors, but CCR5 is the only one demonstrated to play an essential role in HIV/AIDS pathogenesis. The tables include both standard chemokine receptor names [2191] and aliases. Numerical data quoted are typically pKi or pIC50 values.

Cholecystokinin receptors

Overview: Cholecystokinin receptors (nomenclature as agreed by the NC-IUPHAR Subcommittee on CCK receptors [1471]) are activated by the endogenous peptides cholecystokinin-8 (CCK-8 (CCK, P06307)), CCK-33 (CCK, P06307), CCK-58 (CCK, P06307) and gastrin (gastrin-17 (GAST, P01350)). There are only two distinct subtypes of CCK receptors, CCK1 and CCK2 receptors [1038,2073], with some alternatively spliced forms most often identified in neoplastic cells. The CCK receptor subtypes are distinguished by their peptide selectivity: the CCK1 receptor requires the carboxyl-terminal heptapeptide-amide that includes a sulfated tyrosine for high affinity and potency, while the CCK2 receptor requires only the carboxyl-terminal tetrapeptide shared by all CCK and gastrin peptides. These receptors have characteristic and distinct distributions, with both present in the central nervous system and peripheral tissues.

Comments: While a cancer-specific CCK receptor has been postulated to exist, which might also be responsive to incompletely processed forms of CCK (Gly-extended forms), this has never been isolated. An alternatively spliced form of the CCK2 receptor in which intron 4 is retained, adding 69 amino acids to the intracellular loop 3 (ICL3) region, has been described to be present particularly in certain neoplasms where mRNA mis-splicing has been commonly observed [1833], but it is not clear that this receptor splice form plays a special role in carcinogenesis. Another alternative splicing event for the CCK2 receptor was reported [1850], with alternative donor sites in exon 4 resulting in long (452 amino acids) and short (447 amino acids) forms of the receptor differing by five residues in ICL3; however, no clear functional differences have been observed.

Class Frizzled GPCRs

Overview: Class Frizzled receptors (nomenclature as agreed by the NC-IUPHAR subcommittee on the Class Frizzled GPCRs [300]) are highly conserved across species. While SMO shows structural resemblance to the 10 FZDs, it is functionally separated, as it mediates effects in the Hedgehog signalling pathway [1747]. FZDs are activated by WNTs, which are cysteine-rich lipoglycoproteins with fundamental functions in ontogeny and tissue homeostasis. FZD signalling was initially divided into two pathways, being either dependent on the accumulation of the transcription regulator β-catenin (CTNNB1, P35222) or being β-catenin-independent (often referred to as canonical vs. non-canonical WNT/FZD signalling, respectively). WNT stimulation of FZDs can, in cooperation with the LDL receptor-related proteins LRP5 (O75197) and LRP6 (O75581), lead to the inhibition of a constitutively active destruction complex, which results in the accumulation of β-catenin and subsequently its translocation to the nucleus. β-Catenin, in turn, modifies gene transcription by interacting with TCF/LEF transcription factors. β-Catenin-independent FZD signalling is far more complex with regard to the diversity of the activated pathways. WNT/FZD signalling can lead to the activation of heterotrimeric G proteins [447], the elevation of intracellular calcium [1828], activation of cGMP-specific PDE6 [19] and elevation of cAMP, as well as RAC-1, JNK, Rho and Rho kinase signalling [730]. Furthermore, the phosphoprotein Dishevelled constitutes a key player in WNT/FZD signalling.
As with other GPCRs, members of the Frizzled family are functionally dependent on the arrestin scaffolding protein for internalization [321], as well as for β-catenin-dependent [242] and -independent [243,986] signalling. The pattern of cell signalling is complicated by the presence of additional ligands, which can enhance or inhibit FZD signalling (secreted Frizzled-related proteins (sFRP), Wnt-inhibitory factor (WIF1, Q9Y5W5) (WIF), sclerostin (SOST, Q9BQB4) or Dickkopf (DKK)), as well as modulatory (co)receptors such as Ryk, ROR1, ROR2 and Kremen, which may also function as independent signalling proteins. Selective antagonists: vismodegib (pKi 7.8, at SMO) [2065].

Comments: There is limited knowledge about WNT/FZD specificity and which molecular entities determine the signalling outcome of a specific WNT/FZD pair. Understanding of the coupling to G proteins is incomplete (see [447]). There is also a scarcity of information on basic pharmacological characteristics of FZDs, such as binding constants, ligand specificity or concentration-response relationships [984].

Complement peptide receptors

Comments: SB290157 has also been reported to have agonist properties at the C3a receptor [1282]. The putative chemoattractant receptor termed C5a2 (also known as GPR77 or C5L2) binds [125I]C5a with no clear signalling function, but has a putative role opposing inflammatory responses [267,599,616]. Binding to this site may be displaced with the rank order C5a des-Arg (C5) > C5a (C5, P01031) [267,1508], while there is controversy over the ability of C3a (C3, P01024) and C3a des-Arg (C3, P01024) to compete [817,936,937,1508]. C5a2 appears to lack G protein signalling and has been termed a decoy receptor [1753]. However, C5a2 does recruit arrestin after ligand binding, which might provide a signalling pathway for this receptor [94,2015], and it forms heteromers with C5a1. C5a, but not C5a des-Arg, induces upregulation of heteromer formation between the complement C5a receptors C5a1 and C5a2 [395]. There are also reports of pro-inflammatory activity of C5a2, mediated by HMGB1, but the signalling pathway that underlies this is currently unclear (reviewed in [1161]). More recently, work in T cells has shown that C5a1 and C5a2 act in opposition to each other and that altering the equilibrium between the two receptors, by differential expression or production of C5a des-Arg (which favours C5a2), can affect the final cellular response [57].

Corticotropin-releasing factor receptors

Comments: A CRF binding protein (CRHBP, P24387) has been identified, to which both corticotrophin-releasing hormone (CRH, P06850) and urocortin 1 (UCN, P55089) bind with high affinity; this protein has been suggested to bind and inactivate circulating corticotrophin-releasing hormone (CRH, P06850) [1558].
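The antagonist affinities tabulated below for dopamine receptor ligands are pKi values, typically derived from radioligand competition binding. Where a primary report gives an IC50 instead, the standard Cheng-Prusoff correction converts it to Ki (a sketch assuming simple competitive binding at equilibrium; the numbers are illustrative, not taken from the tables):

\[ K_i = \frac{\mathrm{IC}_{50}}{1 + [L]/K_d} \]

where [L] is the free radioligand concentration and Kd its equilibrium dissociation constant. For example, an IC50 of 10 nM measured with the radioligand at [L] = Kd gives Ki = 10/(1 + 1) = 5 nM, i.e. pKi ~8.3.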
Dopamine receptors

(Table fragment) Endogenous agonist: dopamine [252,573,1725,1897,1962]. Agonists: fenoldopam [1962]; rotigotine [448], cabergoline (partial agonist) [1337], aripiprazole (partial agonist) [2199], bromocriptine [573,1337,1725], MLS1547 (biased agonist) [572], ropinirole [766], apomorphine (partial agonist) [252,573,1337,1725,1844], pramipexole [1332,1725], benzquinamide [677], quinpirole [252,1332,1539,1844,1846,2019]. Selective agonists: SKF-83959 (biased agonist) [377], SKF-81297 [47]; sumanirole (rat) [1301]. Antagonists: flupentixol (pKi 7-8.4) [1897,1962], blonanserin (pKi 9.9) [1487], pipotiazine (pKi 9.7) [1845], perphenazine (pKi 8.9-9.6) [1055,1761], risperidone (pKi 9.4) [64], perospirone (pKi 9.2) [1762], trifluoperazine (pKi 8.9-9) [1055,1763]. Sub/family-selective antagonists: SCH-23390 (pKi 7.4-9.5) [1897,1962], SKF-83566 (pKi 9.5) [1897], ecopipam (pKi 8.3) [1963], haloperidol (pKi 7.4-8.8) [573,1230,1332,1844,1963]. Selective antagonists: L-741,626 (pKi 7.9-8.5) [688,1069], domperidone (pKi 7.[value truncated]).

Comments: The selectivity of many of these agents is less than two orders of magnitude.

Endothelin receptors

Comments: Splice variants of the ETA receptor have been identified in rat pituitary cells; one of these, ETAR-C13, appeared to show loss of function with plasma membrane expression comparable to the wild-type receptor [748]. Subtypes of the ETB receptor have been proposed, although gene disruption studies in mice suggest that only a single gene product exists [1350].

G protein-coupled estrogen receptor

Overview: The G protein-coupled estrogen receptor (GPER, nomenclature as agreed by the NC-IUPHAR Subcommittee on the G protein-coupled estrogen receptor [1607]) was identified following observations of estrogen-evoked cyclic AMP signalling in breast cancer cells [65], which mirrored the differential expression of an orphan 7-transmembrane receptor, GPR30 [276]. There are observations of both cell-surface and intracellular expression of the GPER receptor [1647,1953].

Further reading on G protein-coupled estrogen receptor: Prossnitz ER et al. (2015).

Free fatty acid receptors

Overview: Short-chain fatty acids (including C4 (butyric acid) and C5 (pentanoic acid)) activate FFA2 [231,1117,1465] and FFA3 [231,1117] receptors. The crystal structure of agonist-bound FFA1 has been described [1862]. Agonists: propanoic acid [231,1117,1465,1741], acetic acid [231,1117,1465,1741], butyric acid [231,1117,1465,1741], trans-2-methylcrotonic acid [1741], 1-methylcyclopropanecarboxylic acid [1741]. Selective agonists: AMG-837 [1176], compound 4 [347], TUG-770 [346], TUG-905 [345], GW9508 (partial agonist) [222], fasiglifam [935,1434,1862,1985] [795,1372]. GPR42 was originally described as a pseudogene within the family (ENSFM00250000002583), but the discovery of several polymorphisms suggests that some versions of GPR42 may be functional [1167]. GPR84 is a structurally unrelated G protein-coupled receptor which has been found to respond to medium-chain fatty acids [2067].

GABAB receptors

Overview: Functional GABAB receptors (nomenclature as agreed by the NC-IUPHAR Subcommittee on GABAB receptors [199,1579]) are formed from the heterodimerization of two similar 7TM subunits termed GABAB1 and GABAB2 [199,506,1578,1579,2002]. GABAB receptors are widespread in the CNS and regulate both pre- and postsynaptic activity. The GABAB1 subunit, when expressed alone, binds both antagonists and agonists, but the affinity of the latter is generally 10-100-fold less than for the native receptor.
Co-expression of GABAB1 and GABAB2 subunits allows transport of GABAB1 to the cell surface and generates a functional receptor that can couple to signal transduction pathways such as high-voltage-activated Ca2+ channels (Cav2.1, Cav2.2) or inwardly rectifying potassium channels (Kir3) [147,199,200]. The GABAB1 subunit harbours the GABA (orthosteric) binding site within an extracellular domain (ECD) [199,622,624,1578]. The two subunits interact by direct allosteric coupling [1367], such that GABAB2 increases the affinity of GABAB1 for agonists and, reciprocally, GABAB1 facilitates the coupling of GABAB2 to G proteins [622,1060,1578]. GABAB1 and GABAB2 subunits assemble in a 1:1 stoichiometry by means of a coiled-coil interaction between α-helices within their carboxy-termini that masks an endoplasmic reticulum retention motif (RXRR) within the GABAB1 subunit, but other domains of the proteins also contribute to their heteromerization [147,250,1578]. Recent evidence indicates that higher-order assemblies of GABAB receptors comprising dimers of heterodimers occur in recombinant expression systems and in vivo, and that such complexes exhibit negative functional cooperativity between heterodimers [373,1577]. Adding further complexity, KCTD (potassium channel tetramerization) proteins 8, 12, 12b and 16 associate as tetramers with the carboxy terminus of the GABAB2 subunit to impart altered signalling kinetics and agonist potency to the receptor complex [108,1751,1990], reviewed by [1580]. The molecular complexity of GABAB receptors is further increased through association with trafficking and effector proteins. The GABAB1a and GABAB1b isoforms, which are most prevalent in neonatal and adult brain tissue respectively, differ in their ECD sequences as a result of the use of alternative transcription initiation sites. GABAB1a-containing heterodimers localise to distal axons and mediate inhibition of glutamate release at CA3-CA1 terminals, and of GABA release onto layer 5 pyramidal neurons, whereas GABAB1b-containing receptors occur within dendritic spines and mediate slow postsynaptic inhibition [1613,2035]. Only the 1a and 1b variants are identified as components of native receptors [199]. Additional GABAB1 subunit isoforms have been described in rodents and humans [1130] and reviewed by [147]; see also [199,580,581]. Radioligand KD values relate to binding to rat brain membranes. CGP 71872 is a photoaffinity ligand for the GABAB1 subunit [128]. CGP27492 (3-APPA), CGP35024 (3-APMPA) and CGP 44532 act as antagonists at human GABAA ρ1 receptors, with potencies in the low micromolar range [580]. In addition to the ligands listed in the table, Ca2+ binds to the VFT (Venus flytrap) domain of the GABAB1 subunit to act as a positive allosteric modulator of GABA [594]. Synthetic positive allosteric modulators with low, or no, intrinsic activity include CGP7930, GS39783, BHF-177 [2040] and (+)-BHFF [9,147,154,580]. The site of action of CGP7930 and GS39783 appears to be on the heptahelical domain of the GABAB2 subunit [483,1578]. In the presence of CGP7930 or GS39783, CGP 35348 and 2-hydroxy-saclofen behave as partial agonists [580]. A negative allosteric modulator of GABAB activity has been reported [318]. Knockout of the GABAB1 subunit in C57BL/6 mice causes the development of severe tonic-clonic convulsions that prove fatal within a month of birth, whereas GABAB1-/- BALB/c mice, although also displaying spontaneous epileptiform activity, are viable.
The phenotype of the latter animals additionally includes hyperalgesia, hyperlocomotion (in a novel, but not familiar, environment), hyperdopaminergia, memory impairment and behaviours indicative of anxiety [510,2008]. A similar phenotype has been found for GABAB2-/- BALB/c mice [613].

Galanin receptors

In humans, galanin is a 30 amino-acid, non-amidated peptide [525]; in other species, it is 29 amino acids long and C-terminally amidated. Amino acids 1-14 of galanin are highly conserved in mammals, birds, reptiles, amphibia and fish. Shorter peptide species (e.g. human galanin-(1-19) [143] and porcine galanin-(5-29) [1809]) and N-terminally extended forms (e.g. seven- and nine-residue N-terminally elongated forms of porcine galanin [144,1809]) have been reported. Galanin-(1-11) is a high-affinity agonist at GAL1/GAL2 (pKi 9), and galanin-(2-11) is selective for GAL2 and GAL3 compared with GAL1 [1212]. [Tyr26]galanin binds to all three subtypes, with Kd values generally reported to range from 0.05 to 1 nM, depending on the assay conditions used [552,1821,1834,1835,2070]. Porcine galanin-(3-29) does not bind to cloned GAL1, GAL2 or GAL3 receptors, but a receptor that is functionally activated by porcine galanin-(3-29) has been reported in pituitary and gastric smooth muscle cells [691,2142]. Additional galanin receptor subtypes are also suggested from studies with chimeric peptides (e.g. M15, M35 and M40), which act as antagonists in functional assays in the cardiovascular system [2000] (see also [552,1834,1835]). Recent studies have described the synthesis of a series of novel, systemically active galanin analogues with modest preferential binding at the GAL2 receptor. Specific chemical modifications to the galanin backbone increased brain levels of these peptides after i.v. injection, and several of these peptides exerted a potent antidepressant-like effect in mouse models of depression [1698].

Ghrelin receptor

Overview: The ghrelin receptor (nomenclature as agreed by the NC-IUPHAR Subcommittee for the Ghrelin receptor [415]) is activated by a 28 amino-acid peptide originally isolated from rat stomach, where it is cleaved from a 117 amino-acid precursor (GHRL, Q9UBU3). The human gene encoding the precursor peptide has 83% sequence homology to rat preproghrelin, although the mature peptides from rat and human differ by only two amino acids [1285]. Alternative splicing results in the formation of a second peptide, [des-Gln14]ghrelin (GHRL, Q9UBU3), with equipotent biological activity [822]. A unique post-translational modification (octanoylation of Ser3, catalysed by ghrelin O-acyltransferase (MBOAT4, Q96T53) [2170]) occurs in both peptides and is essential for full activity in binding to ghrelin receptors in the hypothalamus and pituitary, and for the release of growth hormone from the pituitary [1029]. Structure-activity studies showed the first five N-terminal amino acids to be the minimum required for binding [122], and receptor mutagenesis has indicated overlap of the ghrelin binding site with those for small-molecule agonists and allosteric modulators of ghrelin (GHRL, Q9UBU3) function [814]. In cell systems, the ghrelin receptor is constitutively active [815], but this is abolished by a naturally occurring mutation (A204E) that results in decreased cell-surface receptor expression and is associated with familial short stature [1527]. See [121], which raises the possible existence of different receptor subtypes in peripheral tissues and the central nervous system.
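The ghrelin receptor comments continue below with agonist potencies quoted as pD2 values, i.e. pD2 = -log10 EC50. Under the simplest Hill-Langmuir view (a sketch; measured EC50 values also reflect receptor reserve and efficacy, so EC50 need not equal the agonist dissociation constant KA), the fraction of receptors occupied by an agonist A is:

\[ \rho = \frac{[A]}{[A] + K_A}, \qquad \mathrm{pD}_2 = -\log_{10}\mathrm{EC}_{50} \]

so the pD2 of 8.3 quoted below corresponds to EC50 = 10^{-8.3} M, i.e. approximately 5 nM.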
A potent inverse agonist has been identified ([D-Arg1,D-Phe5,D-Trp7,9,Leu11]substance P, pD2 8.3 [812]). Ulimorelin, described as a ghrelin receptor agonist (pKi 7.8 and pD2 7.5 at human recombinant ghrelin receptors), has been shown to stimulate ghrelin receptor-mediated food intake and gastric emptying but not to elicit release of growth hormone, or to modify ghrelin-stimulated growth hormone release, thus pharmacologically discriminating the orexigenic and gastrointestinal actions of ghrelin (GHRL, Q9UBU3) from the release of growth hormone [567]. A number of selective antagonists have been reported, including peptidomimetics [1393] and non-peptide small molecules including GSK1614343 [1556,1699].

Glucagon receptor family

Comments: The glucagon receptor has been reported to interact with receptor activity-modifying proteins (RAMPs), specifically RAMP2, in heterologous expression systems [349], although the physiological significance of this has yet to be established.

Gonadotrophin-releasing hormone receptors

Overview: GnRH1 and GnRH2 receptors (provisional nomenclature [557]; also called Type I and Type II GnRH receptors, respectively [1342]) have been cloned from numerous species, most of which express two or three types of GnRH receptor [1341,1342,1810]. GnRH I (GNRH1, P01148) (p-Glu-His-Trp-Ser-Tyr-Gly-Leu-Arg-Pro-Gly-NH2) is a hypothalamic decapeptide also known as luteinizing hormone-releasing hormone, gonadoliberin, luliberin, gonadorelin or simply as GnRH. It is a member of a family of similar peptides found in many species [1341,1342,1810], including GnRH II (GNRH2, O43555) (pGlu-His-Trp-Ser-His-Gly-Trp-Tyr-Pro-Gly-NH2), which is also known as chicken GnRH-II. Receptors for three forms of GnRH exist in some species, but only GnRH I and GnRH II and their cognate receptors have been found in mammals [1341,1342,1810]. GnRH1 receptors are expressed by pituitary gonadotrophs, where they mediate the effects of GnRH on gonadotropin hormone synthesis and secretion that underpin central control of mammalian reproduction. GnRH analogues are used in assisted reproduction and to treat steroid hormone-dependent conditions [981]. Notably, agonists cause desensitization of GnRH-stimulated gonadotropin secretion, and the consequent reduction in circulating sex steroids is exploited to treat hormone-dependent cancers of the breast, ovary and prostate [981]. GnRH1 receptors are selectively activated by GnRH I and all lack the COOH-terminal tails found in other GPCRs. GnRH2 receptors do have COOH-terminal tails and (where tested) are selective for GnRH II over GnRH I. GnRH2 receptors are expressed by some primates but not by humans [1377]. Phylogenetic classifications divide GnRH receptors into three [1342] or five groups [2117] and highlight examples of gene loss through evolution, with humans retaining only one ancient gene.

GPR18, GPR55 and GPR119

Overview: GPR18, GPR55 and GPR119 (provisional nomenclature), although showing little structural similarity to CB1 and CB2 cannabinoid receptors, respond to endogenous agents analogous to the endogenous cannabinoid ligands, as well as some natural/synthetic cannabinoid receptor ligands [1564].
Although there are multiple reports to indicate that GPR18, GPR55 and GPR119 can be activated in vitro by N-arachidonoylglycine, lysophosphatidylinositol and N-oleoylethanolamide, respectively, there is a lack of evidence for activation by these lipid messengers in vivo. These receptors therefore retain their orphan status.

Comments: The pairing of N-arachidonoylglycine with GPR18 was not replicated in two studies based on arrestin assays [1854,2182]; see [414] for discussion. See reviews [414] and [1800]. In addition to those shown above, further small-molecule agonists have been reported [722].

Comments: GPR18 failed to respond to a variety of lipid-derived agents in an in vitro screen [2182], but has been reported to be activated by Δ9-tetrahydrocannabinol [1308]. GPR55 responds to AM251 and rimonabant at micromolar concentrations, compared to their nanomolar affinity as CB1 receptor antagonists/inverse agonists [1564]. It has been reported that lysophosphatidylinositol acts at other sites in addition to GPR55 [2164]. N-Arachidonoylserine has been suggested to act as a low-efficacy agonist/antagonist at GPR18 in vitro [1306]. It has also been suggested that oleoyl-lysophosphatidylcholine acts, at least in part, through GPR119 [1466]. Although PSN375963 and PSN632408 produce GPR119-dependent responses in heterologous expression systems, comparison with N-oleoylethanolamide-mediated responses suggests additional mechanisms of action [1466].

Hydroxycarboxylic acid receptors

Comments: Further closely related GPCRs include the 5-oxoeicosanoid receptor (OXER1, Q8TDS5) and GPR31 (O00270). Lactate activates HCA1 on adipocytes in an autocrine manner; it inhibits lipolysis and thereby promotes anabolic effects. HCA2 and HCA3 regulate adipocyte lipolysis and immune functions under conditions of increased FFA formation through lipolysis (e.g. during fasting). HCA2 agonists acting mainly through the receptor on immune cells exert antiatherogenic and anti-inflammatory effects. HCA2 is also a receptor for butyrate and mediates some of the beneficial effects of short-chain fatty acids produced by gut microbiota.

Leukotriene receptors

The human BLT1 receptor is the high-affinity LTB4 receptor, whereas the BLT2 receptor, in addition to being a low-affinity LTB4 receptor, also binds several other lipoxygenase products, such as 12S-HETE, 12S-HPETE, 15S-HETE and the thromboxane synthase product 12-hydroxyheptadecatrienoic acid. The BLT receptors mediate chemotaxis and immunomodulation in several leukocyte populations and are in addition expressed on non-myeloid cells, such as vascular smooth muscle and endothelial cells. In addition to BLT receptors, LTB4 has been reported to bind to the peroxisome proliferator-activated receptor (PPAR) α [1178] and the vanilloid TRPV1 ligand-gated non-selective cation channel [1307]. The receptors for the cysteinyl-leukotrienes (i.e. LTC4, LTD4 and LTE4) are termed CysLT1 and CysLT2 and exhibit distinct expression patterns in human tissues, mediating, for example, smooth muscle cell contraction, regulation of vascular permeability and leukocyte activation. There is also evidence in the literature for additional CysLT receptor subtypes, derived from functional in vitro studies, radioligand binding and studies in mice lacking both CysLT1 and CysLT2 receptors [258]. Cysteinyl-leukotrienes have also been suggested to signal through the P2Y12 receptor [570,1473,1534], GPR17 [359] and GPR99 [943].
The FPR2/ALX receptor is activated by lipoxin A4 [330], as well as by annexin I (ANXA1, P04083) and its N-terminal peptides [379,1560]. In addition, a soluble hydrolytic product of protease action on the urokinase-type plasminogen activator receptor has been reported to activate the FPR2/ALX receptor [1646]. Furthermore, FPR2/ALX has been suggested to act as a receptor mediating the proinflammatory actions of the acute-phase reactant serum amyloid A [1840,1883]. The agonist activity of the lipid mediators described has been questioned [732,1585], which may derive from batch-to-batch differences, partial agonism or biased agonism. Recent results from Cooray et al. (2013) [379] have addressed this issue and the role of homodimers and heterodimers in intracellular signalling. A receptor selective for LXB4 has been suggested from functional studies [58,1232,1670]. Note that the data for FPR2/ALX are also reproduced on the Formylpeptide receptor pages.

Oxoeicosanoid receptors (OXE, nomenclature agreed by the NC-IUPHAR subcommittee on Oxoeicosanoid Receptors [219]) are activated by endogenous chemotactic eicosanoid ligands oxidised at the C-5 position, with 5-oxo-ETE the most potent agonist identified for this receptor. Initial characterization of the heterologously expressed OXE receptor suggested that polyunsaturated fatty acids, such as docosahexaenoic acid and EPA, acted as receptor antagonists [823].

Further reading on Leukotriene receptors: Bäck M et al. (2011).

Lysophospholipid (LPA) receptors

Overview: Lysophosphatidic acid (LPA) receptors (nomenclature as agreed by the NC-IUPHAR Subcommittee on Lysophospholipid receptors [414,983]) are activated by the endogenous phospholipid metabolite LPA. The first receptor, LPA1, was identified as ventricular zone gene-1 (vzg-1), leading to deorphanisation of members of the endothelial differentiation gene (edg) family as other LPA receptors, along with sphingosine 1-phosphate (S1P) receptors. Additional LPA receptor GPCRs were later identified. Gene names have been codified as LPAR1, etc., to reflect the receptor function of the proteins. The crystal structure of LPA1 was recently solved and demonstrates extracellular LPA access to the binding pocket, consistent with proposed delivery via autotaxin. These studies have also implicated cross-talk with endocannabinoids via phosphorylated intermediates that can also activate these receptors. The identified receptors can account for most, although not all, LPA-induced phenomena in the literature, indicating that a majority of LPA-dependent phenomena are receptor-mediated. Radioligand binding has been conducted in heterologous expression systems using [3H]LPA (e.g. [586]). In native systems, analysis of binding data is complicated by metabolism and high levels of non-specific binding, and therefore the relationship between recombinant and endogenously expressed receptors is unclear. Targeted deletion of LPA receptors has clarified signalling pathways and identified physiological and pathophysiological roles. Independent validation by multiple groups has been reported in the peer-reviewed literature for all six LPA receptors described in the tables, including further validation using a distinct read-out via a novel TGFα "shedding" assay [864]. LPA has also been described as an agonist for the transient receptor potential (Trp) ion channels TRPV1 [1461] and TRPA1 [1012]. In addition, orphan GPCRs (PSP24 [547] and GPR87 [1488]) have been proposed as LPA receptors. LPA was originally proposed to be a ligand for GPR35, but recent data show that GPR35 is in fact a receptor for CXCL17 (CXCL17, Q6UXB2) [1266]. Further, the nuclear hormone receptor PPARγ [1309,1812] has been reported as an LPA receptor.
All of these proposed entities require confirmation and are not currently recognized as bona fide LPA receptors.

Lysophospholipid (S1P) receptors

Overview: Sphingosine 1-phosphate (S1P) receptors (nomenclature as agreed by the NC-IUPHAR Subcommittee on Lysophospholipid receptors [983]) are activated by the endogenous lipid sphingosine 1-phosphate (S1P) and, with lower apparent affinity, by sphingosylphosphorylcholine (SPC). Originally cloned as orphan members of the endothelial differentiation gene (edg) family, deorphanisation as lysophospholipid receptors for S1P was based on sequence homology to LPA receptors. Current gene names have been codified as S1PR1, etc., to reflect the receptor function of these proteins. Most cellular phenomena ascribed to S1P can be explained by receptor-mediated mechanisms; S1P has also been described to act at intracellular sites [1915], although these actions await precise definition. Previously proposed SPC (or lysophosphatidylcholine) receptors (G2A, TDAG8, OGR1 and GPR4) continue to lack confirmation of these roles [414]. The relationship between recombinant and endogenously expressed receptors is unclear. Radioligand binding has been conducted in heterologous expression systems using [32P]S1P (e.g. [1505]). In native systems, analysis of binding data is complicated by metabolism and high levels of non-specific binding. Targeted deletion of several S1P receptors and of key enzymes involved in S1P biosynthesis or degradation has clarified signalling pathways and physiological roles. A crystal structure of an S1P1-T4 fusion protein has been described [733]. The S1P receptor modulator fingolimod (FTY720, Gilenya) has received worldwide approval as the first oral therapy for relapsing forms of multiple sclerosis. This drug has a novel mechanism of action involving modulation of S1P receptors in both the immune and nervous systems [340,369,687], although the precise nature of its interaction requires clarification.

Melatonin receptors

The MT3 binding site of hamster brain and peripheral tissues such as kidney and testis, also termed the ML2 receptor, selectively binds 2-[125I]iodo-5MCA-NAT [1356]. Pharmacological investigations of MT3 binding sites have primarily been conducted in hamster tissues. At this site, the endogenous ligand N-acetylserotonin [495,1215,1356,1588] and 5MCA-NAT [1588] appear to function as agonists, while prazosin [1215] functions as an antagonist. The MT3 binding site of hamster kidney was also identified as the hamster homologue of human quinone reductase 2 (NQO2, P16083) [1474,1475]. The MT3 binding site activated by 5MCA-NAT in the eye ciliary body is positively coupled to adenylyl cyclase and regulates chloride secretion [842]. Xenopus melanophores and chick brain express a distinct receptor (x420, P49219; c346, P49288, initially termed Mel1C) coupled to the Gi/o family of G proteins, for which GPR50 has recently been suggested to be a mammalian counterpart [479], although melatonin does not bind to GPR50 receptors. Several variants of the MTNR1B gene have been associated with increased type 2 diabetes risk.

Metabotropic glutamate receptors

The structures of the extracellular domains of several mGlu receptors have been solved [1075,1364,1408,1984]. The structures of the 7-transmembrane (TM) domains of both mGlu1 and mGlu5 have also been solved, and confirm a general helical organization similar to that of other GPCRs, although the helices appear more compacted [465,2136]. mGlu receptors form constitutive dimers cross-linked by a disulfide bridge.
Although mGlu receptors were long thought to form only homodimers, recent studies have revealed the possible formation of heterodimers, either between group-I receptors or within and between group-II and group-III receptors [468]. Although well characterised in transfected cells, co-localisation and specific pharmacological properties also suggest the existence of such heterodimers in the brain [2183]. The endogenous ligands of mGlu receptors are L-glutamic acid, L-serine-O-phosphate, N-acetylaspartylglutamate (NAAG) and L-cysteine sulphinic acid. Group-I mGlu receptors may be activated by 3,5-DHPG and (S)-3HPG [204] and antagonised by (S)-hexylhomoibotenic acid [1235]. Group-II mGlu receptors may be activated by LY389795 [1365], LY379268 [1365], eglumegad [1744,2138], DCG-IV and (2R,3R)-APDC [1745], and antagonised by eGlu [890] and LY307452 [518,2096]. Group-III mGlu receptors may be activated by L-AP4 and (R,S)-4-PPG [610]. An example of an antagonist selective for mGlu receptors is LY341495, which blocks mGlu2 and mGlu3 at low nanomolar concentrations, mGlu8 at high nanomolar concentrations, and mGlu4, mGlu5, and mGlu7 in the micromolar range [1001]. In addition to orthosteric ligands that directly interact with the glutamate recognition site, allosteric modulators that bind within the TM domain have been described. Negative allosteric modulators are listed separately. The positive allosteric modulators most often act as 'potentiators' of an orthosteric agonist response, without significantly activating the receptor in the absence of agonist.

(Tables of nomenclature and ligands for the individual mGlu receptor subtypes list L-glutamic acid [1574] as an endogenous agonist at all subtypes, with NAAG [1750] additionally listed at mGlu3 and L-serine-O-phosphate [1254,2138] at group-III subtypes; agonists and selective agonists listed include PCCG-4, LSP4-2022 [666], (S)-3,4-DCPG [1952], L-AP4 [1254,2138], 1-benzyl-APDC [1987] and further agents acting at mGlu5 receptors [48].)

Although a number of radioligands have been used to examine binding in native tissues, correlation with individual subtypes is limited. Many pharmacological agents have not been fully tested across all known subtypes of mGlu receptors. Potential differences linked to the species of the receptors (e.g. human versus rat or mouse) and to receptor splice variants are generally not known. The influence of receptor expression level on pharmacology and selectivity has not been controlled for in most studies, particularly those involving functional assays of receptor coupling. (S)-(+)-CBPG is an antagonist at mGlu1, but an agonist (albeit of reduced efficacy) at mGlu5 receptors. DCG-IV also exhibits agonist activity at NMDA glutamate receptors [2007], and is an antagonist at all group-III mGlu receptors with an IC50 of 30 μM. A potential novel metabotropic glutamate receptor coupled to phosphoinositide turnover has been observed in rat brain; it is activated by 4-methylhomoibotenic acid (ineffective as an agonist at recombinant group-I metabotropic glutamate receptors) but is resistant to LY341495 [356].
There are also reports of a distinct metabotropic glutamate receptor coupled to phospholipase D in rat brain, which does not readily fit into the current classification [1013,1549]. A related class C receptor composed of two distinct subunits, T1R1 + T1R3, is also activated by glutamate and is responsible for umami taste detection. All selective antagonists at metabotropic glutamate receptors are competitive.

Further reading on Metabotropic glutamate receptors: Conn PJ et al. (1997).

Motilin receptor
G protein-coupled receptors → Motilin receptor
Overview: The motilin receptor is activated by motilin (MLN, P12872), a peptide derived from a precursor which may also generate a motilin-associated peptide (MLN, P12872). These receptors promote gastrointestinal motility and are suggested to be responsible for the gastrointestinal prokinetic effects of certain macrolide antibiotics (often called motilides; e.g. erythromycin), although for many of these molecules the evidence is sparse.

(Table: motilin receptor; endogenous agonist: motilin (MLN, P12872) [386,1286,1287,1288]; agonists: alemcinal [1947], erythromycin-A [533,1947], azithromycin [225]; selective agonists: camicinal [105,1712], mitemcinal [1023,1918].)

Comments: In terms of structure, the motilin receptor has closest homology with the ghrelin receptor. Thus, the human motilin receptor shares 52% overall amino acid identity with the ghrelin receptor and 86% in the transmembrane regions [759,1918,1947]. However, differences between the N-terminal regions of these receptors mean that their cognate peptide ligands do not readily activate each other's receptors [408,1712]. In laboratory rodents, the gene encoding the motilin precursor appears to be absent, while the receptor appears to be a pseudogene [759,1710]. Functions of motilin (MLN, P12872) are not usually detected in rodents, although brain and other responses to motilin and the macrolide alemcinal have been reported, and the mechanism of these actions is obscure [1311,1462]. Notably, in some non-laboratory rodents (e.g. the North American kangaroo rat (Dipodomys) and kangaroo mouse (Microdipodops)), a functional form of motilin may exist but the motilin receptor is non-functional [1159]. Marked differences in ligand affinities for the motilin receptor in dogs and humans may be explained by significant differences in receptor structure [1711]. Note that for the complex macrolide structures, selectivity of action has often not been rigorously examined and other actions are possible (e.g. P2X inhibition by erythromycin [2216]). Small molecule motilin receptor agonists have now been described [1159,1712,2100]. The motilin receptor does not appear to have constitutive activity [812]. Although not proven, the existence of biased agonism at the receptor has been suggested [1288,1348,1709]. A truncated 5-transmembrane structure has been identified, but this is without activity when transfected into a host cell [533]. Receptor dimerisation has not been reported.

Neuromedin U receptors
Overview: Neuromedin U (NMU) receptors are activated by the endogenous peptide neuromedin U; neuromedin S-33 (NmS-33) has also been identified as an endogenous agonist [1378]. NmS-33 is, as its name suggests, a 33 amino-acid product of a precursor protein derived from a single gene, and contains an amidated C-terminal heptapeptide identical to that of NmU. NmS-33 appears to activate NMU receptors with potency equivalent to that of NmU-25.

(Table: NMU1 and NMU2 receptors; R-PSOP (pKB 7) [1193] is listed as an antagonist at NMU2, with none listed at NMU1.)

Comments: NMU1 and NMU2 couple predominantly to Gq/11, although there is evidence of good coupling to Gi/o [218,825,833].
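A brief note on the affinity notation appearing in these tables (standard pharmacological convention rather than anything specific to this text): pKB, like pIC50, is the negative base-10 logarithm of a molar concentration, so the pKB of 7 quoted for R-PSOP at NMU2 corresponds to

```latex
\mathrm{p}K_{\mathrm{B}} = -\log_{10} K_{\mathrm{B}}, \qquad \mathrm{p}K_{\mathrm{B}} = 7 \;\Rightarrow\; K_{\mathrm{B}} = 10^{-7}\ \mathrm{M} = 100\ \mathrm{nM}.
```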
Neuropeptide S receptor: The human NPS receptor Ile107 variant displays higher agonist potency (approximately 10-fold) than the human NPS receptor Asn107 [1645]. Several epidemiological studies have reported an association between the Asn107Ile receptor variant and susceptibility to panic disorders [458,460,1506,1621]. The Asn107Ile SNP has also been linked to sleep behavior [662], inflammatory bowel disease [402], schizophrenia [1145], and increased impulsivity and ADHD symptoms [1083]. Interestingly, a carboxy-terminal splice variant of the human NPS receptor was found to be overexpressed in asthmatic patients [1091].

Neuropeptide W/neuropeptide B receptors
Overview: The NPBW1 receptor is activated by the endogenous peptides neuropeptide W-23 (NPW, Q8N729) and neuropeptide B-23 (NPB, Q8NG41) [584,1792]. C-terminally extended forms of the peptides (neuropeptide W-30 (NPW, Q8N729) and neuropeptide B-29 (NPB, Q8NG41)) also activate NPBW1 [216]. Unique to both forms of neuropeptide B is the N-terminal bromination of the first tryptophan residue, and it is from this post-translational modification that the nomenclature NPB is derived. These peptides were first identified from bovine hypothalamus and are therefore classed as neuropeptides. Endogenous variants of the peptides without the N-terminal bromination, des-Br-neuropeptide B-23 (NPB, Q8NG41) and des-Br-neuropeptide B-29 (NPB, Q8NG41), were not found to be major components of bovine hypothalamic tissue extracts. The NPBW2 receptor is activated by the short and C-terminally extended forms of neuropeptide W and neuropeptide B [216,2080]. It has been reported that neuropeptide W may have a key role in the gating of stressful stimuli when mice are exposed to novel environments [1392]. Two antagonists with affinity for NPBW1 have been discovered and reported, ML181 and ML250, the latter exhibiting improved selectivity (approximately 100-fold) for NPBW1 over MCH1 receptors [694,695]. Computational insights into the binding of antagonists to this receptor have also been described [1541].

Neuropeptide Y receptors
Overview: Neuropeptide Y (NPY) receptors (nomenclature as agreed by the NC-IUPHAR Subcommittee on Neuropeptide Y receptors [1330]) are activated by the endogenous peptides neuropeptide Y (NPY, P01303), peptide YY (PYY, P10082), their N-terminally truncated fragments, and pancreatic polypeptide (PPY, P01298) (PP). The receptor originally identified as the Y3 receptor has been identified as the CXCR4 chemokine receptor (originally named LESTR [1201]). The y6 receptor is a functional gene product in the mouse, absent in the rat, and contains a frameshift mutation in primates producing a truncated non-functional gene [676]. Many of the agonists exhibit differing degrees of selectivity depending on the species examined. For example, the potency of PP is greater at the rat Y4 receptor than at the human receptor [513]. In addition, many agonists lack selectivity for individual subtypes, but can exhibit comparable potency against pairs of NPY receptor subtypes, or have not been examined for activity at all subtypes.

Neurotensin receptors
Comments: Neurotensin (NTS, P30990) appears to be a low-efficacy agonist at the NTS2 receptor [2039], while the NTS1 receptor antagonist meclinertant is an agonist at NTS2 receptors [2039]. An additional protein, provisionally termed NTS3 (also known as NTR3, gp95 and sortilin; ENSG00000134243), has been suggested to bind lipoprotein lipase and mediate its degradation [1460]. It has been reported to interact with the NTS1 receptor [1273] and the NTS2 receptor [260], and has been implicated in hormone trafficking and/or neurotensin uptake. A splice variant of the NTS2 receptor bearing 5 transmembrane domains has been identified in the mouse [195] and later in the rat [1561].

Further reading on Neurotensin receptors: Boules M et al.

Opioid receptors
Overview: Opioid receptors are activated by a family of endogenous opioid peptides; endomorphin-1 and endomorphin-2 are also potential endogenous peptides. The Greek letter nomenclature for the opioid receptors, μ, δ and κ, is well established, and NC-IUPHAR considers this nomenclature appropriate, along with the symbols spelled out (mu, delta and kappa) and the acronyms MOP, DOP and KOP [390,441,557].
The human N/OFQ receptor, NOP, is considered 'opioid-related' rather than opioid because, while it exhibits a high degree of structural homology with the conventional opioid receptors [1361], it displays a distinct pharmacology. There are currently numerous clinically used drugs, such as morphine and many other opioid analgesics, as well as antagonists such as naloxone; however, these act only at the μ receptor.

(Table: μ receptor agonists listed include etorphine [1972], ethylketocyclazocine [1972], levorphanol [727], hydromorphone [2094], fentanyl [1972], buprenorphine (partial agonist) [1972], methadone [1595], codeine [1972] and tapentadol [1992].)

Comments: Although splice variants of the opioid receptors have been described [1535], these putative isoforms have not been correlated with any of the subtypes of receptor proposed in years past. Opioid receptors may heterodimerise with each other or with other 7TM receptors [926] and give rise to complexes with a unique pharmacology; however, evidence for such heterodimers in native cells is equivocal, and the consequences of this heterodimerisation for signalling remain largely unknown. For μ-opioid receptors at least, dimerisation does not seem to be required for signalling [1078]. A distinct met-enkephalin receptor lacking structural resemblance to the opioid receptors listed has been identified (OGFR, Q9NZT2) and termed an opioid growth factor receptor [2198]. Endomorphin-1 and endomorphin-2 have been identified as highly selective, putative endogenous agonists for the μ-opioid receptor. At present, however, the mechanisms for endomorphin synthesis in vivo have not been established, and no gene has been identified that encodes either peptide. Thus, the status of these peptides as endogenous ligands remains unproven. Two areas of increasing importance in defining opioid receptor function are the presence of functionally relevant single nucleotide polymorphisms in human μ-receptors [1490] and the identification of biased signalling by opioid receptor ligands, in particular compounds previously characterised as antagonists [236]. Pathway bias for agonists makes general rank orders of potency and efficacy somewhat obsolete, so these do not appear in the tables. As ever, the mechanisms underlying the acute and long-term regulation of opioid receptor function are the subject of intense investigation and debate. The richness of opioid receptor pharmacology has been enhanced by the recent discovery of allosteric modulators of μ and δ receptors, notably the positive allosteric modulators and silent allosteric "antagonists" outlined in [247,248]. Negative allosteric modulation of opioid receptors has been suggested previously [953]; whether all compounds are acting at a similar site remains to be established.

Further reading on Opioid receptors: Butelman ER et al. (2012).

Orexin receptors
Comments: The primary coupling of orexin receptors to Gq/11 proteins is rather speculative and based on the strong activation of phospholipase C, though recent studies in recombinant CHO cells also stress the importance of Gq/11 [1065]. Coupling of both receptors to Gi/o and Gs has also been reported [951,1068,1146,1629]; for most cellular responses observed, the G protein pathway is unknown. The potency order of the endogenous ligands may depend on the cellular signal transduction machinery. Most of the OX2 receptor-selective antagonists listed are weakly selective (≤10-fold), or their selectivity may be less than 100-fold or not unequivocally established [1177].
Antagonists of the orexin receptors are the focus of a major drug discovery effort for their potential to treat insomnia and other disorders of wakefulness [1668], while agonists would likely be useful in human narcolepsy.

Further reading on Somatostatin receptors: Colao A et al.

Succinate receptor
Overview: Nomenclature as recommended by NC-IUPHAR [414]. The succinate receptor was identified in 2004 as being activated by physiological levels of the Krebs cycle intermediate succinate and by other dicarboxylic acids such as maleate. Since its pairing with its endogenous ligand, the receptor has been the focus of intensive research, and its role has been evidenced in various (patho)physiological processes such as the regulation of renin production, retinal angiogenesis and the immune response.

(Table: succinate receptor; HGNC: SUCNR1; UniProt: Q9BXA5; endogenous agonist: succinic acid [762,1854].)

Comments: In humans there is the possibility of two open reading frames (ORFs) for SUCNR1, allowing the generation of 330 or 334 amino-acid proteins. Wittenberger et al. [2127] noted that the 330 amino-acid protein was more likely to be expressed, given the Kozak sequence surrounding the second ATG. Some databases report SUCNR1 as being 334 amino acids long.

Further reading on VIP and PACAP receptors: Harmar AJ et al. (1998).
The Relevance of Small Parties: From a General Framework to the Czech "Opposition Agreement"*

This paper has two primary aims. The first is to briefly outline the general question of the relevance of small parties. The second aim is, in this context, to characterise the interesting situation that has emerged in the Czech Republic since the early elections to Parliament were held in June 1998. Following the elections, the two strongest parties (ČSSD and ODS) reached a written agreement with the purpose of limiting the influence of small parties as well as making it easier to form a government. A modification of the electoral system for the Chamber of Deputies, as well as a significant strengthening of these two large parties at the expense of the small ones, would bring about a situation not particularly harmful to Czech democracy; in fact, quite the opposite, it would contribute to the consolidation of democracy. Czech Sociological Review, 2000, Vol. 36 (No. 1: 27-47)

Introduction

The role of small parties has at present become a discussion topic of particular importance. In the Czech political scene, this fact stems from the emergence of what is known as the "agreement on creating a stable political environment" (often referred to as the "opposition agreement"), which was concluded by the Civic Democratic Party (ODS) of Václav Klaus and the Social Democratic Party (ČSSD) of Miloš Zeman in July 1998. The two largest parties (ČSSD and ODS) appear to have reached an agreement aimed, among other things, at limiting the influence of small parties (especially KDU-ČSL and US). It seems as though the conflict here is set between the large parties on the one hand and the small parties on the other. In fact, the small and very small parties even united to form a type of four-party coalition (KDU-ČSL, US, ODA, DEU) in the November 1998 Senate by-elections in order to enable them to compete against the two largest parties.

In many western democracies, party fragmentation1 has been on the rise over the last three decades, which means that other parties have asserted themselves more permanently, such as the Green parties (in connection with the "silent revolution" [Inglehart 1977]) and extreme right-wing populist parties (in the context of the "silent counter-revolution" [Ignazi 1992]). While as recently as the 1960s the party system appeared to be "frozen", since that time it seems to have been slowly "defrosting". Moreover, a number of regional and ethnic parties have started to attract significant attention in western democracies, and these are most often also small parties when considered from a statewide (federal) point of view. All this increases the interest of studying the role of small parties in contemporary democracies.

In this paper I will attempt to briefly outline the general issue concerning the relevance of small parties, and in this light try to characterise the situation that has existed in the Czech Republic since the early elections to the lower chamber of Parliament were held in 1998, following which the two largest parties (ČSSD and ODS) concluded a written agreement with the intention of limiting the influence of small parties and facilitating the formation of government majorities.
Size, Relevance and the Number of Parties

The size of a party is often regarded as synonymous with its strength. A large party is thus a strong party, and a small party is a weak one. But we know of course that small parties can in fact play a disproportionately great role. This brings us to the question of the "relevance"2 of small parties. When and why are small parties relevant? According to G. Sartori [1976, see also Sartori 1994 and Ware 1996: 148-149], a relevant party is one that either has coalition potential, i.e. it is a party that must be taken into account when a majority government is to be formed, or one that has blackmail potential, i.e. its presence in the political arena considerably alters the nature of the political competition (the arena is transformed from being centripetal into being centrifugal). Coalition potential has more to do with small parties than large ones, while blackmail potential is rather more characteristic of the relatively large parties, which either lack coalition potential altogether or possess this potential to a slight degree only (this has been the case of the Italian Communists, particularly in the 1950s). However, blackmail potential may also be found among small anti-system parties given certain circumstances (particularly when difficulties arise in setting up a majority government).

1) In Western Europe the average number of parties was initially around 6.1; in the period 1970-1974 the average increased to 7.4(!) and over the ensuing periods continued to increase. With the exception of the period 1980-1984, when the average was 7.6, the average continued to 8.0 and up to the highest average, recorded in the last researched period, 1995-1997: 9.2! [Lane and Ersson 1999: 142].

2) Both the smallness of a party and its relevance must be distinguished from one another, and those parties that for some reason are not relevant (for example due to the existing electoral system) should not be excluded from the category of small parties. Similarly, relevant parties should not be excluded from being small parties. We cannot consider the German Liberals (FDP) a "large" party just because for many decades it has been a "relevant" party. The coalition potential of a party also absolutely cannot be used as a criterion for distinguishing between large and small parties, which is the approach considered by A. Laurent, who claims that, from the viewpoint of ties maintained with other political formations, a party can be termed small if it is unable to conclude election agreements with other partners prior to the elections [Laurent and Villalba 1997: 25]. Laurent moreover takes into consideration only pre-election alliances, which is too limiting for the criterion of coalition potential. She further correctly points to situations in which a certain party has itself refused to enter into an alliance, for a time true of the Greens.

The collective work Small Parties in Western Europe [Müller-Rommel and Pridham 1991] and the more recent French publication Les petits partis [Laurent and Villalba 1997] are two international academic publications devoted especially to the issue of small parties and the role they play in political systems. In the English publication, P.
Mair introduces a list of 157 parties in Western Europe that managed to gain between 1 and 15 percent of the popular vote in national elections between 1947 and 1987. Political scientists focusing on comparative studies worked together with specialists on individual countries in order to compile the collection of papers, in one of which Müller-Rommel distinguishes between four approaches providing a possible framework for analysis:

1) A conceptional definitional approach, which addresses the question of defining what small parties are and how to classify them within the European party systems.

2) A numerical and party family approach, which focuses on election trends and the results of various "families" of small parties.

3) A diachronic approach, which analyses the developmental cycles of small parties, their formation, successes, and possible extinction.

4) A systemic approach, which looks into the role and performance of small parties in relation to the state, society, and other political parties within the same political system.

What follows is founded mainly on the conceptional definitional approach, but it also makes use of the systemic approach. In the context of the conceptional definitional approach, the question of the political "relevance" of small parties is of particular importance.

Specialists are unable to agree on a single set of criteria for determining which parties can be considered "small". Mair [1991] examines small parties in Western Europe whose "smallness" is defined by a level of support below 15% of the popular vote. Blondel [1968, see also Blondel 1990] further distinguishes between small parties and very small parties, whereby in his terms small parties maintain around 10% of the vote. In addition to large and small parties, Mair also refers to ephemeral or micro-parties, which have only weak election results (approximately 1%), and which in the period his research focused on were witnessed in only three cases. However, it is necessary to stress that not only small parties but even very small parties can sometimes be relevant (for example in Israel).

S. Fisher [1974] concludes that a party can be defined as small when at elections it is usually unable to attain a higher position than third place. J. Charlot, the recently deceased French specialist on political parties, was of a similar opinion, writing that a party which is not situated among the top three should be considered a small party [Charlot 1974]. But criteria of this type cannot be applied in a general manner. In particular, such criteria need not necessarily be applicable to those situations that Blondel refers to as "pure" multipartism, in which there is no dominant party. For instance in Switzerland, for many years a configuration of four main parties has managed to co-exist on the federal level: three leading parties that are roughly equal in size (the Radicals, FDP/PRD; the Socialists, SPS/PSS; and the Christian Democrats, CVP/PDC), and a fourth largest party (SVP/UDC) that also cannot be considered minor.3
M. Duverger [1981: 383] is probably correct in pointing out that the classification of parties according to their size is only a rough measure, and is significant only when the differences in size also express a difference in substance. The basic distinction between parties according to size is not, in the view of Duverger [1981: 384], between large and small parties, but rather between those parties with a majority vocation and all the others: not only small, but also medium-sized and even some large parties.

While all other kinds of parties defined according to size have a strong tendency toward demagogy and irresponsibility, a party with a majority vocation is almost necessarily realistic. Its programme must pass through a "reality check". Clearly marked out and limited reforms usually move to the forefront of the programme and are far more likely to be included in it than are revolutionary principles, which are difficult to realise. In short, the party with a majority vocation is entirely focused on realisable activity and possesses a well-developed sense for realistic politics.

The rest of the parties are aware that their programmes cannot be compared with what they will actually do in reality, because it is clear that they will never be in power alone. They will at best be forced to share power with their allies, at the very least in the form of parliamentary support. As a result, they will always be able to lay the responsibility for failure to fulfil their programmes on the shoulders of their coalition partners. The absence of practical sanctions and the absence of "reality checks" enable them, due to the electoral effect, to irresponsibly demand any type of reform, including that which is ultimately unattainable.

Moreover, the necessity of making compromises with coalition partners leads parties toward the uncompromising extremes of exaggeration as they attempt to forge more space for retreat. Each of the coalition partners will have a tendency to follow the principle: ask for more so you can at least get something. Parties without a majority vocation thus logically drift into demagogy.

It is indisputable that one of the aims of a well functioning political democracy is the curtailment of demagogy and exaggerated propaganda. The operation of parties with a majority vocation (there may even be two such parties within the party system at one time) can also be found among established and relatively homogeneous coalitions of parties, which themselves may also have a majority vocation. Thus the two-party system is not a precondition. It is enough for there to be a system in which one party with a majority vocation competes with an established alliance which also has a majority vocation, or a system in which two stable coalitions, each with a majority vocation, compete against each other.4 This kind of established, relatively homogeneous coalition or alliance may even include a small party as well.

3) In the 1999 federal elections the SVP/UDC, which had taken on the characteristics of right-wing populist parties, achieved a great election victory and moved the Christian Democrats (CVP/PDC) down to fourth place.
Relevant small parties are compatible with a well functioning democracy when they are integrated into the bipolar game of alternation [on alternation see Quermonne 1991 and 1995]. One might suggest that we are ignoring the frequently mentioned alternative to bipolar alternation, the broad or "grand" coalition, recommended by the theorist behind consociational democracy and the consensus model of democracy, Arend Lijphart. But we are not ignoring this at all. The broad coalition, which has demonstrated functional longevity only in the case of Switzerland, restricts the influence of small parties: small parties are in this system no longer relevant on a statewide (federal) level.

The size of parties is tied to the number of parties in a given system. The likelihood that small parties will exist increases along with the increase in the number of parties.

Criteria for Determining the Size and Strength of Parties

The existence of different electoral systems can change the criteria by which a party may be considered small. For example, in Great Britain the Liberals are a small party in terms of seats in the lower house of Parliament, but they are often not a small party in terms of their share of the vote, which brings us to the question of the criteria involved in determining the size of a party.

If we wish to quantify the size and strength of a party, a variety of quantifying criteria is available, relating to four different arenas: (1) membership strength, meaning the number of members and activists; (2) the size of the electorate, meaning the percentage of voters; (3) parliamentary size, meaning the percentage of seats in Parliament; (4) government strength (which we will deal with later in more detail).

The first criterion is of little use. By its standards, in 1998 the second smallest party in the Chamber of Deputies after the Freedom Union (US) was the Social Democratic Party (ČSSD), which however in the elections of that year became the strongest party. The number of members may be used to judge the development of one party, or to compare one party with the size of others that are similar. The French political scientists Jean and Monica Charlot further point out that the number of active members of a party can be used to measure its capability for political mobilisation, thus its "militant strength" (in the original French, "force militante") [Charlot and Charlot 1985: 499].

4) It is possible to ask whether the "two-and-a-half party system" (or imperfect bipartism) falls into this type of game, as seen in the example of Germany in the 1970s and 1980s. A bipolar configuration, alternation and a relatively moderate character of political competition could genuinely be supported, but at the same time the pendulum game of the FDP reduced that which Schumpeter indirectly [1962: 272-273] and Popper directly [1988] rightly considered to be essential to a democracy, namely that citizens have the possibility, by means of free elections, to freely unseat the existing government. Which of the large parties, i.e. the Social Democrats or the Christian Democrats (SPD or CDU-CSU), would be the government and which the opposition was not so much decided
in Germany by the citizens through elections, but rather by the leaders of the small liberal party (FDP), and sometimes even during the legislative term, when they decided to transfer from one coalition (with the SPD) to another (with the CDU-CSU). Without denying the negative aspects of the two-and-a-half party system (especially from the viewpoint of the "non-classical" theory of democracy), it is possible with certain reservations to accept that this party system, in which the small party of the centre plays a significant role, works relatively well. However, there is no doubt that the two-party and two-coalition configurations function more satisfactorily (the latter on the condition that the coalitions are established and relatively homogeneous), as does the configuration in which an established alliance stands against one large party. In this case it can be asked how the German party system will function now that there are, since the 1970s, not only the Greens, but following unification also the East German post-Communists. The more pessimistic scenario is the fear that the system will tend to approach polarised pluralism and that the extreme post-Communists will be able to use blackmailing tactics. The more optimistic scenario is the prediction that in the Republic two moderate coalitions will alternate: (1) the Social Democrats with the Greens, against (2) the Christian Democrats with the Liberals.

As for the second and third criteria, if we were analysing the evolution of popular opinion we would of course place more emphasis on the voter alone. However, as we are interested in understanding the role of parties within the state (or more generally within a given system, a category which can include, for example, the German Land, the Swiss canton, or the European Union), we will interpret the size of a party as meaning, in agreement with Duverger [1981: 381-382], its parliamentary size, i.e. the percentage of seats it holds in the lower chamber. Parliamentary strength also influences electoral strength. Voters would otherwise in time probably be disappointed to realise that the party they cast their votes for is being disadvantaged by the electoral system.

The strength of parties (rather than their size) can also be measured in the arena of government, which for our purposes is even more important than the arena of Parliament. K. Janda [1980], in his overview of parties worldwide, characterised the governing strength of a party on the basis of two variables: (1) "Cabinet Participation", defined as the average number of ministerial posts held by a given party, and (2) "Government Leadership", defined as the average number of years over time during which a given party has occupied the highest post of executive power (the prime minister or president, depending on the type of regime). It is obvious that this "governing strength" is rather more a matter of "relevance" than of "size". If a small party has produced more than one prime minister, does that mean it ceases to be small? Surely not. It is simply a party that is relevant.
The size of a party in the arena of Parliament and the strength (or rather relevance) of a party in the arena of government are closely related to its coalition potential. This dependency is (1) material, which means that election coalitions can substantially increase the number of seats a party gains in Parliament. Here we can present two Czech examples, the first represented by KDS in 1992. Without the government coalition with the largest party (ODS), a situation advantageous to KDS, it would not have gained entry into the Czech National Council (which after the division of the Czecho-Slovak federation became the Chamber of Deputies of the Czech Republic).

In the case of the Czech Republic, elections to the Chamber of Deputies set the limiting threshold at 5% for individual parties and increase it in the case of election coalitions. This means that very small parties (such as KDS was) cannot form a coalition only among themselves, but must instead seek another, larger partner (a large, medium, or at the very least small party, but not another very small party).

The second Czech example can be seen in the November 1998 Senate by-elections, in which small and very small parties formed a successful four-party coalition (KDU-ČSL, US, ODA, DEU) composed of two small (KDU-ČSL, US) and two very small (ODA, DEU) parties. Of course we should remember that the Senate elections employ the double-ballot majority system.

(2) The dependency is also political, which means that a parliamentary and governing alliance increases or decreases the weight, or strength, of a party. According to Duverger, in the French National Assembly between 1946 and 1951 the Communist party was less influential with its 163 deputies than the Radical party with only 45 deputies. The Communists were isolated and alone, while the Radicals made use of their position in the centre of the political spectrum to establish combinations and agreements.

In the Czech Republic during the second half of the 1990s, KDU-ČSL had the greatest coalition potential. However, it failed to make use of this strength after the elections to the Chamber of Deputies in June 1998, and for the first time in a long time found itself in opposition.

Big Roles for Small Parties

Small parties can at times occupy a decisive position, providing them with considerable influence, when the difference in the number of parliamentary seats between the two main parties or alliances is very small. A coalition shift by the small party is enough to alter the political balance; a classic example is that of the German Liberals (FDP). For a long period, from the 1970s to the 1990s, they managed to maintain a position in government and occupy its key posts, while the two large parties, the CDU and the SPD, remained dependent on them. Thus the fate of the country was left in the hands of a small party. This situation of course depends on the character of the small party. Nonetheless, two fundamental questions present themselves: (1) Is this small party anti-system in nature and dangerous for democracy? (2) Does this small party have any kind of genuine social base?5
One of the ways to escape from being blackmailed by a small party in this manner is for the two largest parties to form a coalition between themselves, which in post-World War II Austria was often just the case. The large parties (the Social Democrats, SPÖ, and the People's Party, ÖVP) considered a coalition with the third party (FPÖ) an unacceptable possibility, since in their eyes this party was nationalistic and extreme. (This type of coalition is not a grand coalition in the strict sense of the word: from the viewpoint of coalition theory it is instead usually a "minimal winning coalition", or more precisely a variant thereof, which unites the smallest number of parties necessary to win, i.e. to gain the majority of the deputy seats in the lower chamber of Parliament.) Empirical evidence and theoretical explanations demonstrate that this type of "grand coalition" does not usually work well. A grand coalition blocks the political system; it prevents alternation and does not leave any institutional space for protest against the existing politics.6 In this way it also contributes to an extra-institutional overflow, an extreme example of which is organised terrorism, for instance in Germany at the turn of the 1960s [Braud 1997: 180].

5) If a small party has a genuine social base, there is no choice but to accept this and to admit measures facilitating representation for such small parties. For example in Germany, the 5% threshold is not applied to parties of national minorities, which relates mainly to the Danish minority in northern Schleswig-Holstein. The German court in its constitutional verdict no. 32 of April 5, 1952, distinguished between, on the one hand, the fragmentary parties (Splitterparteien), which have an insignificant number of votes and no local rooting (ohne ortlichen Schwerpunkt), i.e. they are scattered throughout the country and have no strong base in a single area, and on the other hand the minority parties (Minderheitsparteien), which the Danish party SSW in Schleswig-Holstein actually is. The decision of the German constitutional court shows that this party of the Danish minority concentrates one fifth of the votes in a part of the country with a specific historical fate, which is geographically clearly defined and whose cultural character is partly formed by the national minority. This kind of party is not a fragmentary party (Splitterpartei) [see Klokočka 1991: 175].

6) This is fortunately not true for Switzerland, where the considerable federal decentralisation affects political parties as well, so that the deputies of "government" parties criticise their ministers with ease, and where in addition the elements of direct or semi-direct democracy facilitate the legal expression of disagreement and protest.

It may be asked whether or not it is propitious to democracy when a small party representing only a very limited circle of voters plays such a large role in the political system. A party that has gained only five percent of the popular vote or even less is in a position to determine which of the large parties will be in government and which in opposition; moreover, it is even able to do this throughout the course of the legislative period, i.e. without any reference to elections. Governments become quite fragile and the large parties are left dependent on the small ones.
A small party can dictate its conditions because the large party is unable to govern alone. In this way it can abuse its position and select among the potential coalition partners the one which is most likely to retreat or which makes the best offer, and the small party can then, for instance, even acquire the seat of prime minister. In June 1998 it was in fact the post of prime minister that the then leader of the People's Party (KDU-ČSL), Josef Lux, was striving for in his wish to form a coalition with the Social Democratic Party (ČSSD) and the small Freedom Union (US) following the 1998 elections to the Chamber of Deputies.

The likelihood of establishing a clear and homogeneous majority decreases when small parties play a disproportionately large role, and the political accountability of the government toward the citizens-voters is diluted. As K. R. Popper [1988] pointed out, the voter is then to a great extent stripped of the power to sanction the existing governing team.

In practice this is a matter of deciding whether to give an advantage to the strongest party (or durable alliance) and thus increase the chances of a stable and accountable government, or whether to let the small parties play a disproportionate role and use their power to unscrupulously blackmail the large parties and topple the government whenever they wish.

Moreover, the representativeness that supporters of the proportional system put such an emphasis on is only expressed in the forum of Parliament. However, in terms of the exercise of power, the government is of more significance than Parliament; Parliament primarily plays the role of checking power. Insofar as any (though usually only very rough) proportionality in government is concerned, it is an altogether rare phenomenon (lastingly it has occurred only in Switzerland since 1959, where of course the small parties have been permanently excluded from the federal executive), and most often it is an expression of some sort of crisis situation (for instance in Great Britain during both World Wars, when governments of national unity were formed, or in Austria, where over recent decades the Social Democrats (SPÖ) have generally governed along with the People's Party (ÖVP) because of the difficulties involved in forming a coalition with the third party (FPÖ), which is extreme and nationalistic).

Small Parties in the Czech Political Scene

On the basis of the two previous elections to the Chamber of Deputies of the Czech Republic (the regular elections in 1996 and the early elections in 1998), the following configuration emerged: the two largest parties (the right-wing ODS and the Social Democratic ČSSD) attained roughly 30% of the popular vote or just short of that; one medium-large anti-system party without coalition potential (KSČM, i.e. the dogmatic Communists, who have kept their name) won just under 15% of the votes; and finally two (after the 1996 elections, three) small parties gained between 6 and 9% of the vote.
The extreme right-wing party (SPR-RSČ) was, during the legislative period of 1996-1998, a small, anti-system party with absolutely no coalition potential. The small liberal right parties (ODA in the legislative period 1996-1998, and US in the legislative period 1998-2002) could more or less be characterised as small parties of personalities, though nonetheless ideologically clearly positioned on the right, which rather lowered their coalition potential. KDU-ČSL, which on the Czech political landscape is the party with the second largest membership after the Communists (KSČM), occupies a pivotal position in the centre of the political spectrum and has had the highest coalition potential, which is usual for Christian-Democratic parties of this kind.

Overall, the Czech political scene is quite "polarised" in the sense in which Sartori uses the word, i.e. a considerable ideological (or other) distance exists between the relevant parties. This complicates the formation of a coalition not only with the anti-system party but even with other parties. For example, during the election campaign of 1998 the leader of the Social Democratic Party (ČSSD), Miloš Zeman, declared on more than one occasion that following the legislative elections the only potential coalition partner for ČSSD could be KDU-ČSL alone and no other party. The right-wing parties also limited themselves prior to the elections, particularly ODS, which affectedly "mobilised" itself against the Social Democrats during the campaign, as though the latter were a threat to the changes that had been achieved since 1989.

Both legislative elections in the Czech Republic (1996 and 1998) were followed by the formation of minority governments. The first was made up of the strongest party, ODS (29.6% of the votes, 34% of the seats), and two small parties (KDU-ČSL with 8.1% of the votes and 9% of the seats, and ODA with 6.4% of the votes and 6.5% of the seats). These three parties together were still unable to make up an absolute majority of seats (they held only 49.5% of the seats). This minority government collapsed in November 1997 after the two small parties (KDU-ČSL and ODA) withdrew from the coalition and roughly half of the deputies and several ministers left the largest party, ODS. After the government of Prime Minister Václav Klaus, the leader of ODS, was dissolved, an interim, minority "semi-caretaker" government was formed and named by the president of the Republic, composed mainly of members from smaller political parties and politicians who had left ODS, as well as seven non-affiliated members. The prime minister was Josef Tošovský, the governor of the Central Bank, who had no party affiliation (though he was formerly a long-time member of the Czechoslovak Communist Party during the grim period of "normalisation", a fact of which Czech citizens became aware only some time after he had been appointed prime minister, and then only due to the investigative efforts of some journalists).
At the end of January 1998 this government acquired the support of the Chamber of Deputies, primarily through the help of the Social Democrats (ČSSD), a party that from the end of 1997 had been coming first in all opinion polls carried out at the time. The Social Democrats, though not actually participating in this government themselves, nonetheless "tolerated" it temporarily, on the condition that early elections would be called in June 1998. In order for these early elections to be called within the agreed time frame, ČSSD, together with ODS, which also shared concerns that the interim Tošovský government might implement important irreversible measures, put forth a constitutional law on shortening the term of parliamentary mandates [see Brokl and Mansfeldová for details]. The early elections were ultimately held in 1998, and although, in accordance with all expectations, ČSSD did gain the most votes, it acquired only 37% of the seats. It was still unable to form a majority government.

The difficulties involved in setting up an efficient government and establishing stability were predicted by this author as early as 1996 [Novák 1996b], i.e. prior to the legislative elections of June 1996, as a reaction to the expressed satisfaction of the then prime minister, Václav Klaus, who emphasised the political (in fact, more precisely, governmental) stability of the Czech Republic. The elections to the Chamber of Deputies in 1996 unfortunately confirmed the author's fears concerning the uncertain future of government stability.

Although on the basis of the 1996 results the level of fragmentation did decrease ("only" six parties gained entry to the lower chamber and the effective number of parties was 4.15), and the process of crystallising the party scene in the meantime progressed strongly (on the right side of the political spectrum with the merging of KDS and ODS, and even more so on the left side of the spectrum, where ČSSD dominated sovereignly), the political scene after the 1996 elections nonetheless became substantially less "governable" than it had been after the 1992 elections. The concern [see Novák 1996c] that the configuration of the years 1992-1996 was an exceptional one and that the likely mid-term (or even long-term) future of the Czech political scene would be characterised by governmental weakness or government instability was confirmed. A significant improvement of the socio-economic situation (which in the Czech Republic, as is known, began to worsen perceptibly from 1997 on) was not to be expected. Clearly, ODS and ČSSD have difficulties in setting up the functional and efficient government which the Czech Republic particularly requires in the period of transition. A minority government in the absence of a dominant party7 is extremely fragile.
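A brief aside on the measure used above: the "effective number of parties" is presumably the standard Laakso-Taagepera index (an assumption here, since the text does not define the measure), which weights each party by its seat or vote share p_i:

```latex
N_{\mathrm{eff}} = \frac{1}{\sum_{i=1}^{n} p_i^{2}}
```

On this reading, six parliamentary parties yielding an effective number of only 4.15 reflects the concentration of seats in the two largest parties.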
7) According to Duverger's view, it appears that only the presence of a dominant party, as he defines it in his classic work on political parties, i.e. a party substantially larger than any other party in the given political system [Duverger 1981: 412-414], facilitates the formation of a long-term (stable) minority cabinet which would not have the character of a mere caretaker government. This is true in the Scandinavian countries, where the Social Democrats represent this type of dominant party. The research of Schofield seems to confirm this, according to which in Sweden during the period 1945-1987, out of the overall number of sixteen governments, ten were minority governments, and minority governments lasted there on average thirty months, i.e. two and a half years, which is an acceptable average [Schofield 1995: 251]. For a discussion of whether Duverger's definition of the dominant party differs from how, for example, Blondel interprets a dominant party, and how it differs from Sartori's "predominant" party and Schwartzenberg's "ultradominant" party [Schwartzenberg 1977: 578], as well as from the idea of parties with a "majority vocation", see my handbook [Novák 1997: 162]. Here it is enough to point out that a dominant party in the sense in which Duverger applies it can appear only once in a given system, but it need not have an absolute majority of deputy seats.

Increasing Proportionality

A process of gradually increasing proportionality in elections to the Chamber of Deputies began to express itself from this time [see Novák 1996c: 410-412]. This arose as a result of the combined mechanical and psychological effects of the 5% threshold. For a majority of more than half of the parliamentary mandates it is now necessary under the existing electoral law to gain roughly 45% of the popular vote, which is very difficult both for the Social Democrats and for a right-wing coalition, which would not be very solid at any rate.8 In addition, the extreme, anti-system parties, KSČM and SPR-RSČ, passed over the threshold without difficulty in these elections.

After the early elections to the Chamber of Deputies in June 1998 the situation changed. Both right-wing parties (ODS and ODA) had been hit by crises, from which ODA probably never recovered; the party did not even stand in the early elections to the lower chamber. A large number of deputies and ministers left ODS at the end of 1997 and the beginning of 1998, with the former minister of the interior, Jan Ruml, leading the way. He in turn founded a new formation, the Freedom Union (US), which after a promising start ultimately receded, and in the early legislative elections in June was the weakest party to cross the 5% threshold (at 8.6%) and enter the Chamber of Deputies. Conversely, after the departure of a number of its most prominent representatives, the position of the founder and leader of ODS, Václav Klaus, was further strengthened, and in the early elections ODS lost only about 2% (as compared with the previous legislative elections of 1996). With a gain of 27.7% of the popular vote it trailed the winning ČSSD (32.2%) by not even five percentage points.

Party fragmentation and polarisation were on this occasion lessened as a result of the failure of the extreme right-wing SPR-RSČ, which surprisingly did not manage to cross the 5% threshold and thus gained no seats in the lower chamber. As a result, only five parties won seats in the Chamber of Deputies, one party fewer than in 1996. Following the 1998 early elections the effective number of parliamentary parties was only 3.7, which is a positive development. The Czech party "landscape" thus moved closer to what Sartori refers to as moderate pluralism. But again, even in this case, a majority government could not be set up.

Are the Interests of Large Parties at the Same Time the Interests of the Czech Republic?
Already in 1996, prior to the legislative elections, I stressed [see Novák 1996a] that the Social Democratic Party and ODS ought to realise that it is in their interest, as well as in the interest of the Czech Republic and its future long-term government stability, to modify the law on elections to the Chamber of Deputies in order to facilitate the formation of an efficient and functional government. Otherwise, despite a number of positive circumstances, the Czech political system could sink into the "mud of centrism" (Duverger) and powerlessness. In addition to the existence of a fragile minority government, I described an even worse picture: a "grand coalition" of ODS and ČSSD which would entirely block the Czech political system, disgust the population, and throw coal onto the fires of the extreme parties.

The two largest Czech parties (ODS in particular) realised this, under the pressure of difficulties in forming government coalitions, only after the elections of 1998, and their leaderships began to consider the possibility of jointly proposing changes to the electoral law for the lower chamber. It is well known that ODS (which originally proposed a single-ballot plurality electoral system of the Westminster type) was more inclined toward these changes than were the Social Democrats, whose share of voting preferences moreover dropped considerably in the second half of 1999. The main source of conflict during the long negotiations between the two parties concerned the method of allocating seats: ODS proposed the Imperiali highest-average system, ČSSD the d'Hondt system. In the end a compromise was reached in the form of a sort of modified d'Hondt system, the effects of which lie somewhere between the d'Hondt and the Imperiali systems. The Social Democrats proposed that the new law should come into effect only as of January 1, 2002, fearing that ODS might quit the "opposition agreement", which could lead to the fall of the minority Social Democratic government and the calling of new, early elections (the next regular elections to the Chamber of Deputies should be held in 2002).

Only on January 26, 2000 did the leaderships of ODS and the Social Democrats (Václav Klaus and Miloš Zeman, as well as the deputy chairmen of the parties and of the parliamentary clubs), in an effort to append and deepen the agreement on creating a stable political environment, together sign the written "agreement no. 2 on the basic parameters of changes to the electoral system concluded by the Czech Social Democratic Party and the Civic Democratic Party". According to agreement no. 2, the number of electoral constituencies would increase from 8 to 35, the method of allocating seats would be changed from the Hagenbach-Bischoff to the modified d'Hondt system,9 and the overall number of mandates in the Chamber of Deputies would remain at 200. According to the agreement, the electoral constituencies may not cross the borders of the regions, and their borders should be formed out of the districts, which limits the possibilities of arbitrary "gerrymandering". This proposal was to be submitted by the government to the Chamber of Deputies, where it would be debated and passed no later than July 31, 2000, after which the modified electoral law would come into effect on January 1, 2002. This proposed compromise represents a profound change in comparison with the existing situation.
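To make the contested allocation methods concrete, here is a minimal sketch of highest-averages seat allocation in Python. The vote totals are invented for illustration, and the first divisor of 1.42 used for the "modified d'Hondt" variant is an assumption on our part (it corresponds to the coefficient reported for the eventual 2000 amendment, not a figure given in the text). Each seat is awarded in turn to the party with the highest quotient of its votes over its current divisor; the divisor sequence alone determines how strongly the method favours large parties.

```python
# Minimal sketch of highest-averages seat allocation; illustrative only.

def highest_averages(votes, seats, divisor):
    """Award `seats` one at a time to the party with the highest quotient
    votes[p] / divisor(number of seats p has already won)."""
    won = {p: 0 for p in votes}
    for _ in range(seats):
        best = max(votes, key=lambda p: votes[p] / divisor(won[p]))
        won[best] += 1
    return won

dhondt = lambda s: s + 1       # divisors 1, 2, 3, 4, ...
imperiali = lambda s: s + 2    # divisors 2, 3, 4, 5, ... (favours larger parties)
modified = lambda s: 1.42 if s == 0 else s + 1  # assumed first divisor of 1.42

votes = {"A": 320_000, "B": 280_000, "C": 90_000, "D": 60_000}  # hypothetical
for name, d in [("d'Hondt", dhondt), ("Imperiali", imperiali), ("modified d'Hondt", modified)]:
    print(name, highest_averages(votes, 10, d))
```

Because the Imperiali sequence makes a party's additional seats relatively cheaper, it systematically shifts mandates toward the largest parties; d'Hondt is milder, and a compromise first divisor lands between the two, which is exactly the bargaining space described above.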
As, for example, L. C. Dodd [1976] pointed out, "minimum winning cabinets" are more propitious to government stability than "undersized cabinets" (i.e. minority governments), and even more so than "oversized cabinets". If we take into consideration the fact that KSČM (the un-renamed dogmatic Communist party) has zero coalition potential and is thus excluded from all government coalitions, the following minimal winning coalitions in particular could theoretically have been considered after the Czech legislative elections of 1998:

1) ODS + KDU-ČSL + US (63 + 20 + 19 = 102 seats, i.e. 51%). This is the minimal winning coalition with the least number of seats, according to the interpretation of W. H. Riker [1962], and at the same time also the minimal connected winning coalition (the minimal coalition of parties neighbouring one another on the political spectrum), according to the interpretation of Axelrod [1973]. The reasons why this centre-right coalition (by far the most logical one on paper) has never been achieved are evident: the Freedom Union (US), headed by its then leader Jan Ruml, is to a large extent composed of politicians who in 1997 abandoned ODS, and KDU-ČSL is the party whose departure in 1997 from the then similar centre-right coalition of ODS + KDU-ČSL + ODA brought about the fall of the Klaus government. The leaders of the two small parties (Josef Lux and Jan Ruml) were among the main engineers behind the fall of the Klaus coalition minority government. Is it any wonder then that none of the leaders of the three parties mentioned here was interested in forming a coalition together?

2) ČSSD + KDU-ČSL + US (74 + 20 + 19 = 113 seats, i.e. 56.5%). The leader of KDU-ČSL at the time, Josef Lux, as well as the president of the Republic, Václav Havel, demonstrated a preference for this type of coalition, but it never came about because the right-wing Freedom Union of Jan Ruml rejected not only a government coalition but also any sort of parliamentary coalition with the Social Democrats of Miloš Zeman. It is no real loss that this planned, heterogeneous coalition composed of one left-wing party (ČSSD), one right-wing party (US) and one party from the centre (KDU-ČSL) was in the end never realised. It is not encouraging that nothing other than their shared antagonism toward the Civic Democratic Party (ODS) of Václav Klaus could have united the leaders of these three parties, J. Lux, M. Zeman and J. Ruml. A great ideological distance lay between the right-wing US and the left-wing ČSSD, not to mention the personal enmity between the leaders of these two parties. Nevertheless, it cannot be entirely denied that this is one of the possible minimal connected winning coalitions, assuming that US sits less to the right on the left-right scale than ODS, which is not entirely certain.

3) ČSSD + ODS (74 + 63 = 137 seats, i.e. 68.5%). This theoretical possibility, usually referred to (though not entirely appropriately) as the "grand coalition" because it unites the two strongest parties, would in reality be a minimal winning coalition drawing in the fewest number of parties, according to the interpretation of M. A. Leiserson [1968]. Its apparent disadvantage would be the large ideological distance between the two parties. Both parties, however, were united by a common interest in restricting the influence of the small parties, and also by the common interest of limiting the manoeuvring space of the president of the Republic, whose role in the difficulties involved in forming governments has increased and whose distaste for the large political parties has long been apparent [see Brokl and Mansfeldová 1999].
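That these three are exactly the minimal winning coalitions available once KSČM is excluded can be verified by brute force; a short sketch:

```python
from itertools import combinations

# 1998 seat totals from the text; KSCM is excluded for zero coalition potential.
seats = {"CSSD": 74, "ODS": 63, "KDU-CSL": 20, "US": 19}
MAJORITY = 101  # out of the 200-seat Chamber of Deputies

def winning(coalition):
    return sum(seats[p] for p in coalition) >= MAJORITY

for r in range(1, len(seats) + 1):
    for coalition in combinations(seats, r):
        # minimal winning: a majority that any single defection would destroy
        if winning(coalition) and all(not winning(set(coalition) - {p})
                                      for p in coalition):
            total = sum(seats[p] for p in coalition)
            print(f"{' + '.join(coalition)}: {total} seats ({total / 2:.1f}%)")
```

Running this prints precisely ČSSD + ODS (137 seats), ČSSD + KDU-ČSL + US (113 seats) and ODS + KDU-ČSL + US (102 seats).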
Representation of the Ministers of Small Parties in the Two Klaus Governments

Another circumstance playing a role here is the negative experience taken from the previous coalition governments of the centre-right (Klaus I and Klaus II). Among other factors, the distribution of ministerial portfolios was important. Let us recall that the first Klaus government was set up in 1992, at the time when the Czecho-Slovak federation was coming to an end; the federal government was by then no longer of significance, while the national governments, i.e. the Slovak government of Mečiar and the Czech government of Klaus, actually were. The Czech national government, with Václav Klaus as prime minister, was installed on July 3, 1992. After the Czecho-Slovak federation was dissolved (January 1, 1993), this government turned into the government of the new independent Czech state, at which time, on January 4, 1993, two more ministries were set up: the Ministry of Defence and the Ministry of Transportation. During the federation period these two ministries existed only on the federal level and did not form an element of the Czech government (by way of note, the Minister of Transportation became Jan Stráský, who up until the dissolution of the federation had been the prime minister of the federal government). Václav Klaus, the leader of the strongest party, ODS, became the prime minister of the Czech government, while the leaders of the small parties ODA (Jan Kalvoda) and KDU-ČSL (Josef Lux) each obtained the post of Deputy Chairman of the government. It is clear from the table that the ministerial portfolios were distributed so that ODS, which alone held roughly 63% of the coalition's seats, obtained only a small majority of them: 10 out of 19, i.e. roughly 52.5% of the overall number of ministerial posts, only one seat more than what was left for the small parties. The distribution of ministerial portfolios was thus not proportional to the parliamentary size of the coalition parties. All the small parties were overrepresented in the government, while the largest coalition party (ODS) was underrepresented, though it nonetheless had the narrowest possible absolute majority in the cabinet.

The advantage the small parties had over the largest party, ODS, was in reality even greater than the numbers would indicate. It is necessary to add, too, that KDS entered Parliament only as a result of its pre-election coalition with ODS. The very small KDS (the Christian Democratic Party of Václav Benda) had only 1-2% of the votes according to polls conducted in 1992, while the 5% threshold necessary for gaining seats in Parliament still applied! In its own way it is a paradox that it was this generosity on the part of ODS toward KDS that later proved fateful for the largest party. In an effort to survive, politicians from the miniature party KDS began to consider a fusion with ODS, which actually came about in March 1996, i.e. several months before the next legislative elections.
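Returning to the portfolio arithmetic above: a strictly proportional allocation would have entitled ODS to roughly twelve of the nineteen portfolios. A sketch using only the figures given in the text:

```python
# Figures from the text: ODS held roughly 63% of the coalition's seats
# but received 10 of the 19 ministerial portfolios in the first Klaus cabinet.
ods_seat_share = 0.63
portfolios_total = 19
ods_portfolios = 10

proportional = ods_seat_share * portfolios_total       # ~12.0 portfolios
actual_share = ods_portfolios / portfolios_total       # ~52.6% of posts

print(f"Proportional entitlement: {proportional:.1f} portfolios")
print(f"Actual allocation: {ods_portfolios} portfolios ({actual_share:.1%})")
print(f"Under-representation: {proportional - ods_portfolios:.1f} portfolios")
```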
The result was that in the conflicts that occurred from time to time between the ministers of ODS and the ministers of the small parties KDU-ČSL and ODA, the ministers from KDS had a growing interest in voting not with the ministers of the other small parties but with the ministers of the largest party.

The ministers from KDU-ČSL and ODA had difficulty coming to terms with the fact that during conflicts within the government they were easily overruled, and they decided that in future, i.e. after the elections to the Chamber of Deputies in 1996, they would demand half of the ministerial portfolios for themselves. They genuinely achieved this, but in those elections the old-new coalition (ODS + KDU-ČSL + ODA, i.e. without KDS, which had in the meantime fused with ODS) lost its absolute majority of deputy seats. This was the swan song of the second Klaus government. As is well known, it was the very next year, in 1997, that the second Klaus cabinet fell.

The "Opposition Agreement" and its Critics

It is little wonder that, after the post-election negotiations had demonstrated that relations between the right-wing ODS and the small parties of the right and centre (US, KDU-ČSL) were still tainted by events of the past and remained considerably tense, ODS decided instead to "tolerate" the minority "monochromatic" government of the Social Democrats (ČSSD), and in July 1998 reached an "agreement on creating a stable political environment" with them. According to this agreement both contracting partners must, among other things, submit within one year of signing the agreement: "A proposal for such modifications to be introduced into the constitution and other laws which more precisely determine the competence of particular constitutional organs, the process involved in their establishment, and in accordance with the constitutional principles strengthen the importance of the results emerging out of the competition between political parties". Under this arrangement, ODS did not enter into the government, nor did it commit to supporting the minority Social Democratic government in Parliament when its proposals came to a vote. ODS limited itself to recognising the right of the strongest party to establish a minority government, and it committed itself not to take part in votes of confidence (either for or against) and further not to instigate or support any motion of no confidence in the government. In this way it enabled (provided both parties maintained the aforementioned agreement) the minority government of the Social Democrats to remain in power for the entire legislative term of 1998-2002. In return, according to the agreement, the opposition party has the right to occupy, among others, the positions of speakers of both chambers of the Parliament of the Czech Republic, as well as other positions in the controlling organs of the Chamber of Deputies. In the agreement between ČSSD and ODS, two articles are of particular importance: "I. The parties named above commit themselves to respecting the right of that party of the two which is victorious in the elections to set up the government of the Czech Republic, and to express this respect through the non-participation of the deputies of the second party during government confidence voting." "VI. Each of the parties named above commits itself to not calling for government confidence votes and not making use of constitutional opportunities leading to the dissolution of the
Chamber of Deputies during the legislative period of the Chamber of Deputies, and should such proposals be submitted by another political entity, will not support the vote. In the case of votes on individual laws (including the budget) the aforementioned parties are not bound in any way."

The agreement between ČSSD and ODS, and especially their common effort to strengthen the position of large parties and weaken that of small ones, provoked a great deal of criticism and concern, as well as a variety of interpretations, in the Czech Republic. On occasion it is possible to come across the opinion in some Czech publications that the "real opposition" is formed by KDU-ČSL, US and KSČM, while the "agreement opposition" (ODS) actually represents a sort of "hidden or silent coalition" with ČSSD, and that the "agreement on creating a stable political environment" camouflages or masks the real state of affairs, which is a "grand coalition" between ODS and ČSSD [Klíma 1999: 14]. If we divide the concept of a coalition into electoral, parliamentary and governmental versions, it is clear that in the given case it is not a matter of an electoral or a governmental coalition. It is only possible to think in terms of some sort of parliamentary coalition. The analysis of voting by individual parliamentary groups (deputy clubs) in the years 1998 and 1999 moreover demonstrates that it was not the deputies from the "contracting" ODS who most frequently voted with ČSSD, but rather the deputies from the Communist Party (KSČM) and the deputies from the Christian Democratic Union (KDU-ČSL), i.e. deputies from the assumed "real opposition".10

Pehe [1999], among others, warned that the changes to the electoral law which ČSSD and ODS negotiated could as a result "lead to the return of the Communists to power". Yet the proposed modifications (increasing the number of electoral constituencies from 8 to 35, substituting the Hagenbach-Bischoff method with the modified d'Hondt system) would instead weaken the Communists, who usually represent the third largest party. The modifications that have been proposed would lead to a system which in terms of its effects approaches the single-ballot plurality system of the Westminster model. It is true that since the second half of 1999 polling agencies have recorded a drop in electoral support for the Social Democrats (ČSSD) and a parallel, large rise in electoral support for the Communist party (KSČM), which at the time was competing with ODS for first place as the favourite among Czech voters. But even if electoral support for the two largest parties (ODS and ČSSD) drastically decreased while that for the Communist party (KSČM) increased enough to surpass that of the two largest parties, it would suffice to create a (pre-)election coalition (for instance, ODS + US and/or ČSSD + KDU-ČSL, in an extreme case ČSSD + ODS) in order to marginalise the Communists, who have no potential coalition partner. Moreover, it might not even be necessary to create a pre-election coalition; it might instead be sufficient to set up a post-election (governmental or parliamentary) coalition of non-Communist parties. This would prevent any possible Communist government from obtaining the confidence of Parliament, even if in the next elections to the Chamber of Deputies, which should be held in 2002, the Communists came out in front and acquired a relative majority (plurality) of seats.11
The agreement partners, ODS and ČSSD, and eventually even other non-Communist parties, could certainly reach such an agreement on a post- or pre-election coalition. Pehe [1999] claims further that the shift in the direction of a two-party system need not be as advantageous to ODS as its representatives may think. The two-party system forces both main parties into becoming broad coalitions of an assortment of interest and political groups, leads to a certain "ideological wishy-washiness", and rules out "relatively rigid" disciplined parties with strong leaders, such as ODS under Václav Klaus currently represents. What Pehe is describing in reality relates to the American presidential regime, in which the main parties are not disciplined. It is not, however, valid for two-party systems in a parliamentary regime, as the classic example of Great Britain in particular demonstrates: there both main parties are relatively disciplined and strong leadership is also possible, and the Czech Republic of course has a parliamentary regime.

10) The extension, appending and strengthening of this agreement on creating a stable political environment is the so-called "Common declaration of the delegation of ODS and ČSSD of January 14, 2000", according to which both participating parties consider it essential to agree on five points: (1) on the state budget, (2) on the basic parameters of the change to the electoral system, (3) on the preparation of the Czech Republic for entry into the European Union, (4) on the determination of the objective conditions of tolerance of the minority government, (5) on communication between parliamentary clubs. In this "common declaration" of January 14, 2000 it is stated: "In the case of the successful completion of preparation and signing of the first part of these written agreements ODS facilitates acceptance of the state budget in the first {parliamentary} reading, and after signing the remaining agreements facilitates the final passing of the state budget for the year 2000." If the five agreements mentioned were realised, the result would be a certain shift along the continuum from purely procedural "agreements" toward a normal parliamentary coalition.

11) According to the Czech constitution a newly named government must present itself before the Chamber of Deputies within thirty days of its appointment and request a vote of confidence (article 68). If the Chamber of Deputies rejects the government's request for a vote of confidence, the government must resign (article 73). Finally, should the government refuse to resign, even though it is obliged to do so, the President must dismiss it (article 75). It is possible then to say that the Czech Republic belongs among those countries in which a newly formed government necessarily requires an explicit formal legislative investiture vote. Kaare Strøm [1990] showed correctly that the necessity of an explicit legislative investiture vote complicates the formation of minority cabinets.
The two-party system is not in any case a particularly likely outcome for the Czech Republic, not even after the (at present uncertain) change to the electoral law which the leaderships of ČSSD and ODS agreed on in January 2000. A mathematical simulation of the parliamentary elections of 1996 and 1998 [see Lebeda 1998] shows that a combination of small electoral constituencies (in this case 38) with the Imperiali highest-averages system, under the configuration of the given time, when ODS and ČSSD were in electoral terms essentially stronger than the other parties, would almost certainly have led to a two-party system in the Czech Republic. However, this is based on the assumption that the small parties would contest the elections independently, as is the case under the current electoral law. A change to the electoral law would in all likelihood bring about the formation of pre-election coalitions among the small parties, out of their interest in maintaining a position in the Chamber of Deputies. Such coalitions would prevent the emergence of a two-party system and would thus most likely lead to a two-and-a-half-party system.

Pehe himself also shows that if the small parties KDU-ČSL and US were to form a pre-election coalition they could in fact defeat both large parties. This is indeed possibly true, especially if ČSSD continues to fall, as in the polls taken during the last months of 1999 and the early months of 2000. But Pehe incorrectly refers to the success achieved by the four-party coalition (KDU-ČSL + US + ODA + DEU) formed in the 1998 Senate elections. Senate elections are held according to the double-ballot (absolute) majority system, which favours parties with high coalition potential. By contrast, the proposed modifications to the system of proportional representation, as is known, bring its effects closer to those of the single-ballot plurality electoral system, which does not help parties with high coalition potential, but rather the two largest parties, and then those parties that are strongly concentrated in a region.
If we proceed from a situation in which KDU-ČSL and US were to form a pre-election coalition (whether only between the two of them, or in the form of a four-party coalition including the very small parties ODA and DEU), and as the basis of an estimate of the electoral strength of the parties we take the development of support in the polls during the last months of 1999 and the early months of 2000, then, with the electoral law changed according to the proposal of ODS and ČSSD, in all likelihood four political entities would have a chance to gain seats in the Chamber of Deputies (three parties plus one coalition of two or four parties). Three of these (ODS, KSČM and the four-party coalition) have according to polling agencies at present around 20% of electoral support each, and the fourth (ČSSD) not quite 15%. This kind of configuration (in which the anti-system KSČM would be fighting for first place among the three strongest parties) is less favourable than those of the periods prior to the elections to the Chamber of Deputies in 1996 and 1998, and it aggravates the prognoses. If the reverse happened and KDU-ČSL and US each ran on their own candidate list, the right-wing US would probably become the most under-represented of all the parties now (in the 1998-2002 legislative period) represented in the Chamber of Deputies, while KDU-ČSL, which has a strong base in several regions, would probably be less harmed by this type of modified electoral law.

Conclusion

In the Czech Republic, the results of voter support, as shown in the polls since 1995, and the results of elections to the Chamber of Deputies since 1996, have shown that two parties have asserted themselves considerably more strongly than the others: the moderately right-wing ODS and the moderately left-wing ČSSD. However, both parties have encountered difficulties in setting up a relatively homogeneous and stable majority government. One of the obstacles remains, for the time being, the rather large distance (ideological and other) between the relevant parties. The second obstacle is the highly proportional effect of the existing law for elections to the Chamber of Deputies. Under such circumstances, a strengthening of the majority elements while preserving the PR system could play a positive role. It would not hurt Czech democracy if the two largest parties were to strengthen considerably at the expense of the small parties; quite the reverse, it would probably contribute to its consolidation. However, the large decline in support for ČSSD and the parallel increase in support for the Communist party (KSČM), which have appeared in the polls since the second half of 1999, significantly complicate the situation and make it difficult to form a clear prognosis.

Abbreviations: SPR-RSČ, Association for the Republic-Republican Party of Czechoslovakia; US, Freedom Union.

Translated by Robin Cassling

MIROSLAV NOVÁK (Doctor of Sociology, University of Geneva) is an associate professor of political science at Charles University in Prague. He specialises in comparative political sociology, with a particular focus on the transition to democracy of the post-communist countries of Central and Eastern Europe, party systems and electoral systems, and types of democracy.
Table 1. The results of the elections to the Chamber of Deputies in the Czech Republic. (Given that the overall number of seats is 200, the percentage of parliamentary seats is easily calculated by dividing the number of seats by two.)

Table 2. The party composition of the first Klaus government installed on July 4, 1992 (status as of January 4, 1993, i.e. after the dissolution of the Czecho-Slovak federation and the addition of the two new ministries).
Evaluation of a Fast Protocol for Staging Lymphoma Patients with Integrated PET/MRI

Background
The aim of this study was to assess the applicability of a fast MR protocol for whole-body staging of lymphoma patients using an integrated PET/MR system.

Methods
A total of 48 consecutive lymphoma patients underwent 52 clinically indicated PET/CT and subsequent PET/MRI examinations with the use of 18F-FDG. For PET/MR imaging, a fast whole-body MR protocol was implemented. A radiologist and a nuclear medicine physician interpreted MRI and PET/MRI datasets in consensus and were instructed to identify manifestations of lymphoma on a site-specific analysis. The accuracy for the identification of active lymphoma disease was calculated and the tumor stage for each examination was determined. Furthermore, radiation doses derived from administered tracer activities and CT protocol parameters were estimated, and the mean scan duration of PET/CT and PET/MR imaging was determined. Statistical analysis was performed to compare the diagnostic performance of PET/MRI and MRI alone. The results of PET/CT imaging, all available histopathological samples, as well as the results of prior examinations and follow-up imaging were used for the determination of the reference standard.

Results
Active lymphoma disease was present in 28/52 examinations. PET/MRI revealed higher values of diagnostic accuracy for the identification of active lymphoma disease in those 52 examinations in comparison to MRI; however, the results of the two ratings did not differ significantly. On a site-specific analysis, PET/MRI showed a significantly higher accuracy for the identification of nodal manifestations of lymphoma (p<0.05) compared to MRI, whereas ratings for extranodal regions did not reveal a significant difference. In addition, PET/MRI enabled correct identification of lymphoma stage in a higher percentage of patients than MRI (94% vs. 83%). Furthermore, SUVs derived from PET/MRI were significantly higher than in PET/CT; however, there was a strong positive correlation between the SUVmax and SUVmean of the two imaging modalities (R = 0.91, p<0.001 and R = 0.87, p<0.001). Average scan durations of whole-body PET/CT and PET/MRI examinations amounted to 17.3±1.9 min and 27.8±3.7 min, respectively. The estimated mean effective dose for whole-body PET/CT scans was 64.4% higher than for PET/MRI.

Conclusions
Our results demonstrate the usefulness of 18F-FDG PET data as a valuable additive to MRI for a more accurate evaluation of patients with lymphomas. With regard to patient comfort related to scan duration and a markedly reduced radiation exposure, fast PET/MRI may serve as a powerful alternative to PET/CT for the diagnostic workup of lymphoma patients.

Introduction
Highly accurate staging of lymphoma patients is mandatory to identify tumor localizations as well as disease extent, which provides important prognostic information and helps to select an appropriate treatment strategy, primarily based on chemotherapy and/or radiotherapy. The use of diagnostic imaging has been shown to be a valuable tool for the determination of the initial tumor stage and the evaluation of therapy response [1,2]. Computed tomography (CT) is the most commonly applied method for the diagnostic workup of lymphoma patients, due to its high availability and the opportunity for rapid data collection.
However, the successful introduction of hybrid imaging, in the form of positron emission tomography/computed tomography (PET/CT), has been demonstrated to enable a more accurate clinical evaluation of the majority of lymphoma types [3,4]. The additional metabolic information provided by 18F-fluorodeoxyglucose (18F-FDG) PET led to an improved staging performance as well as therapy response assessment based on changes in 18F-FDG uptake between baseline and interim or post-treatment scans [5][6][7]. While the combined information of PET/CT has been shown to be beneficial compared to other cross-sectional imaging techniques for the diagnostic workup of lymphoma patients, one major disadvantage is the increased ionizing radiation dose due to the combination of PET and whole-body CT [8][9][10]. Initial studies on integrated positron emission tomography/magnetic resonance imaging (PET/MRI), combining the diagnostic advantages of simultaneously acquired PET and MRI data, have also shown promising results for the evaluation of patients with lymphoma [11,12]. While PET/MRI, compared to PET/CT, offers the inherent advantage of a reduced radiation dose, the interchange of the morphological part from CT to MRI as a part of hybrid imaging has been shown to result in a markedly prolonged examination time [10], potentially resulting in patient discomfort. Therefore, the aim of the present study was to investigate the diagnostic applicability of a fast protocol for whole-body staging of lymphoma patients with simultaneous PET/MRI.

Materials and Methods

Patients
The study was conducted in conformance with the Declaration of Helsinki and approved by the Ethics Commission of the Medical Faculty of the University Duisburg-Essen (study number 11-4822-BO). Written informed consent was obtained from all patients before each examination. A total of 48 consecutive lymphoma patients (mean age 47±16 years; range 19-73 years) were prospectively enrolled in this trial. A total of 52 examinations were performed, including scans for initial staging (n = 11), interim scans during treatment (n = 7), scans for restaging after the end of treatment (n = 9) and scans for surveillance/exclusion of tumor relapse upon suspicion (n = 25). Table 1 shows the distribution of the different lymphoma subtypes.

PET/CT
PET/CT scans were performed on a Biograph mCT 128 system (Siemens Healthcare GmbH, Germany) after a fasting period of at least 6 hours. Prior to each examination, blood samples were taken to ensure blood glucose levels below 150 mg/dl. Then, a body-weight-adapted dosage (4 MBq/kg bodyweight) of 18F-FDG, with a mean activity of 273±52 MBq, was intravenously administered 61±14 min before the start of each scan. Patients were examined with a full-dose (n = 24) or low-dose (n = 28) technique. Whole-body CT examinations were performed in caudo-cranial scan direction with an increment of 5 mm and a pitch of 1, using manufacturer-supplied dose-reduction software for automatic mAs adjustment (Care Dose 4D™; presets: full-dose: 120 kV, 210 mAs; low-dose: 120 kV, 40 mAs). Images were reconstructed with a slice thickness of 5 mm. Full-dose PET/CT scans started 70 s after intravenous administration of 100 ml of iodinated contrast media (Ultravist 300, Bayer Healthcare, Germany). PET data were obtained in 5-7 bed positions (from skull base to upper thighs) with an acquisition time of 2 min each, a 256x256 matrix and a Gaussian filter of 4 mm full width at half maximum (FWHM). An attenuation-weighted ordered-subset expectation maximization algorithm (AW-OSEM) was used for PET image reconstruction with 3 iterations and 24 subsets. Maps for attenuation correction were calculated based on the acquired CT datasets. For estimations of the effective dose of whole-body CT scans (low-dose and full-dose), the dose-length product and a conversion factor were used as described in a previous study [13]. In accordance with a previous report, the mean effective dose of PET was calculated based on the administered 18F-FDG dose [14].
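The exact conversion coefficients used in refs [13] and [14] are not given in the text, so the sketch below substitutes typical adult values as assumptions (about 0.015 mSv per mGy·cm for trunk CT and about 0.019 mSv/MBq for 18F-FDG); it only illustrates the form of the two estimates:

```python
# Assumed adult conversion coefficients (illustrative; not necessarily the
# values of refs [13] and [14]):
K_CT = 0.015    # mSv per mGy*cm, trunk CT dose-length-product factor
K_FDG = 0.019   # mSv per MBq of administered 18F-FDG

def effective_dose_ct(dlp_mgy_cm, k=K_CT):
    """Effective dose of a CT scan estimated from its dose-length product."""
    return k * dlp_mgy_cm

def effective_dose_pet(activity_mbq, k=K_FDG):
    """Effective dose of the PET component from the administered activity."""
    return k * activity_mbq

# e.g. the study's mean administered activity of 273 MBq, plus an
# illustrative DLP of 800 mGy*cm for a full-dose whole-body CT:
print(f"PET component: {effective_dose_pet(273):.1f} mSv")   # ~5.2 mSv
print(f"CT component:  {effective_dose_ct(800):.1f} mSv")    # ~12 mSv
```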
PET/MRI
PET/MRI scans were performed on a 3 Tesla Biograph mMR integrated PET/MR system (Siemens Healthcare GmbH, Germany). Patients were positioned head-first and supine. Imaging started with an average delay of 133±25 min after the injection of 18F-FDG. Whole-body PET data were obtained in 4-5 bed positions (from skull base to mid-thighs).

Image interpretation
Images were analysed by two board-certified physicians (a radiologist with 8 years of experience and a nuclear medicine physician with 7 years of experience), in consensus and in random order, using dedicated software for hybrid imaging (Syngo.via; Siemens Healthcare GmbH, Germany). Both readers were blinded to the patients' identification data. A first session comprised interpretation of the MRI datasets, followed by readings of the PET/MRI data. An interval of 4 weeks between the ratings was chosen to avoid recognition bias. For each rating, the readers were instructed to identify manifestations of lymphoma on a site-specific analysis: nodal groups included the Waldeyer ring, right and left cervical, right and left axillary, right and left internal mammary or diaphragmatic, anterior mediastinal or paratracheal, right and left hilar, subcarinal or posterior mediastinal, celiac or superior mesenteric, hepatic and splenic hilar, retroperitoneal, inferior mesenteric, right and left iliac, and right and left inguinal regions. In addition, several extranodal regions were analyzed, including the lungs, liver, spleen, kidneys, thyroid, adrenal glands, bones, stomach and intestines, as well as other organs and tissues. For all identified lesions size measurements were performed, and the standardized uptake value (SUV) in PET-positive lesions was determined by drawing a 3D isocontour on fused PET/CT and PET/MR images. Furthermore, for both imaging modalities the mean scan duration was measured. Using DWI as a part of MR imaging, an ADC map was generated by the PET/MR system software (syngo VB18P, Siemens Healthcare GmbH, Germany) using three b-values (b = 0, 500 and 1000 s/mm2). Malignancy on MRI was defined according to the following criteria: nodal lesions with a longest diameter >1.5 cm and extranodal masses with a longest diameter >1 cm, distinctive contrast enhancement, central necrosis, local tumor invasion/destruction, and high signal intensity on DWI (b = 1000 s/mm2) with low signal in the corresponding ADC map. ADC values of all suspect lesions were determined but served only as an orientation for the characterization of benign/malignant findings. The interpretation of the 18F-FDG PET data, used for differentiating between benign and malignant lesions in the PET/MRI and PET/CT ratings, was performed qualitatively. A visually increased 18F-FDG uptake in nodal or extranodal sites higher than in background tissues was considered an additional sign of involvement with lymphoma [1]. In case of a discrepant finding on PET and MR datasets (e.g. a lesion with unsuspicious morphology and increased focal tracer uptake, or vice versa), the lesions were dedicatedly evaluated in accordance with the criteria used in a previous publication [16]. Therefore, the corresponding PET data were rated superior in PET/MRI, and a morphologically unsuspicious lesion with focally elevated 18F-FDG uptake was deemed positive for malignancy.
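The SUV reported throughout is conventionally the body-weight-normalized uptake, SUV = tissue activity concentration / (injected activity / body weight), assuming a tissue density of about 1 g/mL. A minimal sketch, with an illustrative tissue concentration and the protocol's 4 MBq/kg dosing:

```python
def suv_body_weight(tissue_kbq_per_ml, injected_mbq, body_weight_kg):
    """Body-weight-normalized SUV; the tissue activity should be
    decay-corrected to the injection time, density taken as ~1 g/mL."""
    conc_mbq_per_g = tissue_kbq_per_ml * 1e-3        # kBq/mL -> MBq/g
    dose_per_gram = injected_mbq / (body_weight_kg * 1000.0)
    return conc_mbq_per_g / dose_per_gram

weight_kg = 70.0
injected = 4.0 * weight_kg    # the protocol's 4 MBq/kg -> 280 MBq here
# 14 kBq/mL is an illustrative lesion concentration, not study data:
print(f"SUV = {suv_body_weight(14.0, injected, weight_kg):.1f}")   # -> 3.5
```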
The tumor stage for each examination was determined in analogy to the revised criteria of the Ann Arbor staging system as proposed by the Lugano classification [1,17], originally introduced for the initial assessment of lymphoma patients. However, the major focus of the present study was to evaluate and demonstrate the overall diagnostic capability of the two imaging modalities to determine disease extent in our patient cohort. Therefore, PET/MRI and MRI examinations for initial staging, but also interim scans during treatment, scans for restaging after the end of treatment and scans for surveillance, were analyzed. For the evaluation of the PET component in PET/CT and PET/MRI, the SUVmax and SUVmean of the largest nodal and extranodal lesions were determined, while the number of evaluated lesions was limited to ten per patient. Finally, a consensus interpretation on a lesion and patient basis was performed by two experienced physicians for the determination of the reference standard. For this purpose, all 52 PET/CT examinations were analyzed. Additionally, all available histopathological samples as well as the results of prior examinations and follow-up imaging (CT, MRI, PET/CT; n = 33, mean duration 239±157 days) were used for the determination of malignant and benign lesions. In accordance with previous publications, lesions that were identified on MRI and/or PET/MRI but could not be identified on PET/CT images were only included in our ratings if follow-up imaging was available [18]. Conversely, lesions that were identified by PET/CT but missed in DW-MRI or PET/MRI were rated as false-negative.

Statistical analysis
For statistical analysis the IBM SPSS version 21 software (SPSS Inc, Armonk, NY, USA) was used. Sensitivity, specificity, positive predictive value, negative predictive value and diagnostic accuracy of PET/MRI and MRI for the identification of lymphoma patients were calculated, and a McNemar test was used to determine the significance of differences between the two ratings. SUVs as well as calculated data on scan duration and radiation exposure are presented as mean values ± standard deviation (SD). The Wilcoxon signed-rank test was utilized to indicate potential significant differences between SUVs obtained in PET/MRI and PET/CT. Pearson's correlation coefficients were calculated and Bland-Altman analyses were performed for the determined SUVmax and SUVmean of all PET-positive lymphoma lesions in both hybrid imaging modalities. P-values <0.05 were considered statistically significant.
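For illustration, the Pearson correlation and the Bland-Altman limits of agreement used in this analysis can be computed as below; the paired SUVmax readings are hypothetical placeholders, not study data:

```python
import numpy as np
from scipy import stats

def bland_altman(a, b):
    """Mean bias and 95% limits of agreement between paired measurements."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    diff = a - b
    bias = diff.mean()
    spread = 1.96 * diff.std(ddof=1)
    return bias, bias - spread, bias + spread

# Hypothetical paired SUVmax readings (illustrative only):
suv_petmri = [5.1, 8.4, 3.9, 12.2, 6.7]
suv_petct  = [4.6, 7.5, 3.5, 10.9, 6.1]

r, p = stats.pearsonr(suv_petmri, suv_petct)
bias, lower, upper = bland_altman(suv_petmri, suv_petct)
print(f"Pearson R = {r:.2f} (p = {p:.4f})")
print(f"Bias = {bias:.2f}; 95% limits of agreement = [{lower:.2f}, {upper:.2f}]")
```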
Results
Tumor stage was determined in analogy to the criteria proposed by the Lugano classification [1], comprising 19 limited and 8 advanced stage lymphoma manifestations (out of the 28 examinations with active disease). In one case, bulky disease was present. MRI enabled correct identification of the tumor stage in 43/52 (83%) examinations. In 6 of the 9 misclassified examinations, MRI overrated the actual tumor stage based on false-positive findings, and in 3 cases it underrated the stage due to lymphoma lesions that were falsely interpreted as benign. PET/MRI correctly determined the patients' disease status in 49/52 (94%) cases and overestimated the actual tumor stage in the remaining 3 examinations. Table 4 summarizes the results of both imaging modalities for the identification of the tumor stage in all 52 examinations.

SUVmax values derived from PET/CT and PET/MRI showed a strong and highly significant correlation (R = 0.91; p<0.001, Fig 3a). The SUVmean derived from PET/CT and PET/MRI also revealed a strong and highly significant correlation (R = 0.87; p<0.001, Fig 3b). Bland-Altman analysis was performed to determine the limits of agreement between PET/CT and PET/MRI for SUVmax (upper 3.12, lower -7.86; Fig 4a) and SUVmean (upper 1.93, lower -3.83; Fig 4b).

Estimations of scan duration and radiation exposure
The average scan duration of whole-body PET/CT and PET/MRI examinations amounted to 17.3±1.9 min and 27.8±3.7 min, respectively, and the estimated mean effective dose for whole-body PET/CT scans was 64.4% higher than for PET/MRI.

Discussion
The present study investigated the diagnostic potential of a fast protocol for integrated PET/MRI used for dedicated tumor staging of patients with lymphoma. Combining simultaneously obtained PET and MRI data for image interpretation enabled a significantly better diagnostic performance for the assessment of nodal manifestations of lymphoma compared to MRI alone. In addition, disease status based on the revised staging criteria was correctly identified in a higher number of patients using PET/MRI.

Within the last years, hybrid imaging in the form of PET/CT has been well established as a high-quality imaging tool for lymphoma diagnostics, integrating high-resolution anatomical and metabolic information [3,4]. Adding the information provided by 18F-FDG PET to CT enables a higher detectability of active lymphoma lesions and facilitates therapy response assessment even in cases in which structural changes have not yet become visible [19][20][21]. Therefore, PET/CT has been proven highly valuable for the diagnostic workup of FDG-avid lymphoma subtypes, albeit at the cost of a relatively high radiation exposure, mainly caused by the CT component [9,10]. The recent introduction of integrated PET/MRI systems may represent a promising alternative to PET/CT, having demonstrated a diagnostic capacity comparable to PET/CT in numerous publications [22,23]. Aside from an extension of examination time due to the interchange of the morphological part from CT to MRI, PET/MRI enables a remarkable reduction of the ionizing radiation dose of about 73 to 77% per examination [9,10]. Our results strengthen these findings, demonstrating an overall reduction of radiation exposure of about two thirds by using PET/MRI as an alternative to PET/CT (low-dose and full-dose). Lymphoma patients might particularly benefit from this new imaging modality, considering the high proportion of younger patients and the need for repetitive examinations, which increases the risk of radiation-associated second malignancies [24,25]. Moreover, even if the radiation savings compared to low-dose PET/CT scans are limited (29%), PET/MRI offers high-quality morphologic information in addition to the PET data (Fig 5), which enables a better characterization of suspicious findings. In addition, fast PET/MRI provides high-quality diagnostic performance within an appropriate scan duration, exceeding the average scan duration of a whole-body PET/CT by only about 10 minutes.

An initial study by Platzek and colleagues reported a high sensitivity and specificity of PET/MRI for the detection of lymphomas in a region-based analysis [12]. Our results support these findings, yielding a correct identification of all patients with viable lymphomas using PET/MRI, while two patients without evidence of malignancy were rated false-positive.
Those misinterpretations occurred due to a focally increased tracer uptake in nodal lesions, which may have been caused by the subsequent acquisition of the PET/MRI datasets with an average delay of 72 min after PET/CT. Previous studies have already shown altered distributions of 18F-FDG in certain tissues and organs on PET/MRI datasets that were acquired about one hour after PET/CT [26,27].

The successful introduction of diffusion-weighted imaging as an additional functional parameter to morphological MR imaging enabled an increase in diagnostic accuracy for the identification and characterization of tumor lesions [28]. Some studies reported promising results of MRI with DWI in the staging of lymphoma patients, which were only slightly inferior to those from PET/CT [29,30]. One recently published work by Heacock and colleagues compared the diagnostic ability of MRI and PET/MRI and showed a higher staging performance using PET/MRI [11]. These findings are in line with our results, demonstrating a significantly better performance of PET/MRI for the detection of lymphomas as well as for the determination of the correct tumor stage. In accordance with previous publications, MRI revealed a tendency to overrate the actual tumor stage, with potentially substantial consequences for further patient management [31,32]. However, for the staging of indolent, frequently non-FDG-avid lymphoma subtypes, the use of MRI including DWI might be highly beneficial and potentially superior to PET imaging, as has been shown in a recently published study by Giraudo and colleagues [33].

The use of PET has been recommended as an integral part of the staging and clinical evaluation of 18F-FDG-avid lymphomas, which represent the vast majority of lymphoma types in our study [1]. Besides visual assessment, quantitative SUV measurements are commonly performed to assess viable lymphoma lesions as well as to determine therapeutic response from metabolic changes under therapy [7]. One major challenge and general point of discussion when comparing PET/MR and PET/CT hybrid imaging lies in the physical differences between MR- and CT-based attenuation correction, potentially leading to differences in absolute SUV measurements between the two hybrid imaging modalities. Investigating the applicability of the PET component of PET/MRI for the evaluation of lymphoma patients, our results reveal a strong positive correlation between the SUVs obtained from PET/CT and PET/MRI. A number of previously published studies support these results, showing a high correlation of the SUVs acquired with both imaging modalities for parenchymatous organs as well as for different tumor types [26,34]. Accordingly, these data underline the validity of SUVs derived from PET/MRI datasets for use in oncological imaging. While most previous publications show an overall increase of SUVmax (in different tumor entities, including lymphoma), a recent publication by Heacock et al. revealed an overall decrease of the SUVmax on subsequently acquired PET/MRI data [11]. Our results are in line with most publications, demonstrating significantly higher values for SUVmax and SUVmean on PET/MRI datasets obtained with a delay of about one hour after PET/CT [35]. This might be explained by an increasing 18F-FDG accumulation in malignant cells within the period of prolonged tracer uptake after intravenous administration.

Our study is not free of limitations. First, the patient cohort consisted of different lymphoma subtypes (Table 1).
Therefore, subgroup analyses would have been desirable, yet they would not have been reasonable due to the limited patient numbers. Accordingly, these preliminary results should be confirmed in future studies investigating the staging performance of PET/MRI for different lymphoma types. Second, the majority of patients revealed 18F-FDG-avid lymphomas, which justifies the use of PET/CT as the main part of the standard of reference, but restricts a direct comparison of PET/MRI and PET/CT in the evaluation of lymphoma patients. Another limitation lies in the restricted reference standard, mainly caused by the unavailability of histopathological confirmation of all suspicious lesions. Therefore, in accordance with previous publications, we used all applicable information, in terms of the results from PET/CT imaging as well as all available histopathological results and cross-sectional imaging follow-up, as the reference standard [36]. Finally, we used two different protocols (low-dose or full-dose examinations) for PET/CT imaging. A concordant protocol would have been desirable, yet the study setup reflects clinical staging algorithms. Accordingly, in low-dose PET/CT examinations the morphological criteria for the formation of the reference standard were limited, which might have affected the differentiation of malignant from benign lesions.

Conclusion
The present study demonstrates the high diagnostic value of a fast protocol for integrated PET/MRI for staging lymphoma patients, enabling high-quality assessment of morphologic and metabolic data while maintaining comparable examination times and a markedly reduced radiation exposure when compared to PET/CT. Furthermore, our results demonstrate the usefulness of 18F-FDG PET data as a valuable additive to MR imaging for a more accurate evaluation and tumor staging of lymphoma patients.
Design, Synthesis, and Application of Fluorescent Ligands Targeting the Intracellular Allosteric Binding Site of the CXC Chemokine Receptor 2

The inhibition of CXC chemokine receptor 2 (CXCR2), a key inflammatory mediator, is a potential strategy in the treatment of several pulmonary diseases and cancers. The complexity of endogenous chemokine interaction with the orthosteric binding site has led to the development of CXCR2 negative allosteric modulators (NAMs) targeting an intracellular pocket near the G protein binding site. Our understanding of NAM binding and mode of action has been limited by the availability of suitable tracer ligands for competition studies, allowing direct ligand binding measurements. Here, we report the rational design, synthesis, and pharmacological evaluation of a series of fluorescent NAMs, based on navarixin (2), which display high affinity and preferential binding for CXCR2 over CXCR1. We demonstrate their application in fluorescence imaging and NanoBRET binding assays, in whole cells or membranes, capable of kinetic and equilibrium analysis of NAM binding, providing a platform to screen for alternative chemophores targeting these receptors.

■ INTRODUCTION
Chemokines are grouped into three families (CC, CXC, and CX3C) according to the position of the first two highly conserved cysteine residues,1-3,5,7 with the C-X-C family characterized by the presence of one amino acid between the cysteine residues.6,8 Chemokine receptors belong to the Class A G protein-coupled receptor (GPCR) family and are similarly subdivided into CC, CXC, or CX3C categories based on the ligand type.6 Most are coupled to the Gi class of G proteins,6 resulting in a decrease in intracellular cAMP levels following receptor activation.6,9,10 Chemokine binding may also induce the recruitment of β-arrestins to mediate receptor desensitization and internalization, alongside other signaling pathways.11,12

The CXCR2 receptor subtype, responsive to chemokines such as CXCL8/interleukin-8 (IL8) and CXCL1/Gro-alpha, is expressed on a wide range of immune cells, particularly neutrophils, and on other cell types including endothelial and epithelial cells. CXCR2 is primarily involved in driving chemotaxis and associated processes such as cell motility, integrin expression, and activation, but may also be involved in other processes such as phagocytosis and apoptosis.6,14 Moreover, CXCR2 regulates neutrophil homeostasis and extravasation,6,17 facilitating the worsening of acute and chronic inflammation3,6,18,19 as well as being involved in angiogenesis and driving tumor metastasis.20-22 Given its involvement in a series of diseases,19 CXCR2 is a tractable pharmaceutical target, especially in regard to the inhibition of inflammatory cell recruitment.6 On this basis, CXCR2 antagonists have been developed and clinically evaluated for the treatment of chronic obstructive pulmonary disease (COPD),6,23-26 asthma,3,6,21,24,27 melanoma,28,29 breast cancer,30 and colorectal cancer.31

CXCL8-CXCR2 interaction plays a key role in neutrophil chemotaxis and angiogenesis. Competitive inhibition of this complex through small-molecule approaches has proven notoriously difficult, which has led to the discovery and development of CXCR2 intracellular negative allosteric modulators (NAMs) as an antagonist strategy.35-38 These bind in an intracellular binding pocket located in the cytoplasmic domain, formed by the ends of transmembrane (TM) 1, TM2, TM3, TM6, the loop between TM7 and helix 8, and the C-terminus.
The 3,4-diamino-cyclobutenedione class of CXCR2 intracellular NAMs was originally developed by Schering-Plough in the early 2000s.13,46 Their proposed binding pose and mechanism of action have been exemplified by the publication of the X-ray crystal structure of CXCR2 in complex with the antagonist 00767013 (1, Figure 1) (PDB ID: 6LFL).15 The binding of 1 is proposed to sterically interfere with G protein binding due to their overlapping binding sites and to stabilize the inactive state of the receptor.15 Sch527123 (R-navarixin, 2, Figure 1)13,46 exhibits high potency and affinity at CXCR2 (KD = 0.049 ± 0.004 nM) in radioligand binding studies over alternative intracellular modulators such as SB265610.47 The high affinity may be the result of a slow dissociation rate from the receptor.14,40 However, to directly determine NAM affinity and binding kinetics, a molecular tool that binds competitively with unlabeled ligands to the intracellular allosteric binding site is needed, which can then be used in real-time analysis of ligand binding. High-affinity fluorescent probes are suitable for this purpose and can be employed in several GPCR binding techniques, including bioluminescence resonance energy transfer (BRET) assays.48-53 These assays present considerable advantages over the radioligand binding assays traditionally used to characterize ligand binding. A better safety profile allows easier performance of the assay and disposal of waste materials.54,55 Furthermore, fluorescent-ligand-based resonance energy transfer assays have the key capability to monitor specific binding in a homogeneous format, without the need to separate the bound from the free tracer, and the binding from a single sample can be monitored continuously in real time, enabling kinetic analysis to be performed in a straightforward manner. Recently, new fluorescent probes targeting the intracellular allosteric binding site of chemokine receptors, such as CCR9 56 and CCR2,57 have been reported.58,59 Herein, we present the design, synthesis, and pharmacological characterization of a focused library of fluorescently labeled CXCR2 intracellular allosteric ligands, demonstrate their high affinity and a degree of selectivity for the CXCR2 receptor by both imaging and kinetic NanoBRET measurements, and show their application to the direct measurement of unlabeled NAM binding affinities at the CXCR2 intracellular allosteric modulator site.

■ RESULTS AND DISCUSSION
Fluorescent Probe Design. R-Navarixin (2) represents a prototypical CXCR2 intracellular NAM possessing high affinity and potency in functional chemotaxis assays60 and was deemed a suitable starting point for the development of fluorescent NAMs. Initially, using computational approaches, we sought to understand the relevant ligand-target interactions between 2 and CXCR2, to enable selection of an appropriate attachment point for incorporating a fluorophore via a suitable linker.

To create a molecular model for R-navarixin (2) bound to CXCR2, we first established that, using GLIDE (Schrodinger software suite release 2023-1), we could accurately redock 1 into the original CXCR2 X-ray crystal structure (PDB: 6LFL)15 (Figure 2B). This validated method was then used to dock R-navarixin (2) into the same ligand binding pocket.
Given their structural similarity, it is unsurprising that the predicted binding pose of R-navarixin is very similar to that of 00767013 (Figures 2C and 3). The furan ring, squaramide moiety, phenol, and carbonyl oxygen of the N,N-dimethylamide each interact with different amino acid residues in the binding pocket, including Phe321 8.50, Lys320 8.49, Gln319 8.48, Arg144 3.50, and Asp84 2.40. The importance of these residues for NAM binding at CXCR2 has in part been supported by mutagenesis studies highlighting, for example, the relevance of Asp84 2.40, Thr83 2.39, and Lys320 8.49.40 Conversely, the N,N-dimethylamide moiety does not appear to be critical for receptor binding, as it is not predicted to make any direct contact with the receptor, instead protruding from the pocket toward the cytosol without establishing any crucial interactions with CXCR2.

It is necessary to note that the CXCR2 protein used to determine the CXCR2 X-ray crystal structure (PDB ID: 6LFL) included mutations to facilitate crystallization,15 in particular A249 6.33 E, which is in close proximity to the proposed intracellular allosteric binding pocket. Therefore, to further evaluate R-navarixin (2) binding to CXCR2, a more humanized sequence, with Ala249 6.33 present, was used to generate a representative model. The binding pose of R-navarixin (2) and its interactions with CXCR2 did not change in this model. This additional docking confirmed that the N,N-dimethylamide moiety is not predicted to make direct interactions with the receptor, and it was therefore a suitable site for the attachment of a linker moiety and associated fluorophore (we will show later, though, that in fact there is significant SAR associated with this region of the molecule).

Fluorescent Ligand Synthesis. R-Navarixin (2), for which the synthesis has been previously reported,60 was chosen as the receptor binding motif for the library of fluorescent ligands. The linkers were designed to explore both the influence of the distance between the receptor binding motif and the fluorophore and the importance of the N,N-dialkylamide moiety present on 2. As such, two focused series of linker-coupled compounds were generated through the addition of an initial N-(2-aminoethyl)amide or N-methyl-N-(2-aminoethyl)amide spacer (Scheme 1). The terminal primary amine functionality present on this spacer allowed either direct reaction with a suitable N-reactive fluorescent dye or further elongation of the linker through incorporation of a glycyl or β-alanyl moiety.

The BODIPY 630/650 fluorescent dye was selected due to its red-emission profile, which is beneficial when imaging in living cells as it is clearly distinguishable from background cell autofluorescence. The BODIPY fluorophore also offers distinct advantages in terms of high quantum yield and photostability.61 Furthermore, the lipophilic nature of BODIPY 630/650 is beneficial in supporting membrane permeability, which is critical to targeting the cytosolic face of the receptor in a whole-cell context.

Unfortunately, compound 10b was found to be chemically unstable. Despite multiple attempts to purify and isolate 10b using a range of approaches (normal- and reverse-phase chromatography), subsequent analysis indicated that the compound lacked sufficient purity to progress. Consequently, the corresponding fluorescent probe 11b was not synthesized.
Pharmacological Characterization: NanoBiT Complementation Assay. In order to determine any functional effect resulting from linker addition, the activity and suitability of the protected congeners 7a,b, 10a, and 10c were evaluated using a NanoBiT complementation assay measuring CXCL8-stimulated recruitment of β-arrestin2 to the human CXCR2 receptor.62 In this assay, receptor-arrestin interaction is detected by the proximity complementation of Large BiT (LgBiT) and Small BiT (SmBiT) tags, which regenerate functional nanoluciferase in a reversible manner. This complementation is detected by real-time luminescence measurements following addition of the substrate furimazine (Figure 4A).62 To assess the initial pharmacological activity of the synthesized protected congeners, HEK293 CXCR2/β-arrestin2 NanoBiT cell lines were pre-treated with 2, 7a,b, 10a, or 10c over a range of concentrations (10 μM to 0.1 nM) for 30 min prior to furimazine addition and CXCL8 stimulation. Luminescence representing arrestin recruitment by the CXCR2 receptor was recorded for up to 60 min after agonist activation, with the data at 60 min presented in Figure 4B and the pIC50 inhibitory potencies summarized in Table 1. R-Navarixin (2), used as the assay reference, showed the highest potency overall, in line with previously published data,63 with the following order of inhibitory potency for the linker congeners: 7b > 7a > 10a > 10c. Compounds 7a and 7b differ only in the N-methyl status of the salicylamide moiety. Notably, the N-methylated analogue (7b) had 2.5-fold higher potency compared to the corresponding unmethylated analogue (7a). The influence of linker length and the nature of the protecting group present on the congener can be observed when comparing 7a, 10a, and 10c. In the case of 7a (N-Boc protected) and 10a (N-Fmoc protected), these comprise only the initial ethylenediamine spacer, whereas 10c (N-Fmoc protected) additionally incorporated a glycyl linker. These analogs demonstrated an association between decreased functional potency and the larger Fmoc protecting group. Overall, linker addition to 2 via the amide moiety identified in our docking studies appeared to be tolerated, with potency reductions of between 3- and 300-fold in the functional NanoBiT assay. Furthermore, assessment using a whole-cell assay also requires the designed NAMs to cross the plasma membrane and bind at the intracellular CXCR2 binding site. Therefore, potency differences in this assay may reflect not only an influence on CXCR2 binding affinity, but also the different physicochemical properties of the linker-coupled compounds that influence their cellular permeability.
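Concentration-inhibition data of this kind are typically summarized as pIC50 values by fitting a four-parameter logistic curve to the responses. A minimal sketch with hypothetical response values over the assay's 10 μM to 0.1 nM range (not the measured data):

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(log_conc, bottom, top, pic50, hill):
    """Four-parameter logistic inhibition curve; x is log10([inhibitor]/M)."""
    return bottom + (top - bottom) / (1 + 10 ** ((log_conc + pic50) * hill))

# Hypothetical %-of-control responses across the assay's concentration range:
log_conc = np.log10([1e-5, 1e-6, 1e-7, 1e-8, 1e-9, 1e-10])
response = np.array([8.0, 12.0, 30.0, 74.0, 93.0, 98.0])

popt, _ = curve_fit(four_pl, log_conc, response, p0=[0.0, 100.0, 7.5, 1.0])
print(f"pIC50 = {popt[2]:.2f}, Hill slope = {popt[3]:.2f}")
```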
As shown in Figure 5, in our assay HEK 293 cells expressed CXCR2 fused with full-length NanoLuc at the intracellular C-terminus. NanoLuc is the donor bioluminescent enzyme from which the energy transfer occurs in the presence of membrane-permeant furimazine, 64 and the fluorescent ligand is the acceptor fluorophore. Since the energy transfer only occurs when there is close proximity (<10 nm) between the donor and the acceptor, 64 it is possible to monitor specific fluorescent ligand binding to the receptor of interest through the increase in BRET ratio (acceptor emission 630 nm/donor emission 460 nm) in real time, without removal of the free ligand. The BRET ratio is the ratio of the energy emitted by the acceptor fluorophore to the energy emitted by the donor enzyme. 64

Fluorescent ligand binding (11a, 11c−f) was initially tested in a cell-free NanoBRET assay using HEK-CXCR2-NanoLuc membranes (Figure 6A,D,G,J,M). All the ligands showed saturable binding and low non-specific binding (NSB, determined by the competition of a high concentration of the unlabeled ligand 2 for the intracellular allosteric binding site). The highest CXCR2 binding affinities were obtained with the N-methylated ligands 11d and 11f (K D of 10−27 nM; Table 2). The influence of linker composition was also evident when comparing the unmethylated analogues, with the rank order of affinity 11e > 11a > 11c (Table 2). Additionally, the use of fluorescent ligands within a homogeneous real-time NanoBRET system facilitated characterization of ligand binding kinetics, exemplified through the use of tracer 11a, allowing estimation of the association rate constant k on (1.9 ± 0.2 × 10 5 M −1 min −1 ), the dissociation rate constant k off (0.013 ± 0.005 min −1 ), and a kinetically derived pK D of 7.3 ± 0.2 (n = 4) for compound 11a, in line with endpoint-derived parameters.

Furthermore, saturation binding experiments were carried out in the presence of a saturating concentration of the agonist CXCL8 (IL8) (Figure 7). Under agonist conditions the affinity and kinetic parameters of 11a displayed no significant difference from those determined in the absence of CXCL8 (11a + 10 nM CXCL8, K D = 149.2 ± 16 nM, p = 0.45, n = 4).

Having determined fluorescent ligand binding affinities in membranes, we sought to employ these ligands in a cellular setting. To assess the influence of cellular permeability on compound binding, 11a and 11c−f were tested in the whole cell format of the CXCR2 NanoBRET assay (Figure 6B,E,H,K,N). We were able to measure saturable binding and determine K D values for all compounds except 11c, with the same rank order of affinity and a 1.8−5.8-fold increase in apparent K D (i.e., a drop in apparent affinity). The N-methylated fluorescent ligands 11d and 11f, together with compound 11e, again showed the highest affinities in the whole cell system, with K D values of 52−88 nM. These experiments demonstrate the applicability of these fluorescent probes in both membrane- and cell-based CXCR2 binding assays. Moreover, we have confirmed cell accessibility of the probes by direct fluorescence imaging of the CXCR2 cell line, co-labeled with the SNAP-tag receptor fluorophore and 11a. These data demonstrate both cell surface and intracellular endosomal fluorescent labeling of the cells by 11a, in a specific manner with competitive displacement by a high concentration of unlabeled 2 (Figure 8).
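As a quick arithmetic consistency check, the kinetically derived affinity quoted above follows directly from the two rate constants via K D = k off /k on . The short sketch below uses only the rate constants reported for 11a; the resulting pK D sits within the stated 7.3 ± 0.2.

```python
# Consistency check on the kinetic parameters reported for tracer 11a.
import math

k_on = 1.9e5    # M^-1 min^-1, association rate constant
k_off = 0.013   # min^-1, dissociation rate constant

k_d = k_off / k_on                        # kinetically derived K_D
print(f"K_D  = {k_d * 1e9:.0f} nM")       # ~68 nM
print(f"pK_D = {-math.log10(k_d):.2f}")   # ~7.2, within 7.3 +/- 0.2
```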
We also assessed the selectivity of probes 11a and 11c−f for the related chemokine receptor CXCR1 compared to CXCR2, given the reported 100-fold CXCR2:CXCR1 functional selectivity of the parent NAM R-navarixin (2). 13 In the CXCR1-NanoLuc NanoBRET binding assay in membranes, 11a, 11c, and 11e showed no CXCR1-specific binding (Figure 6C,F,I,L,O; Table 2). However, saturable CXCR1 binding was observed for 11d and 11f, with 11-fold and 14-fold reductions in affinity at CXCR1 compared to CXCR2, respectively (Table 2). These data demonstrate that the selectivity of the fluorescent probes for CXCR2 over CXCR1 is retained, albeit to a lesser extent than for the parent compound. Nevertheless, 11d and 11f possess sufficient affinity to be used in a CXCR1 NanoBRET assay probing the corresponding intracellular binding site.

Having shown that our fluorescent ligands bind CXCR2, we demonstrated their use as tools to directly measure the affinity of unlabeled NAMs acting at the intracellular binding site of CXCR2. We performed NanoBRET competition binding experiments in membranes from HEK293 CXCR2-NanoLuc cells, employing 11a as the probe and a variety of competing CXCR2 NAMs, including both known literature compounds and the previously presented protected congeners (Figure 10). 13,35,47,60,65,66 The unlabeled NAMs fully displaced the fluorescent ligand in a competitive manner, enabling the calculation of pK i values (Table 3) through application of the Cheng−Prusoff correction. These estimates were in good agreement with previously reported CXCR2 affinity measurements for the literature compounds 13,35,47,60,65,66 as well as with the NanoBiT assay potencies established for navarixin (2) and its protected congeners in Table 1. Furthermore, we performed NanoBRET competition binding experiments in membranes derived from a HEK293 CXCR1-NanoLuc cell line, employing 11d and two competing CXCR1/2 NAMs, R- and S-navarixin (2 and 12, Figure S1). Similarly, both NAMs fully displaced the fluorescent ligand in a competitive manner at CXCR1, and we were therefore able to calculate pK i values through the Cheng−Prusoff correction as described above. Our data corroborate previous findings, indicating that both enantiomers display reduced affinity for CXCR1 (R-navarixin (2), pK i = 7.66 ± 0.15; S-navarixin (12), pK i = 5.59 ± 0.18).
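Converting fitted competition IC 50 values into the pK i estimates of Table 3 uses the Cheng−Prusoff correction detailed in the Experimental Section. A minimal sketch of that conversion is shown below; the IC 50 value is a placeholder, the 100 nM tracer concentration matches the competition assay conditions, and the tracer K D is taken to be of the order reported for 11a.

```python
# Sketch: Cheng-Prusoff conversion of a competition IC50 into K_i / pK_i.
import math

def cheng_prusoff_ki(ic50, tracer_conc, tracer_kd):
    """K_i = IC50 / (1 + [FL]/K_FL) for a competitive single-site model."""
    return ic50 / (1.0 + tracer_conc / tracer_kd)

ic50 = 50e-9    # M, hypothetical fitted IC50 of an unlabeled NAM
fl = 100e-9     # M, fluorescent tracer concentration used in the assay
kd_fl = 68e-9   # M, illustrative tracer K_D for 11a

ki = cheng_prusoff_ki(ic50, fl, kd_fl)
print(f"K_i = {ki * 1e9:.1f} nM, pK_i = {-math.log10(ki):.2f}")
```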
Conformational Analysis of Methylated and Unmethylated Ligands. A recurrent observation in the pharmacological data is the enhanced potency of the N-methylated ligands over their unmethylated congeners (e.g., 11d vs 11c and 11f vs 11e). This finding could not have been predicted from the original docking studies on which the ligand designs were based, as those suggested that the N,N-dimethylamide portion of R-navarixin (2) is not involved in significant interactions with the receptor. Therefore, to rationalize the pharmacological data, we performed additional studies with a particular focus on the N-methylamide portion of the congeners, analyzing intermediates made during the synthesis of the fluorescent probes. We initially chose compounds 7a and 7b, as they include a short N-Boc-protected ethylenediamine linker, convenient for docking purposes in being neither excessively long nor flexible, with or without N-methylation of the amide. Moreover, these two analogues showed a notable difference in potency when tested in the previously described NanoBiT assay (7b > 7a). We docked them into the CXCR2 crystal structure (PDB ID: 6LFL) humanized with Ala249, following the methodology described above. The binding poses of both compounds mimic that of R-navarixin, and no significant difference was observed in the interactions with the receptor or in docking score between the N-methylated (7b) and unmethylated (7a) analogues. Since the docking studies did not provide data that could explain the difference in potency between the analogues, we decided to focus on their potential
difference in structural conformation. Initial evidence for this arose from the observation of broadened signals in the 1 H NMR spectra of the methylated compounds with respect to the corresponding unmethylated analogues, suggesting that methylation affects the conformational preference of the ligands. To perform more detailed structural NMR studies, we selected two earlier intermediates in the synthesis, compounds 6a and 6b. These can be considered "minimal ligands" as they retain either a methylated or unmethylated amide moiety, the phenol, and a mixed squarate moiety, with the chiral furanylalkylamino moiety replaced by an ethoxy group. The lack of chirality of analogues 6a and 6b facilitated more straightforward acquisition and analysis of 1 H NMR spectra, with a focus on the conformational nature of the benzamide region. First, the 1 H NMR spectra of each compound in DMSO were fully assigned using two-dimensional NMR spectroscopy experiments (correlation spectroscopy (COSY), heteronuclear single-quantum correlation spectroscopy (HSQC), and heteronuclear multiple-bond correlation spectroscopy (HMBC)). Subsequently, to obtain structural information, we employed rotating-frame nuclear Overhauser effect spectroscopy (ROESY) (Figures S20 and S21). This technique, used to establish through-space correlations between nuclei physically near to each other, 67,68 allowed us to visualize interactions between protons in the molecule that are close in space even if they are not bonded or coupled to each other. Interestingly, compounds 6a and 6b differ particularly in their phenolic OH signal. Notably, the phenolic OH peak is shifted downfield in the unmethylated compound 6a with respect to the N-methylated analogue 6b (13.5 and 9.4 ppm, respectively). Moreover, for the phenolic OH of compound 6a, several through-space interactions with other protons of the molecule, such as the benzamide NH, the adjacent CH 2 protons of the ethylene region, and the aromatic protons, were detected. Conversely, no through-space interactions were detected for the phenolic OH in the N-methylated compound 6b. These data suggest that the N-methyl group could introduce a conformational restriction in compound 6b (and thus in other N,N-dialkylamide analogues), preventing intramolecular hydrogen bonding between the phenol and benzamide moieties, whereas the absence of the N-methyl group allows this intramolecular hydrogen bond to form, making the phenol less available for receptor interaction. Furthermore, the need to disrupt the intramolecular hydrogen bond would incur an energetic penalty. This could explain the difference in potency between unmethylated and methylated analogues, as previous studies 69 suggest that the interaction between the phenol and the receptor is crucial. Additionally, the rotational restriction present in the N-methylated analogue 6b could promote a conformation that favors receptor binding. Alongside the NMR experiments, we performed molecular dynamics simulations of 6a and 6b in DMSO. These predict that the key amide bond maintains a strict trans geometry in both molecules but that the neighboring bond connecting to the phenolic ring (torsion angle highlighted in Figure 11) behaves very differently in 6b compared to 6a. In 6a, we observe that the torsion angle distribution (Figure 11A) shows a bifurcated maximum ∼20° either side of 180°, with a low barrier between the two states that is crossed rapidly and repeatedly over the 10 ns simulation (Figure 11C). Throughout this time, a hydrogen bond between the amide
oxygen and the phenolic OH is highly conserved (Figure 11E). In contrast, for 6b, we observe that the torsion angle distribution (Figure 11B) shows a bimodal distribution with maxima at 80° and 120°. Transitions between the two states are rapid and frequent (Figure 11D) and correlate with the formation (120°) and breakage (80°) of the hydrogen bond with the phenolic OH (Figure 11F). There is a symmetry-related conformation when the torsion angle lies in the −80° to −120° range. This can be generated by the application of torsion angle restraints to the simulation to drive it through the planar state, but the increased steric hindrance provided by the N-methyl group means that, in contrast to 6a, we observe no spontaneous transitions, at least on the 10 ns time scale (results not shown).

The simulations are thus in full agreement with the NMR analysis, supporting the hypothesis that a considerable portion of the enhanced activity of the N-methylated analogues could come from the effects that this modification has on the structure of the free ligand: methylation favors a less planar conformation and a weaker intramolecular hydrogen bond, both of which would be expected to contribute to a more favorable binding free energy.

■ CONCLUSIONS

Intracellular allosteric modulators are novel alternatives for the selective targeting of chemokine receptors, which are important drug targets across multiple immunological diseases, respiratory disorders, and cancers, but which present challenges for the development of traditional orthosteric-directed small molecules. Nonetheless, our understanding of how these NAMs bind and modulate receptor pharmacology could be improved significantly by tools that enable direct interrogation of the binding site at the receptor−effector interface. Here, we demonstrate the structure-guided design and synthesis of fluorescent ligands (11a, 11c−f) based on the CXCR2-selective NAM R-navarixin (2) and show their application to the study of CXCR2 and CXCR1 NAM binding and function, through both fluorescence imaging and real-time resonance energy transfer assays applicable to medium-throughput drug discovery.

Pharmacological assessment of the suite of synthesized fluorescent ligands in this study highlighted the importance of linker composition for activity and CXCR1/CXCR2 selectivity, as documented elsewhere in the synthesis of other GPCR fluorescent ligands. 52,70 For example, β-alanyl incorporation onto the N-(2-aminoethyl)amide linker improved CXCR2 affinity compared to glycyl incorporation or an unextended N-(2-aminoethyl)amide linker. The presence of an N-methyl-N-(2-aminoethyl)amide also improved affinity, consistent with the less planar conformation and weaker intramolecular hydrogen bonding that this modification confers. Notably, the availability of probes with a range of affinities is beneficial for fluorescent ligand assay development to determine GPCR ligand binding. For example, the availability of fluorescent probes with fast kinetics (and lower affinity) extends the range and accuracy of the determination of unlabeled ligand kinetic parameters in competition kinetic assay development using BRET assays. 71
Given the intracellular nature of the CXCR2 modulator binding site, membrane permeability of the designed fluorescent probes is also a key consideration, since it allows the use of in-cell target engagement binding assays, with the receptors in a native context, in addition to cell-free membrane systems. We demonstrated that compounds 11a and 11d−f were indeed able to cross the cell membrane and reach their intracellular binding site, showing similar rank orders of affinity to the membrane data and displaying their potential utility in future cell-based assays. An influence of membrane permeability and reduced intracellular concentration, together with the whole cell receptor context, may account for the 4−6-fold reduced probe binding affinity observed in cells compared to membranes.

CXCR1 and CXCR2 possess high sequence homology, and it is known that the parent compound of our fluorescent probes, R-navarixin (2), binds both receptors, showing 34-fold lower potency at CXCR1. Within the SAR, we observed that our fluorescent ligands retained CXCR2 selectivity, but to differing degrees. For example, 11d bound CXCR1 with sufficiently high affinity (K D ∼ 100 nM) to be considered as a fluorescent tracer in binding assays exploring NAM receptor pharmacology for this receptor subtype as well as for CXCR2.

Finally, we demonstrated direct determination of the affinities of a range of structurally distinct CXCR2 NAMs using 11a as the fluorescent tracer in a membrane-based NanoBRET competition assay in a real-time homogeneous format. These data correlate well with both published data 13,35,60,65,66 and our own assessments of the functional activity of these NAMs on CXCR2 signaling. Moreover, we demonstrate that these ligands can be used in ligand binding kinetic studies and could be employed in future competition kinetic studies to determine the properties of unlabeled CXCR2 NAMs. In conclusion, our synthesized fluorescent ligands constitute a novel toolbox for elucidating the pharmacology of current and novel small-molecule NAMs at the CXCR2 and CXCR1 receptors.
■ EXPERIMENTAL SECTION

General Chemistry. Chemicals and solvents were purchased from standard suppliers and used without further purification. BODIPY 630/650-X NHS was purchased from Lumiprobe (Hunt Valley, MD). Compounds 2 (R-navarixin), 12 (S-navarixin), 13, and 14 were synthesized according to the previously reported procedure of Dwyer et al., 60 and all NMR data obtained were in accordance with the reported literature data. Unless otherwise stated, reactions were carried out at ambient temperature and monitored by thin layer chromatography on commercially available precoated aluminum-backed plates (Merck Kieselgel TLC Silica gel 60 Å F 254 ). Visualization was by examination under UV light (254 and 366 nm) followed by staining with ninhydrin. Organic solvents were evaporated under reduced pressure at ≤40 °C (water bath temperature). Flash column chromatography was carried out using technical-grade silica gel from Aldrich (pore size 60 Å, 230−400 mesh, particle size 40−63 μm). Preparative layer chromatography (PTLC) was performed using precoated glass plates (Analtech Uniplate silica gel GF, 20 × 20 cm, 2000 μm). Analytical RP-HPLC was performed using a YMC-Pack C8 column (150 mm × 4.6 mm × 5 μm) at a flow rate of 1.0 mL/min over a 30 min period (gradient method of 10%−90% solvent B; solvent A = 0.01% formic acid in H 2 O, solvent B = 0.01% formic acid in CH 3 CN) with UV detection at 254 nm; spectra were analyzed using Millennium 32 software. 1 H NMR and 13 C NMR spectra were recorded on a Bruker AV-400 at 400.13 and 101.62 MHz, respectively. Chemical shifts (δ) are quoted in parts per million (ppm), calibrated to the residual undeuterated solvent signal. Solvents used for NMR analysis were CDCl 3 supplied by Cambridge Isotope Laboratories Inc. (δ H = 7.26 ppm, δ C = 77.16 ppm), DMSO-d 6 supplied by Sigma-Aldrich (δ H = 2.50 ppm, δ C = 39.52 ppm), and CD 3 OD supplied by Sigma-Aldrich (δ H = 3.31 ppm, δ C = 49.00 ppm). The spectra were analyzed using the NMR software MestReNova. Coupling constants (J) are recorded in Hz and the significant multiplicities described as singlet (s), doublet (d), triplet (t), quartet (q), broad (br), multiplet (m), and doublet of doublets (dd). LC/MS was carried out using a Phenomenex Gemini-NX C18 110 Å column (50 mm × 2 mm × 3 μm) at a flow rate of 0.5 mL/min over a 5 min period (gradient method of 5%−95% solvent B; solvent A = 0.01% formic acid in H 2 O, solvent B = 0.01% formic acid in CH 3 CN). LC/MS spectra were recorded on a Shimadzu UFLCXR system combined with an Applied Biosystems API2000 electrospray ionization mass spectrometer and visualized at 254 nm (channel 1) and 220 nm (channel 2). High-resolution mass spectra (HRMS) were recorded on a Bruker microTOF mass spectrometer using electrospray ionization (ESI-TOF) operating in positive or negative ion mode. All pharmacologically tested compounds are >95% pure by HPLC analysis. Chromatographic purity traces and HRMS spectra are available in Figures S2−S19. Optical rotations were measured using an ADP200 polarimeter (Bellingham + Stanley Ltd).
For NanoBRET assays, cells were allowed to grow to 90% confluency in T175 (175 cm 2 ) flasks prior to membrane preparation. Cells were washed twice with phosphate-buffered saline (PBS, Sigma-Aldrich, Poole, UK) to remove growth medium and removed from the flask by scraping in 10 mL of PBS. Cells were pelleted by centrifugation (10 min, 2000 rpm) prior to freezing at −80 °C. For membrane homogenization (all steps at 4 °C), 20 mL of wash buffer (10 mM HEPES, 10 mM EDTA, pH 7.4) was added to the pellet before disruption (8 bursts) with an Ultra-Turrax homogenizer (Ika-Werk GmbH & Co. KG, Staufen, Germany) and centrifugation at 48,000g at 4 °C. The supernatant was removed, and the pellet was resuspended in 20 mL of wash buffer and centrifuged again as above. The final pellet was suspended in cold 10 mM HEPES with 0.1 mM EDTA (pH 7.4). Protein concentration was determined using a bicinchoninic acid assay kit (Sigma-Aldrich, Poole, UK) with bovine serum albumin as the standard, and aliquots were maintained at −80 °C until required.

NanoBiT Complementation Assays. Stably transfected HEK 293 cells co-expressing SNAP-CXCR2-LgBiT and β-arrestin2-SmBiT were seeded on white, clear-bottom, poly-D-lysine-coated 96-well plates (Greiner 655098) at a density of 32,000 cells/well and allowed to grow overnight. NanoBiT assay buffer consisted of HEPES-balanced salt solution (147 mM NaCl, 24 mM KCl, 1.3 mM CaCl 2 , 1 mM MgSO 4 , 1 mM Na pyruvate, 1 mM NaHCO 3 , 10 mM HEPES, pH 7.4) with 0.1% bovine serum albumin and 10 mM D-glucose. Cells were washed with assay buffer to remove growth media prior to incubation with the respective ligand concentrations (or R-navarixin (2) control) for 1 h. The chosen ligands were first diluted in DMSO to 10 mM and stored at −20 °C prior to use. Ligands were diluted in assay buffer to the required concentrations (final assay concentration range: 10 μM−0.1 nM) prior to addition to the assay plate. Post-incubation, furimazine substrate (1/660 dilution in assay buffer from supplier stocks) was added to the cells and allowed to equilibrate for 5 min at 37 °C. Upon equilibration, initial baseline luminescence readings were taken prior to the addition of 10 nM CXCL8 (aa28−99) (Stratech Scientific, Ely, UK). Luminescence readings were continually monitored over a 60 min time course, every 15 min at 37 °C post-agonist addition, using a BMG PHERAstar FS (BMG Labtech). Ligand IC 50 values were obtained using a four-parameter logistic equation.
NanoBRET Fluorescent Ligand Binding Assay. NanoBRET assays were carried out in OptiPlate-384 white-well microplates (product number 6007290, PerkinElmer LAS Ltd., UK) using an assay buffer of 25 mM HEPES, 1% DMSO, 0.1 mg/mL saponin, 0.2 mg Pluronic F-127, 1 mM MgCl 2 , and 0.1% BSA (pH 7.4). Both saturation and competition assays employed 1 μg/well HEK 293 SNAP-CXCR2-NanoLuc or SNAP-CXCR1-NanoLuc cell membranes for characterization of fluorescent ligand binding and employed 100 nM or 10 μM R-navarixin (2) to define non-specific binding (NSB). For saturation binding experiments, used to determine fluorescent ligand affinity, membranes were incubated with increasing concentrations of fluorescent ligand (8−1000 nM dilution range in assay buffer) and either assay buffer or NSB, with or without 10 nM CXCL8 (aa28−99) (final assay volume, 40 μL). Membranes were incubated with furimazine at a 1/660 dilution for 5 min prior to addition to the assay plate, allowing for equilibration of the luminescence output. NanoBRET was monitored every 15 s for 60 min at 37 °C, measuring the Nanoluciferase output (450 nm) and the BODIPY 630/650 output (630 nm [610-LP filter]) to generate a BRET ratio (630 nm/450 nm), using a BMG PHERAstar FS (BMG Labtech). Collected data were converted to specific binding measurements through subtraction of the NSB data and analyzed by endpoint saturation analysis, allowing determination of the ligand dissociation constant (K D ) through

B specific = B max [FL] / (K D + [FL])    (1)

where [FL] is the fluorescent ligand concentration. Additionally, specific binding traces for 11a (defined as total binding − NSB) were fitted to a one-site association model. Global fitting of this model across multiple fluorescent ligand concentrations from the same experiment enabled estimation of the ligand association (k on ) and dissociation rate constants (k off ), together with the kinetically derived K D (= k off /k on ), using the equation

B(t) = B plateau (1 − e^(−k obs t))    (2)

where B plateau is the equilibrium level of tracer binding, and the observed association rate constant k obs is related to the binding rate constants for the tracer in a single-site model by

k obs = k on [FL] + k off    (3)

For competition binding assays, HEK 293 SNAP-CXCR1/CXCR2-tsNanoLuc cell membranes were incubated with 100 nM fluorescent ligand, a range of concentrations of unlabeled ligands or NSB/vehicle controls, and 1/660 furimazine (final assay volume, 30 μL). Membranes were added to the assay plate, after 5 min incubation with furimazine, by online injection using the BMG PHERAstar FS injector. NanoBRET measurements were taken over 3 h at 37 °C. Data were normalized using NSB to define 0% and vehicle controls to define 100% binding and were fit to a three-parameter logistic equation to determine unlabeled ligand IC 50 estimates using

% binding = Bottom + (Top − Bottom) / (1 + 10^(log[A] − log IC 50 ))    (4)

where [A] is the concentration of the competing ligand. IC 50 values were further converted to competing ligand dissociation constants (K i ) using the Cheng−Prusoff correction:

K i = IC 50 / (1 + [FL]/K FL )    (5)

where K FL and [FL] represent the fluorescent ligand dissociation constant and concentration, respectively.
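The analysis equations above lend themselves to a compact implementation. The sketch below collects Eqs. (1)−(5) as small Python functions; the function and variable names are ours, and consistent concentration and time units are assumed rather than any particular instrument output.

```python
# Minimal implementations of the analysis equations (1)-(5).
import numpy as np

def saturation(fl, b_max, k_d):                    # Eq. (1)
    """Specific binding at equilibrium for tracer concentration fl."""
    return b_max * fl / (k_d + fl)

def association(t, b_plateau, kobs):               # Eq. (2)
    """One-site association time course."""
    return b_plateau * (1.0 - np.exp(-kobs * t))

def observed_rate(fl, k_on, k_off):                # Eq. (3)
    """Observed association rate constant in a single-site model."""
    return k_on * fl + k_off

def logistic3(log_conc, top, bottom, log_ic50):    # Eq. (4)
    """Three-parameter logistic for normalized competition data."""
    return bottom + (top - bottom) / (1.0 + 10.0 ** (log_conc - log_ic50))

def cheng_prusoff(ic50, fl, k_fl):                 # Eq. (5)
    """Competing-ligand K_i from the fitted IC50 and tracer parameters."""
    return ic50 / (1.0 + fl / k_fl)
```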
Cell-based binding assays were carried out in white, clear-bottom, 96-well Greiner plates (655098, Greiner Bio-One, Stonehouse, UK). HEK 293 SNAP-CXCR2-NanoLuc cells were seeded at 32,000 cells/well, and the assay buffer was HEPES-balanced salt solution (147 mM NaCl, 24 mM KCl, 1.3 mM CaCl 2 , 1 mM MgSO 4 , 1 mM Na pyruvate, 1 mM NaHCO 3 , 10 mM HEPES, pH 7.4) with 0.1% BSA and 10 mM D-glucose. Growth medium was removed 24 h after seeding, and cells were washed with assay buffer prior to the addition of 20 μL of assay buffer per well. Where appropriate, 10 μL of vehicle or 10 μM R-navarixin (2) (in assay buffer) was added to wells, defining total and NSB conditions, and cells were incubated for 30 min at 37 °C to ensure sufficient binding of the NSB ligand. The chosen fluorescent ligands were diluted in assay buffer to the required concentrations (78 nM−10 μM), and 10 μL was added to the assay plate after the 30 min incubation. The plate was then incubated at 37 °C for 1 h before addition of 1/240 furimazine solution (10 μL per well). The luciferase substrate was allowed to equilibrate for 5 min before measurement of the NanoBRET signal using a BMG PHERAstar (as for the membrane binding assays described above), with measurements taken every hour over a 3 h period. Fluorescent ligand affinity was derived as described above for the membrane binding experiments. All binding and functional data were analyzed using PRISM 9.0 (GraphPad Software, San Diego).

Modeling. Docking was carried out using tools from the Schrodinger software suite, release 2023-1. The structures of the ligands were imported into Maestro in MOL file format generated from ChemDraw (PerkinElmer Informatics release 19.1) and prepared with LigPrep, retaining their specific chirality. The crystal structures were imported and prepared with Maestro's protein preparation wizard, including water removal, H-bonding optimization using PROPKA at pH = 7, and energy minimization using the OPLS3 force field. Grids were produced using Glide, selecting Ala249 6.33 or Glu249 6.33 and Lys320 8.49 as the centroid of the grid and Lys320 8.49 hydrogen bonding as a constraint. The docking was performed both without constraints and with the selected constraint. For each ligand, 100 poses were minimized post-docking, with a maximum of 20 poses per output. Default settings were used unless otherwise stated. For all ligands, the highest-scoring docking pose was selected. Images were generated using PyMOL (The PyMOL Molecular Graphics System, Version 2.0, Schrodinger, LLC). Molecular dynamics simulations were performed using AMBER 20. 75 Molecular models for 6a and 6b were built and parameterized using antechamber (GAFF2 force field) and immersed in a ca. 43 Å 3 box of DMSO. 76 After energy minimization, molecular dynamics simulations were run for a total of 10.3 ns in the NPT ensemble (Langevin dynamics with a collision frequency of 5 ps −1 , temperature regulated to 300 K, pressure regulated with a Berendsen barostat with a relaxation time of 2 ps, SHAKE applied to all bonds, and long-range electrostatic interactions evaluated using the PME method with a real-space cutoff of 8 Å). Discarding the first 300 ps as equilibration, analysis of the torsion angle and H-bonding data confirmed that the simulations were well converged. Simulation analysis was performed in Jupyter notebooks using tools from the MDTraj Python package. 77
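As a sketch of how the torsion-angle and H-bond analyses of Figure 11 could be reproduced with MDTraj, the snippet below loads an AMBER trajectory and computes the highlighted torsion angle and the carbonyl-oxygen to phenolic-hydrogen distance. The file names and atom indices are placeholders that depend on how the 6a/6b models were built, so they would need adapting to the actual topology.

```python
# Sketch: torsion-angle and intramolecular H-bond analysis with MDTraj.
import mdtraj as md
import numpy as np

# Hypothetical file names for the solvated 6a system.
traj = md.load("6a_dmso.nc", top="6a_dmso.prmtop")

# Torsion about the bond connecting the amide to the phenolic ring
# (0-based atom indices; placeholders for the highlighted bond in Figure 11).
torsion_atoms = np.array([[10, 11, 12, 13]])
phi = np.degrees(md.compute_dihedrals(traj, torsion_atoms)[:, 0])

# Carbonyl O to phenolic H distance (indices again placeholders), in nm.
d_oh = md.compute_distances(traj, np.array([[14, 20]]))[:, 0]

hist, edges = np.histogram(phi, bins=72, range=(-180, 180), density=True)
print(f"H-bonded fraction (d < 0.25 nm): {np.mean(d_oh < 0.25):.2f}")
```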
General Procedure 1: Conversion of Mixed Squarates to Chiral Squaramides 7a,b and 10a−f. To a solution of the required squaric acid monoamide monoester (6a,b, 9a−f) in 1.5 mL of EtOH were added (R)-1-(5-methylfuran-2-yl)propan-1-amine hydrochloride (1.1 equiv) and Et 3 N (1.1 equiv). The mixture was stirred at rt for 144 h, concentrated under reduced pressure, and purified by PTLC (Si).

General Procedure 2: Amide Coupling for 9c−f. To a solution of the Fmoc-protected amino acid (1 equiv) in anhydrous CH 2 Cl 2 at 0 °C were added EDCI (1.2 equiv) and HOBt (1.1 equiv). The solution was stirred for 30 min prior to the addition of the required amine (8a, 8b) (1.1 equiv) and DIPEA (2.1 equiv). The mixture was stirred at rt for 72 h, evaporated to dryness, and purified by PTLC (Si).

General Procedure 3: Fluorophore Ligation via Amide Bond Formation for 11a and 11c−f. The desired Fmoc-protected amine congener (10a, 10c−f) (1 equiv) was dissolved in DMF and treated with 20% piperidine in DMF. The solution was stirred at rt for 3 h and concentrated under reduced pressure to generate the desired amine congener. The compound was then dissolved in DMF (1 mL) and treated with BODIPY 630/650-X NHS ester (0.9 equiv). The solution was stirred at rt for 18 h in the dark and concentrated under reduced pressure. The reaction mixture was purified using PTLC (Si, MeOH/CH 2 Cl 2 , 5:95).

Figure 2. (A) Side view of the CXCR2 crystal structure with 00767013 (1) (cyan) (PDB ID: 6LFL). (B) View from the bottom of the receptor of the crystallographically determined (cyan) and redocked (magenta) poses of 00767013 (1) in the crystal structure of CXCR2 (PDB ID: 6LFL). (C) View from the bottom of the receptor of the crystallographically determined (cyan) pose of 00767013 (1) and the docked pose of R-navarixin (2) (orange) in the crystal structure of CXCR2 (PDB ID: 6LFL). Crucial receptor residues lining the binding pocket are shown in green.

Figure 4. (A) NanoBiT complementation assay. CXCR2 tagged with LgBiT and β-arrestin2 tagged with SmBiT of NanoLuc luciferase. Stimulation of the receptor with CXCL8 results in β-arrestin2 recruitment, enzyme complementation, and luminescence generation in the presence of furimazine as the enzyme substrate. (B) Concentration−inhibition curves for compounds 2, 7a,b, 10a, and 10c, demonstrating the effect on 10 nM CXCL8 responses in the CXCR2-β-arrestin2 NanoBiT assay. Cells were pretreated for 30 min with test compounds followed by 60 min CXCL8 stimulation. The data shown are pooled from three individual experiments (mean ± SEM, n = 3), with each experiment performed in technical duplicate.

Figure 5. Representation of the NanoBRET binding assay. BRET occurs upon co-localization of the Nanoluciferase and fluorescent ligand, allowing dual readout of luminescence and fluorescence to generate a BRET ratio.

Figure 6. NanoBRET saturation binding studies of the fluorescent ligands (11a, 11c−f) in CXCR2 membranes (left column: A, D, G, J, M), CXCR2 whole cells (central column: B, E, H, K, N), and CXCR1 membranes (right column: C, F, I, L, O) in the absence (blue) or presence (black) of 100 nM or 10 μM R-navarixin (2) used to measure non-specific binding (NSB). Data are representative experiments from n = 5 individual experiments performed in duplicate. The equivalent graphs showing specific binding curves can be found in Figure S24.

Figure 7.
Saturation binding experiments employing endpoint and kinetic analysis for derivation of ligand affinity. 11a NanoBRET saturation binding and association kinetic studies in CXCR2 membranes in the absence of CXCL8 (A, C) and in the presence of 10 nM CXCL8 (B, D). Data are representative experiments from 4 performed.

Figure 8. Live cell imaging of 11a binding to SNAP-tagged CXCR2-tsNanoLuc HEK293 cells. Cells were pre-labeled with SNAP-Surface AF488 to identify the SNAP-tagged receptors (green), prior to incubation with 11a (red) in the absence or presence of 10 μM 2 to define non-specific binding (10 min at 37 °C), prior to fluorescence imaging. Scale bar indicates 20 μm.

Figure 10. NanoBRET competition binding studies of CXCR2 allosteric modulators in CXCR2 membranes. Membranes were incubated with 100 nM 11a and increasing concentrations of unlabeled ligands for 3 h at 37 °C. The data shown represent the combined mean ± SEM of n = 5 experiments, where each experiment was performed in duplicate.

Figure 11. Conformational analysis of molecular dynamics simulations of 6a and 6b in DMSO. (A, B) Torsion angle distributions for the highlighted bond in 6a and 6b, respectively. (C, D) Time courses for the selected torsion angle. In 6a, the angle rapidly and repeatedly passes through the planar (180°) conformation, while for 6b, it is restricted (atropisomerism, at least on this time scale). (E, F) Time courses for the distance between the carbonyl oxygen and phenolic hydrogen atoms in 6a and 6b, respectively. In 6a, a strong H-bond is maintained, while in 6b, it is present or absent depending on whether the high- or low-twist conformation of the torsion angle is adopted.

Table 2. Binding Affinities of the Fluorescent Ligands in CXCR2 Membranes, CXCR2 Whole Cells, and CXCR1 Membranes Determined by Saturation Binding Assays. a All values represent mean ± SEM of n = 5 separate experiments. b pA 2 parameters derived from functional data.
2023-08-02T06:17:23.121Z
2023-07-31T00:00:00.000
{ "year": 2023, "sha1": "f44ef80bb7d97d47a0578feafc7cbfe6c98c4ba6", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1021/acs.jmedchem.3c00849", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "3c69fbbca39cda40aaf91a5dba7b2c6a4bae52f4", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
227737024
pes2o/s2orc
v3-fos-license
Quark, pion and axial condensates in three-flavor finite isospin chiral perturbation theory

We calculate the light quark condensate, the strange quark condensate, the pion condensate, and the axial condensate in three-flavor chiral perturbation theory ($\chi$PT) in the presence of an isospin chemical potential at next-to-leading order at zero temperature. It is shown that the three-flavor $\chi$PT effective potential and condensates can be mapped onto two-flavor $\chi$PT ones by integrating out mesons with strange quark content (kaons and eta), with renormalized couplings. We compare the results for the light quark and pion condensates at finite pseudoscalar source with ($2+1$)-flavor lattice QCD, and we also compare the axial condensate at zero pseudoscalar and axial sources with lattice QCD data. We find that the light quark, pion, and axial condensates are in very good agreement with lattice data. There is an overall improvement by including NLO effects.

Introduction

Quantum Chromodynamics (QCD) has a rich phase structure, which can be established using its symmetries and symmetry breaking patterns [1,2,3]. The QCD Lagrangian possesses an SU(N_c) gauge symmetry with N_c = 3, which preserves color charge when quarks and gluons interact. Furthermore, the QCD Lagrangian is symmetric with respect to independent chiral rotations of left- and right-handed quarks and anti-quarks in the chiral limit. However, the QCD vacuum breaks this symmetry by pairing quarks and antiquarks, giving rise to a non-zero, spatially homogeneous chiral condensate, ⟨ψ̄ψ⟩_0, with the following spontaneous symmetry breaking pattern [4]

SU(3)_L × SU(3)_R × U(1)_B → SU(3)_V × U(1)_B .

Here SU(3)_{L(R)} is the symmetry group associated with chiral transformations of left(right)-handed quarks in the chiral limit, and SU(3)_V is the symmetry group associated with vector transformations of the quarks. 1 The pairing is analogous to Cooper pairs in the Bardeen-Cooper-Schrieffer (BCS) theory of superconductivity [5], as was originally pointed out by Nambu [4]. U(1)_B is the symmetry of the QCD Lagrangian with respect to global phase transformations of quarks and anti-quarks, which leads to the conservation of baryon charge. The order parameter of spontaneous chiral symmetry breaking is the chiral condensate, ⟨ψ̄ψ⟩_0, which is non-zero in the QCD vacuum and leads to the meson octet of (pseudo-)Nambu-Goldstone bosons for (massive) massless quarks. Since quarks are massive, with the strange quark mass being much larger than the up and down quark masses, the vector symmetry group SU(3)_V is broken down to SU(2)_I × U(1)_Y in the isospin limit, m_s ≫ m_u = m_d. The presence of isospin and strange chemical potentials in the vacuum phase explicitly breaks the symmetry down as follows,

SU(2)_I × U(1)_Y × U(1)_B → U(1)_{I_3} × U(1)_Y × U(1)_B ,

where Y represents hypercharge, I represents isospin, and I_3 the third component of isospin. The electromagnetic gauge group U(1)_Q is a subgroup of both SU(3)_L × SU(3)_R and SU(2)_I × U(1)_Y × U(1)_B, as would be expected from the fact that quarks carry electromagnetic charge in addition to color charge.

In this work, we will maintain a focus on the properties of QCD at low energies for finite isospin chemical potential [8]. For isospin chemical potentials larger than the pion mass (at zero temperature), QCD is known to exhibit pion condensation. This is signaled by the formation of a pseudoscalar condensate ⟨ψ̄ γ_5 (λ_{1,2}/2) ψ⟩ ≠ 0, where λ_i denotes the i'th Gell-Mann matrix.
This condensate becomes inhomogeneous in the presence of an external magnetic field due to the spontaneous breaking of the U(1) gauge symmetry [9]. Furthermore, the condensate increases monotonically, as has been observed in lattice QCD and in NLO two-flavor calculations [10], for values of the isospin chemical potential up to approximately 2m_π. This behavior is analogous to that observed in the context of nuclear matter [11], in that there is a simultaneous weakening of the chiral condensate with increasing nuclear density, or in our case isospin density.

There is a further condensate, the axial condensate, which is non-zero in the context of finite isospin chemical potential. It has been studied in lattice QCD with the added benefit that, unlike the pion and chiral condensates, the zero-source-limit results have been extracted [12]. We have previously compared finite-source results for the pion and chiral condensates with two-flavor QCD [10], with results in very good agreement. The axial condensate condenses simultaneously with the pion condensate but also exhibits the feature that it does not increase monotonically with increasing isospin chemical potential, even though the pion condensate does. Tree-level χPT calculations show that the condensate increases steadily from the critical isospin chemical potential and peaks at 3^{1/4} m_π [13], decreasing monotonically for larger isospin chemical potentials. The condensation of the axial condensate is somewhat surprising in that it occurs even in the absence of an explicit axial chemical potential, as has also been shown in the NJL model [14]. A somewhat heuristic argument for the condensation was first put forth in Ref. [13] using the soft-pion theorem,

lim_{p→0} ⟨π_a(p) s_1|Ô|s_2⟩ = −(i/f̃_π) ⟨s_1|[Q̂5_a, Ô]|s_2⟩ ,   (4)

valid for any local operator Ô, which relates a matrix element involving two arbitrary states |s_1⟩ and |s_2⟩ with one that involves a new state |π_a(p) s_1⟩ containing an extra pion compared to |s_1⟩. Q̂5_a is the axial charge operator, which can be defined in terms of the axial charge density operator ρ̂0_a(y),

Q̂5_a = ∫ d³y ρ̂0_a(y) .   (5)

Finally, f̃_π is the relevant pion decay constant, which depends on the choice of states |s_i⟩, which we will choose to be the pion-condensed vacuum that forms in the presence of an isospin chemical potential. The rotation from the normal vacuum, |0⟩, to a pion-condensed vacuum, |α⟩, occurs above a critical chemical potential equal to the pion mass, with the parameter α depending on the isospin chemical potential. The chiral condensate is non-vanishing in both of the vacua. Additionally, in the pion-condensed vacuum |α⟩ both the isospin density and the pion condensate are non-vanishing, i.e.

⟨α|n̂_I|α⟩ ≠ 0 ,  ⟨α|π̂_a(x)|α⟩ ≠ 0 ,

where the isospin density operator n̂_I and the pion condensate (operator) π̂_a are defined as

n̂_I = ψ̄ γ^0 (λ_3/2) ψ ,  π̂_a(x) = ψ̄ iγ_5 (λ_a/2) ψ .

Using standard equal-time anti-commutation relations for the quark fields, it is straightforward to show that

[Q̂5_±, n̂_I] = ∓ ρ̂0_± ,   (11)

where Q̂5_± = Q̂5_1 ± iQ̂5_2 is the charge associated with ρ̂0_± = ρ̂0_1(x) ± iρ̂0_2(x), and ρ̂0_± are the axial current density operators, ūγ^0γ_5 d and d̄γ^0γ_5 u, respectively. Choosing |s_1⟩ = |s_2⟩ = |α⟩ and Ô = n̂_I in the soft-pion theorem, Eq. (4), and noting that in the thermodynamic limit adding a single zero-momentum pion to the pion-condensed vacuum does not alter it, i.e. |π_a(0)α⟩ = |α⟩, we find using Eq. (11) that the axial density of the pion-condensed phase is non-zero, i.e. ⟨α|ρ̂0_±|α⟩ ≠ 0 [13].
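The flavor-space content of the commutator in Eq. (11) can be checked with a few lines of matrix algebra. The numpy sketch below verifies [τ_+, τ_3/2] = −τ_+, which is the isospin structure that carries over to [Q̂5_+, n̂_I] = −ρ̂0_+ once the quark-field anticommutators are used; it is a check of the matrix algebra only, and the variable names are ours.

```python
# Verify the isospin commutator underlying Eq. (11): [tau_+, tau_3/2] = -tau_+.
import numpy as np

tau1 = np.array([[0, 1], [1, 0]], dtype=complex)
tau2 = np.array([[0, -1j], [1j, 0]])
tau3 = np.array([[1, 0], [0, -1]], dtype=complex)
tau_plus = 0.5 * (tau1 + 1j * tau2)   # raises d -> u: [[0, 1], [0, 0]]

comm = tau_plus @ (tau3 / 2) - (tau3 / 2) @ tau_plus
assert np.allclose(comm, -tau_plus)
print("[tau_+, tau_3/2] = -tau_+ verified")
```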
The paper is organized as follows: In section 2, we discuss the χPT Lagrangian in the presence of a pseudoscalar source and an axial vector potential and point out that the pion condensate and the axial condensate condense orthogonally in the ground state at tree level. In section 3, we construct the one-loop effective potential in the presence of both a pseudoscalar and an axial vector potential, using the ingredients of the previous section. Using the effective potential, we calculate the chiral condensate, the strange-quark condensate, the pion condensate, and the axial condensate in section 4. We also map our three-flavor χPT results onto two-flavor χPT with appropriate identifications of the low-energy constants (LECs). Finally, in section 5 we compare the condensates, in particular the axial condensate at zero pionic source and the pion and chiral condensates at finite pionic source, with the available lattice data. We list a few useful formulas in Appendix A.

2 χPT Lagrangian

χPT is a low-energy effective theory for QCD based on its symmetries and degrees of freedom [15,16,17,18]. For two-flavor QCD, the degrees of freedom are the pion triplet, whereas for three-flavor QCD, they are the octet of pions, kaons, and the eta. The leading-order term in χPT is given by the Lagrangian

L_2 = (f²/4) Tr[∇_µΣ (∇^µΣ)†] + (f²/4) Tr[χ†Σ + Σ†χ] ,

where χ = 2B_0(s + ip), with s = M = diag(m_u, m_d, m_s) being the quark mass matrix, p = j_1λ_1 + j_2λ_2, where j_1 and j_2 are pseudo-scalar (pionic) sources, and λ_i are the Gell-Mann matrices. We will work in the isospin limit, i.e. m_u = m_d, in this paper. The Gell-Mann matrices are normalized as Tr[λ_a λ_b] = 2δ_ab. Finally, the covariant derivatives contain both a vector source v_µ and an axial source a_µ,

∇_µΣ = ∂_µΣ − i(v_µ + a_µ)Σ + iΣ(v_µ − a_µ) ,

with v_µ = δ_µ0 (µ_I/2) λ_3 and a_µ = δ_µ0 (a_0^a/2) λ_a, where µ_I is the isospin chemical potential and a_0^a is the zeroth component of the axial source that couples to λ_a.

In two-flavor QCD, the presence of an isospin chemical potential rotates the vacuum in the τ_1 and τ_2 directions [8],

Σ_α = cos α + i(φ̂_1 τ_1 + φ̂_2 τ_2) sin α ,

with φ̂_1 and φ̂_2 being real parameters satisfying φ̂_1² + φ̂_2² = 1, such that the ground state is unitary and properly normalized, i.e. Σ_α† Σ_α = 1. In three-flavor QCD, the vacuum is rotated in the same way [19,20] but with τ_1 and τ_2 replaced by λ_1 and λ_2. The pions then condense in the (φ̂_1, φ̂_2)^T direction in isospin space. As suggested by the heuristic argument in the previous section, the pion condensate induces an axial condensate that points in the orthogonal direction. This feature has also been observed in the context of the NJL model [14] and previously in χPT near the critical isospin chemical potential [21] at next-to-leading order. As such, we proceed, without any loss of generality, by choosing φ̂_1 = 0, φ̂_2 = 1 and j_1 = 0, j_2 = j in the following discussion. The static Lagrangian, which is equal to the tree-level effective potential modulo a minus sign, then follows directly in three-flavor QCD. Since the pion and axial condensates are derivatives with respect to the sources j_i and a_0^a, respectively, we can immediately deduce from the tree-level effective potential that the tree-level pion and axial condensates are orthogonal, as expected. However, this is not sufficient to guarantee that orthogonality holds at next-to-leading order. In order to verify this, one needs to construct the full dispersion relation (including the most general pionic and axial sources) that determines the NLO effective potential. While the full dispersion relation is too cumbersome to present here, we have explicitly verified that this is indeed the case.
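To see how the ground-state angle α is determined in practice, the sketch below minimizes the standard two-flavor tree-level static potential at j = 0, V_0(α) = −f²m_π² cos α − (1/2) f²µ_I² sin²α (the form to which the light-quark sector of the three-flavor potential reduces), and checks the analytic minimum cos α = (m_π/µ_I)² for µ_I > m_π. The numerical values of f and m_π are merely illustrative.

```python
# Sketch: minimize the tree-level static potential over alpha (j = 0).
import numpy as np
from scipy.optimize import minimize_scalar

f, m_pi = 0.0922, 0.140  # GeV, illustrative values near f_pi and m_pi

def v0(alpha, mu_i):
    """Standard two-flavor tree-level effective potential."""
    return (-f**2 * m_pi**2 * np.cos(alpha)
            - 0.5 * f**2 * mu_i**2 * np.sin(alpha)**2)

for mu_i in [0.15, 0.20, 0.28]:  # GeV, all above the pion mass
    res = minimize_scalar(lambda a: v0(a, mu_i), bounds=(0.0, np.pi),
                          method="bounded")
    print(f"mu_I = {mu_i:.2f} GeV: alpha = {res.x:.3f}, "
          f"analytic arccos((m_pi/mu_I)^2) = {np.arccos((m_pi/mu_i)**2):.3f}")
```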
With this understanding, we proceed by writing down the rotated vacuum in the pion-condensed phase, assuming that all of the pion condensate points in the λ_2 direction and the axial condensate points in the λ_1 direction. Using Eq. (19), we get for the rotated vacuum

Σ_α = e^{iα λ_2} ,

which can be conveniently cast in a form that makes the axial rotation of the normal vacuum transparent [22],

Σ_α = A_α Σ_0 A_α ,  A_α = e^{iα λ_2 / 2} ,

with Σ_0 = 1. As pointed out in Ref. [22] and discussed in some detail in Ref. [23], parameterizing fluctuations around the rotated vacuum requires an equivalent rotation of the generators, without which the theory is not renormalizable and the kinetic terms are non-canonical. The upshot is that the Σ fields in the χPT Lagrangian should be written as

Σ = A_α U Σ_0 U A_α ,

which guarantees that the fluctuations around the rotated ground state are parameterized correctly. U is defined as

U = e^{i φ_a λ_a / (2f)} ,

where φ_a are the fluctuations of the pion, kaon and eta fields and λ_a are the unrotated generators. It follows from the parametrization above that Σ reduces to U² when α = 0.

Expanding the leading-order χPT Lagrangian using Σ, we obtain the structure

L_2 = L_2^static + L_2^linear + L_2^quadratic + ... ,

where L_2^static is the contribution with no derivatives or fluctuations, L_2^linear is linear in the fields, and L_2^quadratic is quadratic in them. To make the notation leaner, we first define j-dependent masses m_j and m̃_j; the masses appearing in the Lagrangian in the pion sector, the charged kaon sector, and for the eta are then expressed in terms of m_j and m̃_j. Since in the following sections we will need these masses in the limit of a zero axial vector source, we adopt the convention that an equals sign carrying the label a = 0 indicates that the equation holds in this limit.

Next, using the quadratic Lagrangian, we find the inverse propagator, where P = (p_0, p) is the four-momentum in Minkowski space, such that P² = p_0² − p². The inverse propagator for the charged pions is D⁻¹_12, that for the charged kaons is D⁻¹_45, and that for the neutral kaons is D⁻¹_67, with the masses defined above. In order to renormalize the one-loop effective potential, we also need the tree-level contribution from the O(p⁴) χPT Lagrangian L_4 [17]. The bare low-energy constants L_i and H_i are expressed in terms of renormalized, scale-dependent couplings L_i^r and H_i^r and the constants Γ_i and Δ_i [17]; the couplings L_i^r and H_i^r run so as to ensure the scale independence of physical observables in χPT. We only need the static contribution from L_4, Eq. (50).

3 Effective potential

In this section, we calculate the next-to-leading-order effective potential using the Lagrangian from the previous section. We begin with the tree-level effective potential V_0, which is simply given by minus the static Lagrangian. Similarly, the next-to-leading-order static contribution V_1^static is given by minus the static part of L_4. The one-loop contributions from the neutral pion and the eta meson take the standard quadratic form; the corresponding integral can be evaluated easily using dimensional regularization, and the result is stated in Appendix A, Eq. (A.3). The one-loop contribution from the charged pions, on the other hand, involves the energies E_π± of Eq. (61). In order to isolate the divergences, we expand E_π± in powers of p around infinity, up to the terms that contain divergences. Noting that the divergences in Eq.
(62) match those of the full integrand at large momenta, we can isolate the divergences by writing the charged-pion integral as the sum of a divergent piece and a finite piece, where the divergent and finite parts of the charged pion integrals satisfy Eq. (65). The one-loop contribution to the effective potential from the charged kaons has an analogous form, where we have used that m_4 = m_5. The integrand can be factorized and the integral rewritten in terms of m̃_4² = m_4² + (1/4)m_45². Shifting the integration variable as p_0 ± i m_45/2 → p_0, which is permitted since the p_0 integral runs from negative infinity to positive infinity, we obtain a simple expression for the one-loop contribution from the charged kaons in terms of m̃_4. Noting that the contribution from the neutral kaons is identical, since log D⁻¹_67 = log D⁻¹_45, and using Eq. (68) and Eq. (A.3), we obtain the divergent contribution to the full one-loop potential. Combining Eq. (69) with the tree-level contribution from L_2, the counterterm L_4, and renormalization of the couplings L_i and H_i according to Eqs. (51)−(52), we obtain the final form of the one-loop effective potential, Eq. (70). For zero pion and axial sources, j = 0 and a_0^1 = 0 respectively, Eq. (70) reduces to the result of Ref. [20].

4 Quark, pion and axial condensates

In this section, we calculate the light quark, strange, pion and axial condensates. The up-quark and down-quark condensates are equal in the isospin limit, and we denote them by ⟨ψ̄ψ⟩. The light quark, strange, pion and axial condensates are then defined as derivatives of the effective potential with respect to the light quark mass, the strange quark mass, the pseudoscalar source, and the axial source, respectively:

⟨ψ̄ψ⟩ = (1/2) ∂V_eff/∂m ,  ⟨s̄s⟩ = ∂V_eff/∂m_s ,  ⟨π⟩ = (1/2) ∂V_eff/∂j ,  ⟨A⟩ = ∂V_eff/∂a_0^1 .

Our definition of the light quark condensate, ⟨ψ̄ψ⟩ = ⟨ūu⟩ = ⟨d̄d⟩, is different from the definition used in the finite isospin lattice QCD simulation of Ref. [24], where ⟨ψ̄ψ⟩ = ⟨ūu⟩ + ⟨d̄d⟩. This difference explains the extra factor of 1/2 in the definition of the quark condensate above. We also define the pion condensate with an extra factor of 1/2 compared with the lattice work [24]. Their pionic source λ then corresponds exactly to our source j.

At tree level, the quark and pion condensates rotate with the angle α, ⟨ψ̄ψ⟩ = ⟨ψ̄ψ⟩_tree cos α and ⟨π⟩ = ⟨ψ̄ψ⟩_tree sin α, where the tree-level chiral condensate in the normal vacuum is ⟨ψ̄ψ⟩_tree = −f²B_0, while the tree-level axial condensate is proportional to µ_I sin α cos α. Similarly, the NLO strange-quark condensate follows by differentiation; here there is no finite effective-potential contribution, since the kaon one-loop contributions can be written in the standard quadratic form. The result reduces to that of Ref. [17] in the limit of zero isospin chemical potential. Using the definition of the pion condensate, we obtain the NLO pion condensate, which vanishes for µ_I ≤ m_π since α = 0 there. Finally, the axial condensate, which is zero in the normal vacuum, becomes nonzero in the pion-condensed phase. In the final result, setting α = 0 gives zero, as required, since pion condensation is required for the axial condensate to form.

4.1 Two-flavor χPT in the large-m_s limit

In the limit m_s ≫ m_u = m_d, we expect on effective field theory grounds that the degrees of freedom containing an s-quark, i.e. the kaons and the eta, decouple. Our results for the light-quark condensate and the pion condensate should then reduce to the two-flavor case, albeit with renormalized couplings. The only reference left to the s-quark is in the expressions for the modified couplings l_i^r and h_i^r and the modified parameters f̃ and B̃_0, see Eqs. (86)−(90) below. This was shown explicitly in Ref. [17], where relations among the low-energy constants in two- and three-flavor χPT were derived. We begin by expanding the light-quark condensate in inverse powers of m_s. Eq.
(79) then reduces to a two-flavor expression in which we have introduced new renormalized couplings l_1^r−l_4^r and h_1^r, as well as the modified parameters f̃ and B̃_0, defined in Eqs. (86)−(90); the new mass parameters m̃² are defined analogously. The parameters B̃_0 and f̃ can also be obtained by considering the one-loop expressions for the chiral condensate and the pion decay constant while ignoring the loop corrections from the pions, i.e. they are obtained by integrating out the s-quark. The relations between the renormalized couplings l_i^r and the low-energy constants l̄_i of two-flavor χPT are given in Eqs. (91)−(92), where γ_1 = 1/3, γ_2 = 2/3, γ_3 = −1/2, γ_4 = 2, and δ_1 = 2 [16]. These equations can be used to calculate the running of the couplings l_i^r and h_i^r with the renormalization scale. One can then verify that the running of the left-hand and right-hand sides of Eqs. (86)−(87) is the same. One can also verify that the modified parameters f̃² and B̃_0 do not run. Inserting these relations into Eq. (79), we find the two-flavor light-quark condensate; the pion condensate can be calculated in the same way, and the axial condensate in the large-m_s limit follows analogously.

In order to evaluate the condensates in two-flavor χPT, we need the ground-state value of α, which is obtained from the effective potential of two-flavor χPT in the presence of a pseudo-scalar source. The two-flavor effective potential, Eq. (96), can be found by taking the large-m_s limit of the effective potential, Eq. (70), with the same identification of two-flavor LECs as was made for the condensates. Taking appropriate derivatives of Eq. (96) yields the various condensates. However, note that 2B̃_0 m is a reference scale M and must be held fixed when taking the partial derivative with respect to m to obtain the quark condensate. We note that in the two-flavor effective potential and in the condensates, B̃_0 of Eq. (90) appears in the leading-order terms and B_0 appears in the next-to-leading-order terms. The large-m_s limit we perform has a formal ordering of scales: the first relation, m_u = m_d, is the isospin limit; the second, m_u = m_d ≪ m_s, is the large-m_s limit; and the last, that m_s remain well below the chiral symmetry breaking scale, ensures the validity of an effective field theory approach. In this formal limit the B_0 in the next-to-leading-order result can be identified with B̃_0 up to the order we are working. Finally, our expansion in inverse powers of m_s also assumes B_0 j ≪ B_0 m_s and µ_I² ≪ B_0 m_s.

5 Numerical results and discussion

In this section, we use the results from the previous section to plot the strange-quark condensate and the axial condensate at zero pionic and axial sources. We also plot the light quark and pion condensates for a nonzero pionic source. Finally, we compare the nonzero-source results with lattice simulations and compare the axial condensate with the available lattice results at zero pionic and axial sources. Finite isospin QCD on the lattice is studied by adding an explicit pionic source, since spontaneous symmetry breaking in a finite volume is forbidden. Obtaining the pion condensate then requires not just taking the continuum limit but also extrapolating to zero external source, which is a difficult procedure. We also note that the quark, pion and axial condensates given by Eqs. (79)−(82) depend on the ground-state value of α, which can be found by minimizing the one-loop effective potential, i.e. solving ∂V_eff/∂α = 0, at zero axial vector source.

5.1 Definitions and choice of parameters

Since we are interested in the condensates as functions of the isospin chemical potential µ_I, i.e.
in medium effects, we plot the (normalized) change in the chiral condensate, the strange-quark condensate, the pion condensate, and the axial condensate relative to the normal vacuum, using the definitions of Eq. (98), which follow Ref. [24]. Note that Σ_a is simply the negative of the axial condensate, and the normalization has been chosen to match that of lattice QCD [12]. The chiral and pion condensate deviations satisfy

Σ²_ψ̄ψ + Σ²_π = 1

at tree level, in both the normal vacuum and the pion-condensed phase, even in the presence of a pseudoscalar source. For the calculations of the deviations and the axial condensate we will use values of the quark masses fixed by the tree-level pion and kaon masses, m_π,tree and m_K,tree, allowing for a 5% uncertainty, consistent with Ref. [28]. Note that since B_0 is fixed by the up and down quark masses and the GOR relation, the strange quark mass is fixed in three-flavor χPT by the value of B_0 and the tree-level pion and kaon masses. In three-flavor χPT we cannot fix the strange quark mass independently of the up and down quark masses. In order to compare with simulations, we adopt the values of the pseudo-Nambu-Goldstone masses and decay constants of Ref. [27]. We point out that the quark masses in Eq. (100) are not those used in the simulations of Ref. [24], as the latter are unknown. The quark masses from Ref. [28] are approximately 3% larger than those given in Eq. (100). In Ref. [10], we therefore varied the quark mass m_u = m_d by 5% to gauge the sensitivity of the results. It turns out that the dominant uncertainty stems from the uncertainty in the l̄_i's. The same remains true for the three-flavor χPT condensates. Additionally, we choose experimentally determined values for the three-flavor LECs and their associated uncertainties [29]. The quoted numerical values are at the renormalization scale µ = 0.77 GeV, which is approximately the rho mass, m_ρ [25,29]. We will only use the central values of the three-flavor LECs when generating our plots, since including the uncertainties gives rise to a complex η-mass, which is unphysical [23]. From these inputs we obtain the bare parameters, and similarly the experimentally determined two-flavor LECs used to generate the two-flavor condensates; the latter are proportional to the running LECs evaluated at the bare pion mass, as follows from their definitions in Eqs. (91) and (92).

5.2 Deviation of condensates at j = 0

In Fig. 1, we plot the axial condensate deviation, which is the negative of the axial condensate, at tree level and at NLO. We find that both the tree-level and the NLO axial condensates are in excellent agreement with lattice QCD. The difference between tree level, NLO, and the lattice is negligible up to µ_I ≈ 1.2m_π, with the differences becoming more significant with increasing isospin chemical potential. The difference between the two-flavor and the three-flavor result is tiny.

In Fig. 2, we plot the strange-quark condensate deviation at both tree level (red) and next-to-leading order (green). At tree level, the pion condensate does not expel the strange-quark condensate. However, at NLO, the deviation of the strange-quark condensate increases above one up to approximately µ_I = 1.4m_π and then decreases monotonically relative to its vacuum value. Compared to the light-quark condensate, the decrease is significantly smaller.
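At tree level and j = 0, the deviations defined above take a simple closed form: Σ_ψ̄ψ = cos α = (m_π/µ_I)², Σ_π = sin α, and the axial condensate is proportional to µ_I sin α cos α. The snippet below numerically checks that this tree-level axial shape peaks at µ_I = 3^{1/4} m_π, the value quoted in the Introduction; it is a consistency check of the tree-level formulas, not a reproduction of the NLO curves in Fig. 1.

```python
# Check the tree-level peak of the axial condensate at mu_I = 3**0.25 * m_pi.
import numpy as np

mu = np.linspace(1.0001, 3.0, 200000)   # mu_I in units of m_pi
cos_a = 1.0 / mu**2                     # tree-level minimum at j = 0
sin_a = np.sqrt(1.0 - cos_a**2)

axial = mu * sin_a * cos_a              # proportional to the axial condensate
mu_peak = mu[np.argmax(axial)]
print(f"peak at mu_I = {mu_peak:.4f} m_pi; expected 3**0.25 = {3**0.25:.4f}")
```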
Deviation of condensates at j = 0

In Fig. 1, we plot the axial condensate deviation, which is the negative of the axial condensate, at tree level and at NLO. We find that both the tree-level and the NLO axial condensates are in excellent agreement with lattice QCD. The difference between tree level, NLO and the lattice is negligible up to $\mu_I \approx 1.2\, m_\pi$, with the differences becoming more significant with increasing isospin chemical potential. The difference between the two-flavor and the three-flavor result is tiny.

In Fig. 2, we plot the strange-quark condensate deviation at both tree level (red) and next-to-leading order (green). At tree level, the pion condensate does not expel the strange-quark condensate. However, at NLO, the deviation of the strange-quark condensate increases above one up to approximately $\mu_I = 1.4\, m_\pi$ and then decreases monotonically relative to its vacuum value. Compared to the light-quark condensate, the decrease is significantly smaller. Note that the normalizations in the chiral and strange-quark condensate deviations differ by factors of $f_\pi^2$ and $f_K^2$, respectively, which are insufficient to explain the difference in the deviations of the respective condensates. It would be of interest to calculate the strange-quark condensate on the lattice to see if it displays the non-monotonic behavior found here.

[Fig. 2: Deviation of the strange-quark condensate (normalized to 1) from the normal vacuum value, $\Sigma_{\bar{s}s}$, in three-flavor χPT for j = 0. See main text for details.]

Deviation of condensates at j ≠ 0

In this subsection, we compare the χPT light-quark and pion condensates at finite j with available lattice QCD data [12,27,30]. In Fig. 3, we show the deviation of the chiral and pion condensates, as defined in Eq. (98), for $j = 0.00517054\, m_\pi$, which is the smallest value of the source for which lattice data are available. In Fig. 4, we show the deviation of the chiral and pion condensates for $j = 0.0129263\, m_\pi$. We note that no chiral and pion condensate data are available for j = 0, since they are "cumbersome" to generate [24]. For a fair comparison with the finite-j lattice data, it is important to know the quark masses in the continuum. Since quark masses are not physical observables, their values depend on the method of renormalization. For the lattice calculation, a continuum extrapolation was not performed. Consequently, we use the lattice continuum quark masses of Ref. [28] for our comparison (and include a 5% uncertainty), with the expectation that the difference with the lattice calculations of Refs. [12,30] is small.

[Fig. 3: Upper panel shows the deviation of the light-quark condensate (normalized to 1) from the vacuum value, $\Sigma_{\bar\psi\psi}$, for $j = 0.00517054\, m_\pi$. Lower panel shows the deviation of the pion condensate from the vacuum value, $\Sigma_\pi$, for the same source. See main text and [12,24] for details.]

The upper panel of Fig. 3 shows the light-quark condensate deviation at $j = 0.00517054\, m_\pi$ from χPT and lattice QCD as a function of $\mu_I/m_\pi$. Firstly, we observe that the NLO correction to the LO result (red solid line) is very small for both $N_f = 2$ (blue dashed line) and $N_f = 3$ (green dashed line). All three curves are in excellent agreement with the lattice results (black points), with the tree-level results in slightly better agreement.

[Fig. 4: Upper panel shows the deviation of the light-quark condensate (normalized to 1) from the vacuum value, $\Sigma_{\bar\psi\psi}$, for $j = 0.0129263\, m_\pi$. Lower panel shows the deviation of the pion condensate from the vacuum value, $\Sigma_\pi$, for the same source. See main text and [12,24] for details.]

In the lower panel of Fig. 3, we plot the pion condensate deviation for the same value of the pionic source. The pion condensate is nonzero for all values of $\mu_I$ because the nonzero pseudoscalar source explicitly breaks isospin symmetry. The tree-level and NLO pion condensates agree with each other and with the lattice results up to $\mu_I \approx 1.2\, m_\pi$. Beyond that, χPT underestimates the pion condensate, with two-flavor χPT in better agreement with lattice QCD than three-flavor χPT. Notice that the LO results level off for large values of $\mu_I$, independently of the source j, in disagreement with both the lattice data and the NLO results. Thus the NLO result is a significant improvement over the tree-level result, although at NLO we can no longer interpret α as the angle specifying how the chiral condensate is rotated into the pion condensate. A similar violation is seen in the NJL model [31].
In Fig. 4, we plot the light-quark and pion condensate deviations at $j = 0.0129263\, m_\pi$ from χPT and lattice QCD. The qualitative behavior is similar to that for $j = 0.00517054\, m_\pi$, and the same remarks apply, in particular the improved agreement of the chiral condensate with the lattice data.
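As background for the rotation-angle interpretation of α referred to above, the standard leading-order picture (quoted under the assumption that the paper follows the usual χPT conventions) reads:

```latex
% LO (tree level): the condensates are rotated into each other by \alpha,
% giving the sum rule whose NLO violation is discussed in the text above.
\frac{\langle\bar\psi\psi\rangle_{\mu_I}}{\langle\bar\psi\psi\rangle_0} = \cos\alpha ,
\qquad
\frac{\langle\pi\rangle_{\mu_I}}{\langle\bar\psi\psi\rangle_0} = \sin\alpha ,
\qquad
\Sigma_{\bar\psi\psi}^2 + \Sigma_\pi^2 = 1 ,
\qquad
\cos\alpha = \frac{m_\pi^2}{\mu_I^2} \;\; (\mu_I \ge m_\pi,\; j = 0).
```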
2020-12-09T02:41:23.459Z
2020-12-08T00:00:00.000
{ "year": 2020, "sha1": "a79887b605c41075a23051f7632809436247b2e7", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1140/epjc/s10052-021-09212-7.pdf", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "a79887b605c41075a23051f7632809436247b2e7", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
88517364
pes2o/s2orc
v3-fos-license
High Temperature Structure Detection in Ferromagnets

This paper studies structure detection problems in high temperature ferromagnetic (positive interaction only) Ising models. The goal is to distinguish whether the underlying graph is empty, i.e., the model consists of independent Rademacher variables, versus the alternative that the underlying graph contains a subgraph of a certain structure. We give matching upper and lower minimax bounds under which testing this problem is possible/impossible, respectively. Our results reveal that a key quantity called graph arboricity drives the testability of the problem. On the computational front, under a conjecture on the computational hardness of sparse principal component analysis, we prove that, unless the signal is strong enough, there are no polynomial time linear tests on the sample covariance matrix which are capable of testing this problem.

Introduction

Graphical models are a powerful tool in high dimensional statistical inference. The graph structure of a graphical model gives a simple way to visualize the dependency among the variables in multivariate random vectors. The analysis of graph structures plays a fundamental role in a wide variety of applications, including information retrieval, bioinformatics, image processing and social networks (Besag, 1993; Durbin et al., 1998; Wasserman and Faust, 1994; Grabowski and Kosiński, 2006). Motivated by these applications, theoretical results on graph estimation (Meinshausen and Bühlmann, 2006; Liu et al., 2009; Montanari and Pereira, 2009; Ravikumar et al., 2011; Cai et al., 2011), single edge inference (Jankova et al., 2015; Ren et al., 2015; Neykov et al., 2015; Gu et al., 2015) and combinatorial inference (Neykov et al., 2016; Neykov and Liu, 2017) have been studied in the literature. In this paper we are concerned with the distinct problem of structure detection. In structure detection problems one is interested in testing whether the underlying graph is empty (i.e., the random variables are independent) versus the alternative that the graph contains a subgraph of a certain structure. A variety of detection problems have been previously considered in the literature (see, for example, Addario-Berry et al., 2010; Arias-Castro et al., 2012). These works mainly focus on covariance or precision matrix detection problems and establish minimax lower and upper bounds. While covariance and precision matrix detection problems are inherently related to the Gaussian graphical model, in this paper we focus on detection problems under the zero-field ferromagnetic Ising model. The Ising model is a probability model for binary data originally developed in statistical mechanics (Ising, 1925) and has a wide range of modern applications, including image processing (Geman and Geman, 1984), social networks and bioinformatics (Ahmed and Xing, 2009). Below we formally introduce the model and the problems of interest.

Zero-field ferromagnetic Ising model. Under a zero-field Ising model, the binary vector $X \in \{\pm 1\}^d$ follows a distribution with probability mass function given by
$$\mathbb{P}_\Theta(X = x) = \frac{\exp\big(\sum_{i<j} \theta_{ij} x_i x_j\big)}{Z_\Theta},$$
where $\Theta = (\theta_{ij})_{d\times d}$ is a symmetric interaction matrix with zero diagonal entries and $Z_\Theta$ is the partition function defined as
$$Z_\Theta = \sum_{x \in \{\pm 1\}^d} \exp\Big(\sum_{i<j} \theta_{ij} x_i x_j\Big).$$
The non-zero elements of the symmetric matrix Θ specify a graph $G(\Theta) = G = (V, E)$ with vertex set $V = \{1, \ldots, d\}$ and edge set $E = \{(i, j) : \theta_{ij} \neq 0\}$. We will refer to the graph G(Θ) as G whenever it is clear what the underlying matrix Θ is.
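To make the normalization concrete, here is a brute-force sketch (ours, not the paper's code; exponential in d, illustration only) that evaluates the pmf and partition function as reconstructed above:

```python
import itertools
import numpy as np

# Brute-force evaluation of the zero-field Ising pmf
# P_Theta(x) = exp(sum_{i<j} theta_ij x_i x_j) / Z_Theta.
# Since Theta is symmetric with zero diagonal, sum_{i<j} theta_ij x_i x_j
# equals x' Theta x / 2.
def partition_function(Theta):
    d = Theta.shape[0]
    return sum(np.exp(0.5 * np.array(x) @ Theta @ np.array(x))
               for x in itertools.product([-1, 1], repeat=d))

def pmf(Theta, x):
    x = np.asarray(x)
    return np.exp(0.5 * x @ Theta @ x) / partition_function(Theta)

# Example: a ferromagnetic 3-clique with common interaction theta = 0.2
# (note ||Theta||_F = 0.2*sqrt(6) <= 1/2, the high-temperature condition below).
d, theta = 3, 0.2
Theta = theta * (np.ones((d, d)) - np.eye(d))
print(sum(pmf(Theta, x) for x in itertools.product([-1, 1], repeat=d)))  # -> 1.0
```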
It is not hard to check that, by the definition of G, the vector X is Markov with respect to G; that is, two elements $X_i$ and $X_j$ are conditionally independent given the remaining values $X_{-(i,j)}$ if and only if $(i, j) \notin E$. Here, the term zero-field specifies that there is no external magnetic field affecting the system, meaning that the energy function $\sum_{i,j=1}^d \theta_{ij} X_i X_j$ consists purely of terms of degree 2 (i.e., there are no main effects). In this paper, we further focus on zero-field ferromagnetic models, where we also assume that $\theta_{ij} \geq 0$ for $i, j \in \{1, \ldots, d\}$. In addition, our analysis is in the high-temperature setting, where the magnitudes of the $\theta_{ij}$'s are below a certain level. More specifically, throughout this paper we assume that $\|\Theta\|_F \leq \frac{1}{2}$, where $\|\Theta\|_F = \big(\sum_{i,j=1}^d \theta_{ij}^2\big)^{1/2}$ is the Frobenius norm of Θ.

Structure detection problems. As described in the previous paragraph, a zero-field ferromagnetic Ising model specifies a graph $G = (V, E)$. In a structure detection problem, we are interested in testing whether the underlying graph G is an empty graph versus the alternative that G belongs to a set of graphs with a certain structure. Specifically, let $G_\emptyset = (V, \emptyset)$ be the empty graph, and let $\mathcal{G}_1$ be a class of graphs not containing $G_\emptyset$. The following hypothesis testing problem is an example of a detection problem. Given a sample of n independent observations $X_1, \ldots, X_n \in \mathbb{R}^d$ from a zero-field ferromagnetic Ising model, we aim to test
$$H_0: G = G_\emptyset \quad \text{versus} \quad H_1: G \in \mathcal{G}_1. \qquad (1.1)$$
The term "detection" here is used in the sense that if one rejects the null hypothesis, the presence of a non-null graph has been detected. In (1.1) the graph class $\mathcal{G}_1$ can be arbitrary, which makes the hypothesis testing problem (1.1) a very general problem. We now give a specific instance of this problem which is of particular importance. Let $G^*$ be a fixed graph with $s = o(\sqrt{d})$ non-isolated vertices which represents some specific graph structure. The structure detection problem that considers all possible "positions" of $G^*$ is of the following form:
$$H_0: G = G_\emptyset \quad \text{versus} \quad H_1: G \in \mathcal{G}_1(G^*), \qquad (1.2)$$
where $\mathcal{G}_1(G^*)$ is the class of all graphs that contain a size-s subgraph isomorphic to $G^*$. While problems (1.1) and (1.2) give a good intuition for what a detection problem is, in order to facilitate testing we need to impose certain assumptions on the matrix Θ, as otherwise, even with graphs vastly different from the empty graph, there might not be enough "separation" between the null and the alternative hypotheses. Since the underlying graph G is specified by the matrix Θ, we can reformulate problems (1.1) and (1.2) into testing problems on Θ. Given a class of graphs $\mathcal{G}_1$, we define the corresponding parameter space with minimum signal strength θ > 0 as
$$S(\mathcal{G}_1, \theta) = \Big\{\Theta = (\theta_{ij})_{d\times d} : \Theta = \Theta^T,\; G(\Theta) \in \mathcal{G}_1,\; \|\Theta\|_F \leq 1/2,\; \min_{(i,j)\in E(G(\Theta))} \theta_{ij} \geq \theta\Big\},$$
so that problem (1.1) is reformulated as testing $H_0: \Theta = 0$ versus $H_1: \Theta \in S(\mathcal{G}_1, \theta)$ (1.4), and problem (1.2) analogously becomes the detection problem (1.5) with $\mathcal{G}_1(G^*)$ in place of $\mathcal{G}_1$.

[Figure 1 caption fragment: (c) is a 5-star; (d) is an example of a graph that has community structure with k = 5 and l = 4. We can write the detection problems as (1.5) by defining the corresponding shown graphs as G*.]

The results of our paper cover the following examples.

Main Contributions

There are three major contributions of this paper. First, we develop a novel technique to derive minimax lower bounds for structure detection problems in Ising models. Our proof technique relates the Ising model probability mass function and the χ²-divergence between two distributions to the number of certain Eulerian subgraphs of the underlying graph.
With this technique, we are able to obtain a general information-theoretic lower bound for an arbitrary alternative hypothesis, which can be immediately applied to any of the four examples described in the previous section. Second, we propose a linear scan test on the sample covariance matrix that matches our minimax lower bound for arbitrary structure detection problems, in certain regimes. Along with our general minimax lower bound, this procedure reveals that a quantity called arboricity (i.e., a certain maximum edge-to-vertex ratio over subgraphs of the graphs in the alternative hypothesis) essentially determines the information-theoretic limit of the testing problem. This matches the intuition that in order to distinguish a graph with small signal strength from the empty graph, one needs to examine the densest part of the graph. Furthermore, the denser the graph is, the easier it is to detect, and the precise measure of graph density turns out to be graph arboricity. In addition, we also study computational lower bounds for structure detection problems. Based on a conjecture on the computational hardness of sparse Principal Component Analysis (PCA), which has been studied in recent works (Berthet and Rigollet, 2013b,a; Gao et al., 2014), we prove that no polynomial time linear test on the sample covariance matrix can detect structures successfully unless there is a sufficiently large signal strength. In addition to this result, we also derive another computational lower bound under the oracle computational model studied by Feldman et al. (2015a,b) and Wang et al. (2015).

Related Work

Plenty of work has been done on graph estimation (also known as graph selection) in Ising models. Santhanam and Wainwright (2012) gave the first information-theoretic lower bounds for graph selection problems for bounded edge cardinality and bounded vertex degree models. Later, Tandon et al. (2014) proposed a general framework for obtaining information-theoretic lower bounds for graph selection in ferromagnetic Ising models, and showed that the lower bound is specified by certain structural conditions. On the other hand, Ravikumar et al. (2010) proposed an algorithm for structure learning based on $\ell_1$-regularized logistic regression that works in the high temperature regime (Montanari and Pereira, 2009). Bresler (2015) gave a polynomial time algorithm that works in both the low and high temperature regimes. Compared to graph estimation, structure detection is a statistically easier problem. As a consequence, the limitations on signal strength that we exhibit in this paper are weaker than the corresponding requirements used in the graph estimation literature. Structure detection problems have been studied in Addario-Berry et al. (2010) and Arias-Castro et al. (2012). However, all these works focus on Gaussian random vectors. Specifically, Addario-Berry et al. (2010) study testing the existence of specific subsets of components in a Gaussian vector whose means are non-zero, based on a single observation. Arias-Castro et al. (2012) consider the correlation graph of a Gaussian random vector and establish upper and lower bounds for detecting certain classes of fully connected cliques based on one sample. In a follow-up work, Arias-Castro et al. (2015b) generalize the result to multiple i.i.d. samples. Arias-Castro et al. (2015a) give another related result on detecting a region of a Gaussian Markov random field against a background of white noise.
The major difference between these existing works and our work is that we focus on detection in the Ising model, and our results work not only for cliques but also for general graph structures. Recently, Neykov et al. (2016), Lu et al. (2017) and Neykov and Liu (2017) proposed a novel type of problem where one considers testing whether the underlying graph obeys certain combinatorial properties. We stress that while related to structure detection, these problems are fundamentally different, as structure detection is a statistically simpler task. It is not surprising, therefore, that the algorithms we develop are very different from those in the aforementioned works, and the proofs of our lower bounds use different techniques. Our computational lower bound result follows the recent line of work on computational barriers for statistical models (Berthet and Rigollet, 2013b,a; Ma et al., 2015; Gao et al., 2014; Brennan et al., 2018) based on the planted clique conjecture. Berthet and Rigollet (2013b) focus on testing methods based on Minimum Dual Perturbation (MDP) and semidefinite programming (SDP) and prove that such polynomial time testing methods cannot attain the minimax optimal rate for sparse PCA. Berthet and Rigollet (2013a) prove a computational lower bound for a generalized sparse PCA problem which includes all multivariate distributions satisfying certain tail probability assumptions on the quadratic form. Ma et al. (2015) consider the Gaussian submatrix detection problem and propose a framework to analyze the computational limits of continuous random variables via constructing a sequence of asymptotically equivalent discretized models. Inspired by the results in Ma et al. (2015), Gao et al. (2014) consider computational lower bounds for Gaussian sparse Canonical Correlation Analysis (CCA) as well as sparse PCA problems. Our computational lower bound result builds on these previous studies of the sparse PCA problem. We summarize these results and base our result for Ising models directly on a sparse PCA conjecture. By doing this, we are able to use a novel proof technique that utilizes the high-dimensional central limit theorems of Chernozhukov et al. (2014). Other related works on Ising models include the following. Berthet et al. (2016) study the Ising block model by providing efficient methods for block structure recovery as well as information-theoretic lower bounds. Mukherjee et al. (2018) study upper and lower bounds for the detection of a sparse external magnetic field in Ising models. Daskalakis et al. (2018) consider goodness-of-fit and independence testing in Ising models using pairwise correlations. Gheissari et al. (2017) establish concentration inequalities for polynomials of a random vector in contracting Ising models.

Notation

We use the following notation in our paper. For vectors we use standard norm notation, and we also use the standard asymptotic notations O(·) and o(·). Let $a_n$ and $b_n$ be two sequences and assume that $b_n$ is non-zero for large enough n. We write $a_n = O(b_n)$ if $\limsup_{n\to\infty} |a_n/b_n| < \infty$ and $a_n = o(b_n)$ if $\lim_{n\to\infty} a_n/b_n = 0$. Let $V = \{1, \ldots, d\}$ be the complete vertex set. In this paper we consider graphs with d vertices over the vertex set V. For a graph G, let $E(G) = \{(i, j) : G$ has an edge connecting vertices $i$ and $j\}$, where $(i, j) = (j, i)$ are undirected pairs. Moreover, we denote by $V(G) = \{i \in V : G$ has an edge connecting vertex $i\}$ the set of non-isolated vertices of G.

Organization of the Paper

Our paper is organized as follows.
In Section 2, we present our main information-theoretic lower bound result as well as its applications to various detection problems. In Section 3 we develop a general procedure for constructing optimal linear scan tests on the sample covariance matrix. In Section 4 we examine the computational limits of linear tests on the sample covariance matrix by comparing the covariance matrices of Ising and sparse PCA models. Sections 5 and 6 contain the proofs of the main results of Sections 2 and 3, respectively. The remaining detailed proofs are all placed in Section A.1. In Section B we provide an additional proof of a computational lower bound under the oracle computational model.

Lower Bounds

The minimax risk of detection problem (1.4) is defined as
$$\gamma[S(\mathcal{G}_1, \theta)] = \inf_{\psi} \Big\{ \mathbb{P}_{0,n}(\psi = 1) + \max_{\Theta \in S(\mathcal{G}_1, \theta)} \mathbb{P}_{\Theta,n}(\psi = 0) \Big\}, \qquad (2.1)$$
where $\mathbb{P}_{0,n}$ and $\mathbb{P}_{\Theta,n}$ are the joint probability measures of the n i.i.d. samples under the null and alternative hypotheses, respectively. The infimum in (2.1) is taken over all measurable test functions.

[Figure 2: Illustration of arboricity. The nodes and black lines represent the vertices and edges of the graph G, respectively. The vertex set V′ that maximizes $|E(G_{V'})|/(|V'| - 1)$ is denoted by green nodes, which also gives the densest subgraph of G. Here R(G) = 3.]

In this section, we derive necessary conditions on the signal strength θ required for detection problems to admit tests which are not asymptotically powerless. Our results will show that the difficulty of testing an empty graph against $\mathcal{G}_1$ is determined by a quantity called arboricity, originally introduced in graph theory by Nash-Williams (1961) to quantify the minimum number of forests into which the edges of a given graph can be partitioned. For a graph $G \in \mathcal{G}_1$ and a vertex set $V' \subseteq V$, let $G_{V'}$ be the graph obtained by restricting G to the vertices in V′ (i.e., removing all edges connected to vertices in $V \setminus V'$). The arboricity of G is defined as follows:
$$R(G) = \max_{V' \subseteq V} \left\lceil \frac{|E(G_{V'})|}{|V'| - 1} \right\rceil, \qquad (2.2)$$
where $\lceil \cdot \rceil$ is the ceiling function, and 0/0 is understood as 0. The arboricity of a graph measures how dense the graph is; for an illustration see Figure 2. Let $G_\emptyset = (V, \emptyset)$ denote the empty graph. By definition $R(G_\emptyset) = 0$, and for a given graph G, the larger R(G) is, the more different $G_\emptyset$ and G are. We further define
$$R = \min_{G \in \mathcal{G}_1} R(G)$$
to measure the difference in graph density between $G_\emptyset$ and $\mathcal{G}_1$ in a worst-case sense. Let $\mathcal{G}^*$ be a nonempty subset of $\mathcal{G}_1$ such that all graphs in $\mathcal{G}^*$ have arboricity R. By the definition of R, such a nonempty $\mathcal{G}^*$ exists, though it may not be unique. Our analysis works for arbitrary choices of $\mathcal{G}^*$ which satisfy the incoherence condition (Neykov et al., 2016), defined as follows.

Definition 2.1 (Negative association and incoherence condition). For $k \geq 0$, we say the random variables $Y_1, \ldots, Y_k$ are negatively associated if for any $k_1, k_2 \geq 0$ with $k_1 + k_2 \leq k$, any distinct indices $i_1, \ldots, i_{k_1}, j_1, \ldots, j_{k_2}$, and any coordinate-wise non-decreasing functions f and g, we have
$$\mathrm{Cov}\big[ f(Y_{i_1}, \ldots, Y_{i_{k_1}}),\; g(Y_{j_1}, \ldots, Y_{j_{k_2}}) \big] \leq 0.$$
We say that the graph set $\mathcal{G}^*$ is incoherent if, for any fixed graph G, the binary vertex-membership variables $\{\mathbb{1}[i \in V(G')] : i \in V(G)\}$ are negatively associated with respect to uniformly sampling $G' \in \mathcal{G}^*$.

For a graph G, we denote by $A_G$ the adjacency matrix of G. Then, given $\mathcal{G}^*$, we define the corresponding parameter set with minimal signal strength θ as $S^* = \{\theta A_G : G \in \mathcal{G}^*\}$, with associated minimax risk $\gamma(S^*)$ defined as in (2.1); this is the quantity (2.3). It follows that to give a lower bound on $\gamma[S(\mathcal{G}_1, \theta)]$ it suffices to lower bound $\gamma(S^*)$.
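Since (2.2) is a finite maximization over vertex subsets, it can be verified directly on small graphs; the following brute-force sketch (ours; exponential time, illustration only) implements the reconstruction of (2.2) given above:

```python
import itertools
from math import ceil

# Brute-force arboricity-type quantity of (2.2):
# R(G) = max over vertex subsets V' (|V'| >= 2) of ceil(|E(G_{V'})| / (|V'| - 1)).
def density_R(vertices, edges):
    best = 0
    for r in range(2, len(vertices) + 1):
        for Vp in itertools.combinations(vertices, r):
            Vp_set = set(Vp)
            m = sum(1 for (i, j) in edges if i in Vp_set and j in Vp_set)
            best = max(best, ceil(m / (len(Vp) - 1)))
    return best

# A 4-clique has R = ceil(6/3) = 2 (i.e., ceil(s/2) for s = 4); a star has R = 1.
clique4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]
star6 = [(0, j) for j in range(1, 6)]
print(density_R(range(4), clique4), density_R(range(6), star6))  # -> 2 1
```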
We are ready to introduce our main theorem.

Theorem 2.2. Let $\mathcal{G}^*$ be a non-empty subset of $\mathcal{G}_1$ such that all graphs in $\mathcal{G}^*$ have arboricity R, and suppose $\mathcal{G}^*$ is incoherent. Define $N(\mathcal{G}^*)$ to be the maximal expected overlap $\max_{G \in \mathcal{G}^*} \mathbb{E}_{G' \sim U(\mathcal{G}^*)} |V(G) \cap V(G')|$ between a fixed graph in $\mathcal{G}^*$ and an independent uniform draw from $\mathcal{G}^*$. If the signal strength θ satisfies condition (2.4), then $\liminf_{n\to\infty} \gamma(S^*) = 1$.

Remark 2.3. The first term of condition (2.4) is related to both the structural properties of graphs in $\mathcal{G}_1$ and the sample size n, while the second term, involving R and B, and the third term $\frac{1}{8(\Lambda \vee \Gamma)}$ are independent of n. Therefore, when the sample size is large enough, the first term is the leading term determining the necessary signal strength, and the other two terms mainly serve as scaling conditions on θ.

Remark 2.4. The condition (2.4) given by Theorem 2.2 is comparable to the "multi-edge" results given in Neykov et al. (2016), where the authors give minimax lower bounds for combinatorial inference problems in Gaussian graphical models. Unlike our results in Theorem 2.2, the necessary signal strength for Gaussian graphical models given by Neykov et al. (2016) does not explicitly involve graph arboricity. It is also not very clear under what conditions the lower bound given by Neykov et al. (2016) is sharp. In comparison, in this paper we show that graph arboricity is an appropriate quantity that gives sharp lower bounds for any structure detection problem under the incoherence condition and the sparsity assumption $s = O(d^{1/2-c})$ for some c > 0. It is also worth comparing Theorem 2.2 to the results of Neykov and Liu (2017). The lower bounds on the signal θ in Neykov and Liu (2017) typically involve the quantity $\sqrt{\log d / n}$, which is generally much larger than the right-hand side of (2.4) when R is large enough. This is intuitively clear, since detection problems are statistically easier than graph property testing. Our proof strategy is also completely different from the one used by Neykov and Liu (2017), and relies on high-temperature expansions rather than Dobrushin's comparison theorem.

In Theorem 2.2, the incoherence condition on $\mathcal{G}^*$ is not always easy to check. However, it is known that this condition is satisfied by various discrete distributions, including the multinomial and hypergeometric distributions (Joag-Dev and Proschan, 1983; Dubhashi and Ranjan, 1998). In particular, Theorem 2.11 in Joag-Dev and Proschan (1983) states that negative association holds for all permutation distributions. Therefore, for detection problems of the form (1.5), the incoherence condition is always satisfied by picking $\mathcal{G}^*$ to be the set of all graphs isomorphic to $G^*$. This leads to the following corollary (recall that we are assuming $s = o(\sqrt{d})$).

Corollary 2.5. Let $G^*$ be a graph with s vertices and let $\mathcal{G}_1(G^*)$ be the class of all graphs that contain a size-s subgraph isomorphic to $G^*$. If θ satisfies the corresponding instance of condition (2.4), with $N(\mathcal{G}^*) = s^2/d$, then we have $\liminf_{n\to\infty} \gamma(S^*) = 1$.

Examples

In this section we apply Corollary 2.5 to specific detection problems.

Example 2.6 (Empty graph versus non-empty graph). Consider testing the empty graph versus a non-empty graph, as defined in Section 1. If the corresponding signal strength condition holds, we have $\liminf_{n\to\infty} \gamma(S^*) = 1$.

Example 2.8 (Star Detection). For the star detection problem defined in Section 1, if s ≥ 4 and the corresponding signal strength condition holds, then $\liminf_{n\to\infty} \gamma(S^*) = 1$.

Example 2.9 (Community structure detection). For the community structure detection problem defined in Section 1, if k ≥ 4, l ≥ 2 and condition (2.8) holds, we have $\liminf_{n\to\infty} \gamma(S^*) = 1$.

Proof. To calculate $R(G^*)$, we utilize the fact that the arboricity equals the minimum number of forests into which the edges of a given graph can be partitioned (Nash-Williams, 1961). Let $C_1, \ldots, C_l$ be the communities. For $i = 1, \ldots, l$, we know that $C_i$ is a k-clique, whose arboricity is $\lceil k/2 \rceil$; therefore, inside $C_i$, we can partition the graph into $\lceil k/2 \rceil$ forests. There is also an l-clique in $G^*$, consisting of the cross-community edges. This clique can be partitioned into $\lceil l/2 \rceil$ forests.
Note that this l-clique shares only one vertex $v(C_i)$ with the community $C_i$. Therefore, for any forest in the partition of this l-clique and any forest in the partition of $C_i$, we can merge them into a single forest, because the resulting graph is still acyclic. We can keep merging forests from the other communities. Eventually, we can merge l forests from distinct communities into a forest in the l-clique, without introducing any cycles. If $\lceil l/2 \rceil \geq \lceil k/2 \rceil$, we will obtain $\lceil l/2 \rceil$ forests that form a partition of $G^*$; if $\lceil l/2 \rceil < \lceil k/2 \rceil$, then the partition will contain $\lceil k/2 \rceil$ forests. Therefore, by the equivalent definition of arboricity given in Nash-Williams (1961), $R(G^*) = \max\{\lceil k/2 \rceil, \lceil l/2 \rceil\}$. We now compare the upper bounds on $\|A_{G^*}\|_F$ and $\|A_{G^*}\|_1$. If k ≥ 4 and l ≥ 2, we have $l \geq 1 + l/2$, which yields the stated bound on $\|A_{G^*}\|_F$; moreover, a corresponding bound holds for $\|A_{G^*}\|_1$. Therefore, by Corollary 2.5, if (2.8) holds we have $\liminf_{n\to\infty} \gamma(S^*) = 1$.

Upper Bounds

In this section we construct upper bounds for the hypothesis testing problem (1.1). We propose a general framework for testing an empty graph $G_\emptyset$ against an arbitrary graph set $\mathcal{G}_1$. We remind the reader that the arboricity of a graph G is defined in (2.2) as
$$R(G) = \max_{V' \subseteq V} \left\lceil \frac{|E(G_{V'})|}{|V'| - 1} \right\rceil,$$
where $G_{V'}$ is the graph obtained by restricting G to the vertex set V′. The arboricity R of $\mathcal{G}_1$ is then defined as $R = \min_{G \in \mathcal{G}_1} R(G)$. We now introduce the concepts of witnessing subgraph and witnessing set. Before that, we remind the reader that in this paper all graphs have d vertices (i.e., all graphs are over the vertex set V), unless otherwise specified. Therefore a subgraph G′ of a graph G = (V, E) is a graph with d vertices whose edge set is a subset of the edge set of the larger graph, i.e., G′ = (V, E′) where E′ ⊆ E. Importantly, the notations V(G) and V(G′) refer to the non-isolated vertices of G and G′, which may be strict subsets of V. Here we remark that for H to be a witnessing subgraph of G, it is unnecessary to have R(H) = R(G); requiring only that H achieve arboricity R is weaker, since by definition we have R ≤ R(G) for any $G \in \mathcal{G}_1$. This implies that every graph $G \in \mathcal{G}_1$ has at least one witnessing subgraph, which may be obtained from the densest subgraph of G (with potential edge pruning).

Definition 3.2 (Witnessing Set). We call a collection of graphs $\mathcal{H}$ a witnessing set of $\mathcal{G}_1$ if, for every $G \in \mathcal{G}_1$, there exists $H \in \mathcal{H}$ such that H is a witnessing subgraph of G.

By the definition of R, and as we previously argued, every graph $G \in \mathcal{G}_1$ must have at least one witnessing subgraph. Therefore at least one witnessing set $\mathcal{H}$ of $\mathcal{G}_1$ exists. We define the set of witnessing graphs in order to facilitate the development of scan tests. Below we formalize a test statistic which scans over all graphs in $\mathcal{H}$. Importantly, in order to match the lower bound given by Theorem 2.2, it is not sufficient to scan directly over the graphs from the set $\mathcal{G}_1$. This is because the graphs in $\mathcal{G}_1$ may contain non-essential edges which may introduce noise during testing. In contrast, the graphs from $\mathcal{H}$ trim away those non-essential edges and focus only on the essential parts of the graphs in $\mathcal{G}_1$. We now introduce our general testing procedure. Our test is based on a witnessing set $\mathcal{H}$. For $H \in \mathcal{H}$ we define the statistic $W_H$ of (3.1), a normalized average of the empirical edge correlations $\frac{1}{n}\sum_{l=1}^{n} X_{l,i} X_{l,j}$ over the edges $(i, j) \in E(H)$, where $X_l$ is the l-th sample and $X_{l,i}, X_{l,j}$ are the i-th and j-th components of $X_l$, respectively. Our test then scans over all possible $H \in \mathcal{H}$ and calculates the corresponding $W_H$. We define the test ψ of (3.2), which rejects the null hypothesis when $\max_{H \in \mathcal{H}} W_H$ exceeds a threshold, where κ is a large enough absolute constant.
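A minimal sketch of this scan procedure (ours, not the paper's code): we assume the natural normalization in which $W_H$ is the average empirical edge correlation over E(H) and use a generic threshold of order $\sqrt{\log|\mathcal{H}|/(n\,|E(H)|)}$; the paper's exact statistic (3.1) and threshold (3.2) may be normalized differently.

```python
import numpy as np
from itertools import combinations

def scan_test(X, witnessing_set, kappa=3.0):
    # For each H (given by its edge list), compute the average empirical edge
    # correlation W_H and compare it to a log|H|-scaled threshold.
    n = X.shape[0]
    n_graphs = len(witnessing_set)
    for edges in witnessing_set:
        m = len(edges)
        W = sum(np.mean(X[:, i] * X[:, j]) for (i, j) in edges) / m
        if W > kappa * np.sqrt(np.log(n_graphs) / (n * m)):
            return 1   # reject H0: structure detected
    return 0

# Usage: witnessing set of all triangles (3-cliques) on d = 8 vertices.
d = 8
H = [[(i, j), (i, k), (j, k)] for i, j, k in combinations(range(d), 3)]
rng = np.random.default_rng(0)
X = rng.choice([-1, 1], size=(2000, d))   # H0 data: independent Rademachers
print(scan_test(X, H))                    # typically 0 under H0
```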
The following theorem justifies the use of the test defined in (3.2). Here $m(\mathcal{H})$ denotes the minimal number of non-isolated vertices of a graph in $\mathcal{H}$, and $M(\mathcal{H}) = \log(|\mathcal{H}|)/m(\mathcal{H})$.

Theorem 3.3. Given any fixed α ∈ (0, 1), suppose that $\log(|\mathcal{H}|)/n = o(1)$ and $|\mathcal{H}| \geq 2/\alpha$. If θ exceeds κ times the corresponding rate, which depends on $M(\mathcal{H})$, R and n, for a large enough absolute constant κ, then when n is large enough the test ψ of (3.2) satisfies
$$\mathbb{P}_{0,n}(\psi = 1) + \max_{\Theta} \mathbb{P}_{\Theta,n}(\psi = 0) \leq \alpha.$$

The detailed proof of Theorem 3.3 is given in Section 6. If $s = O(d^{1/2-c})$ for some c > 0, then $\log(d/s^2)$ is also of order $\log d$. Therefore the rate given by Theorem 3.3 matches Corollary 2.5.

Examples

Example 3.5 (Empty graph versus non-empty graph). Consider testing the empty graph versus a non-empty graph, as defined in Section 1. If (3.3) holds for a large enough constant κ, then when n is large enough, we have
$$\mathbb{P}_{0,n}(\psi = 1) + \max_{\Theta} \mathbb{P}_{\Theta,n}(\psi = 0) \leq \alpha. \qquad (3.4)$$

Proof. In this example we have R = 1, and therefore $\mathcal{H} = \{$single-edge graphs$\}$ is a witnessing set of $\mathcal{G}_1$. We have $|\mathcal{H}| = d(d-1)/2$, $m(\mathcal{H}) = 2$ and $M(\mathcal{H}) = \log(|\mathcal{H}|)/m(\mathcal{H}) \leq \log d$. Therefore, by Theorem 3.3, if (3.3) holds for a large enough constant κ, then when n is large enough, (3.4) holds.

Example 3.6 (Clique Detection). For the clique detection problem defined in Section 1, if $s\log(ed/s)/n = o(1)$, $(d/s)^s \geq 2/\alpha$ and (3.5) holds for a large enough constant κ, then when n is large enough, (3.6) holds.

Proof. In this example we have $R = \lceil s/2 \rceil$, and $\mathcal{H} = \{s$-cliques$\}$ is a witnessing set of $\mathcal{G}_1$. We have $|\mathcal{H}| = \binom{d}{s}$, and therefore $(d/s)^s \leq |\mathcal{H}| \leq (ed/s)^s$. We have $m(\mathcal{H}) = s$ and $M(\mathcal{H}) = \log(|\mathcal{H}|)/m(\mathcal{H}) \leq \log(ed/s)$. Therefore, by Theorem 3.3, if (3.5) holds for a large enough constant κ, then when n is large enough, (3.6) holds.

Example 3.7 (Star Detection). For the star detection problem defined in Section 1, if (3.7) holds for a large enough constant κ, then when n is large enough, we have $\mathbb{P}_{0,n}(\psi = 1) + \max_{\Theta} \mathbb{P}_{\Theta,n}(\psi = 0) \leq \alpha$.

Proof. In this example we have R = 1, and $\mathcal{H} = \{(s-1)$-stars$\}$ is a witnessing set of $\mathcal{G}_1$. We have $|\mathcal{H}| = s\binom{d}{s}$, and therefore $s(d/s)^s \leq |\mathcal{H}| \leq s(ed/s)^s$. We have $m(\mathcal{H}) = s$. When $s = o(\sqrt{d})$ we have $s \leq (ed/s)^s$ and $M(\mathcal{H}) = \log(|\mathcal{H}|)/m(\mathcal{H}) \leq 2\log(ed/s)$. Therefore, by Theorem 3.3, if (3.7) holds for a large enough constant κ, then when n is large enough, (3.8) holds.

Example 3.8 (Community structure detection). Consider the community structure detection problem defined in Section 1. If the corresponding condition holds for a large enough constant κ, then when n is large enough, the analogous guarantee holds.

Proof. If l ≥ k, we have $R = \lceil l/2 \rceil$, and we can choose $\mathcal{H} = \{l$-cliques$\}$ as a witnessing set of $\mathcal{G}_1$; if l < k, we have $R = \lceil k/2 \rceil$, and $\mathcal{H} = \{k$-cliques$\}$ is a witnessing set of $\mathcal{G}_1$. The rest of the proof is identical to the clique detection problem, and we omit the details.
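The test of Example 3.5 is also a linear test on the sample covariance matrix and runs in polynomial time; here is a minimal sketch (ours), with κ an arbitrary illustrative constant:

```python
import numpy as np

# The relaxed "empty vs non-empty" test thresholds the d(d-1)/2 off-diagonal
# entries of the sample covariance matrix M_hat, each of which is a linear
# functional of M_hat. The sqrt(log d / n) scaling reflects the signal
# requirement discussed in the next section.
def single_edge_test(X, kappa=3.0):
    n, d = X.shape
    M_hat = X.T @ X / n
    off_diag = M_hat[~np.eye(d, dtype=bool)]
    return int(np.max(off_diag) > kappa * np.sqrt(np.log(d) / n))

rng = np.random.default_rng(1)
X0 = rng.choice([-1, 1], size=(1000, 50))   # null: independent coordinates
print(single_edge_test(X0))                 # typically 0 under H0
```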
Computational Lower Bound

Our results in Section 3 suggest that in order to match the information-theoretic lower bound, one should first determine the densest subgraphs of the graphs in $\mathcal{G}_1$, and then scan over all possible positions of such subgraphs. However, such tests may not be computationally efficient: for the structure detection problem (1.5), if the densest part of $G^*$ contains k vertices, then our test requires scanning over at least $\binom{d}{k}$ different positions, and cannot be done in polynomial time if $k = \Omega(s^\delta)$ for some constant δ > 0. On the other hand, one can always relax the testing problem to the "empty graph versus non-empty graph" problem, which, according to Section 3.1, can be tested by scanning over single edges in polynomial time. However, this requires signal strength $\theta > \kappa\sqrt{\log d / n}$ for some constant κ to distinguish the null from the relaxed alternative, which does not match the information-theoretic lower bound in Theorem 2.2 for the original detection problem with large maximum arboricity R.

In this section, we give a detailed analysis of such computational–statistical tradeoffs, and show that the signal strength requirement $\theta > \kappa\sqrt{\log d / n}$ cannot be improved, up to a logarithmic factor, by polynomial time linear tests. Let $\hat{M} = \frac{1}{n}\sum_{i=1}^n X_i X_i^T$ be the sample covariance matrix calculated from n samples of the Ising model. We define polynomial time linear tests on $\hat{M}$ as in (4.1): tests that are measurable functions of at most $(nd)^p$ indicators of linear constraints on $\hat{M}$, for some constant p. Note that the test we introduced in (3.2) in Section 3 is of the form (4.1): since for each $H \in \mathcal{H}$ the statistic $W_H$ is a linear function of $\hat{M}$, the scan test can be written in the form (4.1). However, the test (3.2) may not be a polynomial time linear test according to our definition, since the number of graphs in $\mathcal{H}$ may not be bounded by $(nd)^p$ for a constant p.

Main Computational Lower Bound Result

In this section we give our main result on the computational lower bound for structure testing problems in Ising models. Our result is based on a sparse PCA conjecture. Denote by $1_{i_1,\ldots,i_s} = e_{i_1} + \cdots + e_{i_s} \in \mathbb{R}^d$ the vector whose $i_1, \ldots, i_s$-th entries are 1 and whose other entries are 0. Let $S_\sigma = \{I + \sigma 1_{i_1,\ldots,i_s} 1_{i_1,\ldots,i_s}^T : 1 \leq i_1 < \cdots < i_s \leq d\}$ be the set of covariance matrices of the Gaussian spiked model. In sparse PCA, we consider the following hypothesis testing problem for n i.i.d. samples $Z_1, \ldots, Z_n \in \mathbb{R}^d$:
$$H_0^{\text{PCA}}: \Sigma = I \quad \text{versus} \quad H_1^{\text{PCA}}: \Sigma \in S_\sigma. \qquad (4.2)$$
We denote by $\mathbb{P}_{I,n}$ and $\mathbb{P}_{\Sigma,n}$ the probability measures under $H_0^{\text{PCA}}$ and $H_1^{\text{PCA}}$, respectively.

Conjecture 4.2. If σ is bounded by η times the corresponding critical level for some small enough constant η, then for any polynomial time test ψ we have
$$\liminf_{n\to\infty} \Big[\mathbb{P}_{I,n}(\psi = 1) + \max_{\Sigma \in S_\sigma} \mathbb{P}_{\Sigma,n}(\psi = 0)\Big] = 1.$$

Conjecture 4.2 is derived by Gao et al. (2014) under the widely believed planted clique conjecture and additional assumptions which essentially require that $2n \leq d \leq n^a$ for some constant a > 1 and $n[\log(n)]^5 \leq Cs^4$ for some small enough constant C > 0. It is also studied in Berthet and Rigollet (2013a) and Brennan et al. (2018). We now give our main theorem on the computational lower bound for hypothesis testing problems of the form (1.5).

Theorem 4.3. If $\theta \leq \eta[n^{-(1/2+\delta)} \wedge s^{-(1+\delta)}]$ for some constant δ > 0 and some small enough constant η, then for any polynomial time linear test ψ as in (4.1) and any $G^*$ with s non-isolated vertices, we have
$$\liminf_{n\to\infty} \Big[\mathbb{P}_{0,n}(\psi = 1) + \max_{\Theta} \mathbb{P}_{\Theta,n}(\psi = 0)\Big] = 1.$$

Proof. See Section A.3 for a detailed proof.

Remark 4.4. Theorem 4.3 shows that no polynomial time linear scan test on the sample covariance matrix $\hat{M}$ can distinguish the null from the alternative hypothesis when $\theta \leq \eta[n^{-(1/2+\delta)} \wedge s^{-(1+\delta)}]$ for a small enough constant η. Since the sample covariance matrix $\hat{M}$ is a sufficient statistic for the Ising model, any test $\psi(X_1, \ldots, X_n)$ on the sample vectors $X_1, \ldots, X_n$ can be formulated as a function $\tilde\psi(\hat{M})$ of the sample covariance matrix. However, $\tilde\psi$ may not be linear, and furthermore the computational complexities of calculating ψ and $\tilde\psi$ may differ; hence the result in Theorem 4.3 does not prove the nonexistence of computationally efficient tests $\psi(X_1, \ldots, X_n)$. To derive bounds for other types of test functions, in Section B of the Appendix we provide a different approach and show a computational limit under the oracle computational model. We leave more general results to future work. While the detailed proof of Theorem 4.3 is given in Section A.3, in the next section we give some important insights into the connection between Gaussian and Ising models.

Connection Between Gaussian and Ising Cliques

In this section, we explain how we relate Conjecture 4.2 to the Ising model.
The main idea is that based on the Gaussian random vectors from the sparse PCA problem, we propose a polynomial time reduction algorithm that constructs a d × d matrix which cannot be distinguished from the sample covariance matrix of an Ising model with a parameter matrix Θ by polynomial time linear tests. Importantly, this reduction only needs to be done for clique graphs because any G * can always be embedded within an s-clique. Furthermore, in the sparse PCA problem each Σ ∈ S σ corresponds to an s-clique. More specifically, for any index set I ⊆ {1, . . . , d} of size s representing the position of a clique, we consider i.i.d. samples X 1 , . . . , X n generated from the Ising model with parameter matrix Θ = θ · [1(i, j ∈ I, i = j)] d×d , and Z 1 , . . . , Z n generated from the multivariate Gaussian distribution with mean 0 and covariance matrix Σ = I + σ1 I 1 T I . Before we introduce the reduction scheme, it is necessary to determine the parameter σ for any fixed θ. Our choice of σ is based on a comparison between the moments of Ising and signs of Gaussian vectors. Let Y i = sign(Z i ), i = 1, . . . , n. For r = 1, . . . , s and distinct i 1 , . . . , i r ∈ I, we define The following lemma determines σ for all small enough θ. From now on we study the sparse PCA problem with parameter σ chosen such that condition (4.3) holds. Based on this σ, we construct Obviously, given Z 1 , . . . , Z n , B can be calculated in polynomial time. We now proceed to show that no polynomial time linear test can distinguish B from which is the sample covariance matrix of the Ising model. Therefore if a polynomial time linear test can test for clique presence in M, one will be able to use this test to test for clique presence in the sparse PCA problem. We denote by P I,n the joint probability measure of with parameter θ and the correspondingly chosen σ, and denote by E I,n the expectation under P I,n . We remind the reader that we consider linear scan tests of the form (4.1). Since the intersection of linear subspaces is a convex polytope, for each linear scan test ψ on the sample covariance matrix, there exists a convex polytope S ⊆ R d×d such that ψ = f (1{ M ∈ S}). Define where C 1 is a constant that only depends on p, and C 2 is an absolute constant. Proof. See Section A.3 for a detailed proof. Proof of Theorem 2.2 In this section we give the proof of Theorem 2.2. Note that by the definition of S * , we only need to consider the simple zero-field ferromagnetic Ising model where all non-zero entries in Θ are the same. Let G = (V , E) be the underlying graph and θ = θ ij , (i, j) ∈ E be the parameter. Let t = tanh(θ) and E 0 denote the expectation under the probability measure that X 1 , . . . , X d are i.i.d. Rademacher variables. The following lemma gives an equivalent form of the probability mass function in simple zero-field ferromagnetic Ising models. Lemma 5.1. For a simple zero-field Ising model with underlying graph G = (V , E) and parameter θ, we have where t = tanh(θ). We now apply Le Cam's method. Let P Θ,n be the joint probability mass function of n i.i.d. samples of Ising model with parameter Θ, E Θ,n denote the expectation under P Θ,n , and P = 1 |S * | Θ∈S * P Θ,n be the averaged probability measure among Θ ∈ S * under the alternative. Then the result of Le Cam's method is given in the following lemma. Lemma 5.2. 
For the risk γ(S * ) defined in (2.3) we have γ(S * ) ≥ 1 − 1 2 D χ 2 (P, P 0,n ), where D χ 2 (P, P 0,n ) is the χ 2 -divergence between P and P 0,n defined as Lemma 5.2 is a direct result of the Le Cam's method. By Lemma 5.2, lim inf n→∞ γ(S * ) = 1 is implied by lim sup n→∞ D χ 2 (P, P 0,n ) = 0. To prove this, we use a method similar to the high-temperature expansion of Ising model (Fisher, 1967;Guttman, 1989). By Lemma 5.1 and the fact that the n samples are independent, for Θ, Θ ∈ S * with corresponding graphs G, G , we can rewrite the term E 0,n P Θ,n P Θ ,n /P 2 0,n as follows: where t = tanh(θ). Each expectation on the right-hand side above is a polynomial of t. For any G, G ∈ G * , we define (1 + tX i X j ) . Plugging the definitions above into (5.3), we obtain E 0,n P Θ,n P 0,n P Θ ,n P 0,n We now analyze the coefficients of each polynomial in (5.4). Let We also define For f G (t), note that after expanding (i,j)∈E(G) (1 + tX i X j ), the terms with non-zero expectations must have the form t k X 2 i 1 · · · X 2 i k , where i 1 , . . . , i k ∈ V . Therefore by Lemma 5.5, the coefficient of t k is equal to the number of k-edge subgraphs of G where every vertex has an even degree. Similar arguments also applies to f G (t) and f G,G (t). This observation motivates us to introduce the definitions of multigraphs and Eulerian graphs. Definition 5.3 (Multigraph). A multigraph is a graph which is permitted to have multiple edges connecting two vertices. We denote G = (V, E), where V is the vertex set, and E is the edge multiset. For a multigraph G with d vertices, we define its adjacency matrix to be A = (A ij ) d×d , where A ij = A ji = "the number of edges connecting vertices i and j". A symmetric matrix A ∈ R d×d with nonnegative integer off-diagonal entries and zero diagonal entries naturally represents a multigraph with vertex set V . Given two multigraphs G and G with adjacency matrices A and A , we define G ⊕ G to be the multigraph defined by A + A . Definition 5.4 (Eulerian graph). An Eulerian circuit on a multigraph is a closed walk that uses each edge exactly once. We say that a multigraph is Eulerian if every connected component has an Eulerian circuit. Note that in graph theory, the term 'Eulerian graph' has different meanings. Sometimes Eulerian graph is referred to as a graph that has an Eulerian circuit. This is different from our definition, because in this paper we do not require an Eulerian graph to be connected. The following famous lemma on Eulerian graph is first given by Euler (1741) and then completely proved by Hierholzer and Wiener (1873). In words E(k, G) is the set of k-edge Eulerian subgraphs of G. By Lemma 5.5 and our previous discussion, we have a k = |E(k, G)|, b k = |E(k, G )| and c k = |E(k, G ⊕ G )|, and therefore Figure 3 gives an example of how to calculate |E(k, G)| for a given multigraph G. We now proceed to analyze u k . Apparently, u 0 = u 1 = 0. For k ≥ 2, by the definition of u k we can see that, if a k-edge Eulerian subgraph of G ⊕ G can be split into two graphs G 1 and G 2 such that G 1 and G 2 are Eulerian subgraphs of G and G respectively, then it is also counted in the sum k 1 +k 2 =k a k 1 b k 2 and therefore is not counted in u k . Figure 4 gives examples of Eulerian subgraphs that are counted and not counted in u k . Using this type of argument, the following two lemmas together calculate and bound u k for k ≥ 2. Lemma 5.6. We have where the function q k (·, ·) is defined as follows: Lemma 5.7. 
We have Moreover, for any multigraph G and vertex set V ⊆ V , we have The upper bound of |E(k, G)| in Lemma 5.7 and the assumption that θ ≤ [8(Λ ∨ Γ)] −1 together show that f G (t), f G (t) and f G,G (t), as power series, all converge. Moreover, by the definition of B, the upper bound for q k (·, ·) in Lemma 5.7 and the assumption that By Lemmas 5.6 and 5.7 and the fact that t = tanh(θ) ≤ θ, we have Figure 4: Illustration of graphs counted and not counted in u k . The gray dot-dashed squares highlight the non-isolated vertices of G and G . The solid and dashed lines are edges in G and G respectively. We use purple vertices to represent the common non-isolated vertices of G and G . The blue vertices are non-isolated in G but isolated in G , and the red vertices are non-isolated in G but isolated in G. The green edges in (a) give an example of a 6-edge Eulerian subgraph of G ⊕ G counted in u 6 , while the orange edges in (b) form a 6-edge Eulerian subgraph of G ⊕ G that is not counted in u 6 . Lemma 5.8. If G * is incoherent, then the following inequality holds. Proof of Theorem 3.3 In this section we give the proof of Theorem 3.3. The key part of our proof is to derive concentration inequalities for W H . Following the definition in Vershynin (2010), we define the ψ 1 -norm of the random variable Z as follows. If a random variable Z has finite ψ 1 -norm, we say Z is a sub-exponential random variable. The following lemma gives bounds for the ψ 1 -norm of W H . Lemma 6.1. Let X ∈ {±1} d be a random vector generated from the high temperature ferromagnetic Ising model with parameter matrix Θ. For any graph H, define If Θ F ≤ 1/2, then we have W H ψ 1 ≤ C|E(H)| −1/2 , where C > 0 is an absolute constant. This completes the proof. Discussion In this paper we studied structure detection problems in zero-field ferromagnetic Ising models. Our upper and lower bounds demonstrated that graph arboricity is a key concept which drives the testability of structure detection. We furthermore argued that under a sparse PCA conjecture no polynomial time linear tests on the covariance matrix can test the problem unless the signal strength is of the order of 1 √ n , which is statistically sub-optimal for graphs with high arboricity. There are several important questions which we leave for future work. First, our upper bound results are derived under the assumption that Θ F ≤ 1 2 . This assumption is needed to ensure that the terms (3.1) concentrate around their mean value. This may not be a necessary condition, and we anticipate that the tests we develop might work beyond this regime. Second, an interesting question that is left open is whether one can develop upper and lower bounds for problems of the type (1.5) in the dense regime when s √ d. We believe that this regime may require completely different tests than the ones we developed in this paper. Finally, our computational lower bound, which relies on the sparse PCA conjecture, works only for linear tests on the covariance matrix. As we mentioned earlier, the computational hardness of sparse PCA conjecture has been established under the widely believed planted clique conjecture (Gao et al., 2014;Berthet and Rigollet, 2013a;Brennan et al., 2018). It will be interesting to extend our results beyond linear tests on the covaraince matrix. We currently do not know of a way to prove such a result based on the planted clique conjecture. 
However, our results under the oracle computational model strongly suggest that indeed it is unlikely that polynomial time tests for detection exist when the signal strength is of smaller order than 1 √ n . A.1 Lower Bound Proofs We first introduce two important lemmas. Lemma A.1. For a multigraph G = (V , E), define the following two classes of Eulerian spanning subgraphs and connected Eulerian subgraphs of G with k edges. Let A be the adjacency matrix of G. Then for k ≥ 2, we have For the first inequality, note that we have Summing up all possible starting vertices, we get |E c (k, G)| ≤ |{legnth-k closed walks in G}| ≤ Tr A k ≤ A k F . This proves the first inequality. For the second inequality, we use induction. First for |E(2, G)|, we have |E(2, G)| = |E c (2, G)| ≤ A 2 F ≤ 2 2 A 2 F . Suppose that for l ≤ k we have |E(l, G)| ≤ 2 l A l F . Then for |E(k + 1, G)|, by the fact that E(1, G) = E c (1, G) = ∅, we have Plugging in the inequalities for |E(l, G)|, we get F . Therefore by induction we get the second inequality. Lemma A.2. Let G be a multigraph with vertex set V = {1, . . . , d} and adjacency matrix A. Let V ⊆ V be a vertex set. For k ≥ 2, we define p k (G, V ) = G ∈ E c (k, G) : G contains at least two distinct vertices in V , q k (G, V ) = G ∈ E(k, G) : ∃i, j ∈ V, i, j are contained in one connected component of G . Then we have Proof. We first prove (A.2). By definition, we have |{length-k closed walks in G starting at i and traversing j}|. Note that each vertex can have at most A 1 neighbors. Therefore we can bound the number of length-k Eulerian circuits starting at vertex i and containing vertex j by counting the possible vertices on the walk: • The number of possible positions of vertex j in V is k − 1. • The number of choices of the rest k − 2 vertices is at most A k−2 1 . This completes the proof of (A.2). Now we prove (A.3). Suppose that G is a subgraph of G with k edges such that one of its connected components contains at least two distinct vertices in V . Let l be the number of edges of this connected component. Then by definition, clearly the rest connected components form a graph in E(k − l, G). Therefore we have By (A.2) and Lemma A.1, we have where the last inequality holds because for l ≥ 2 we have l − 1 ≤ 2 l−2 . Moreover, for V = ∅, by Lemma A.1, clearly we have q k (G, V ) ≤ |E(k, G)| ≤ 2 k A k F ≤ 2 k · |V | · A k F . When V = ∅, by definition we have q k (G, V ) = 2 k · |V | · A k F = 0. This completes the proof. Proof of Lemma 5.1. For any i, j ∈ V , we have Note that cosh(x) is an even function, and X i X j is binary. Therefore we have cosh(θX i X j ) ≡ cosh(θ). Similarly, tanh(x) is an odd function, by checking the function values at X i X j = 1 and X i X j = −1 we obtain tanh(θX i X j ) = tanh(θ)X i X j . Therefore we have where c = cosh(θ) and t = tanh(θ). Plugging (A.4) into the definition of P Θ (X) proves (5.1). Applying Cauchy-Schwartz inequality to the right-hand side above gives TV(P, P 0,n ) ≤ 1 2 E 0,n P(X) P 0,n (X) It then suffices to show that E 0,n P 2 (X) which follows by direct calculation. Proof of Lemma 5.6. Since there cannot be multiple edges in G connecting the same two vertices, the coefficient of t 2 in f G (t) is 0. For the same reason the coefficient of t 2 in f G (t) is also 0. In f G,G (t), the only possible way to form a two-edge Eulerian circuit is to pick one edge from E(G) and to pick another edge from E(G ) connecting to the same two vertices. Therefore u 2 = |E(G) ∩ E(G )|. 
For u 3 , note that 3-edge Eulerian subgraphs must be triangles. If a triangle only uses edges in E(G), then it is counted in the coefficient of t 3 in f G (t). Similarly, if a triangle only uses edges in G , it is also counted in the coefficient of t 3 in f G (t). Therefore u 3 is the number of triangles that use at least one edge in E(G) and another edge in E(G ), which is defined as ∆ G.G . We denote by E(G) and E(G ) the sets of Eulerian subgraphs of G and G respectively. For k ≥ 4, by (5.5), the coefficient of We now prove that, for G ∈ E(k, G ⊕ G ), if each connected component contains at most one vertex in V (G) ∩ V (G ), then there exist G 1 ∈ E(G) and G 2 ∈ E(G ) such that G = G 1 ⊕ G 2 . To prove this statement, take a fixed connected component of G. Suppose first that the connected component does not contain any vertices in V (G) ∩ V (G ). Then it follows that all of its edges must be contained either in E(G) or E(G ). Next consider the case when the connected component contains only one vertex v ∈ V (G) ∩ V (G ). Since this connected component must be a connected Eulerian graph, we can consider the Eulerian circuit starting and ending at v. If we start walking along the circuit on an edge in E(G), then since v is the only vertex contained in the intersection V (G) ∩ V (G ), we cannot reach vertices in E(G ) until we return to v. Upon returning to v, we have completed a closed walk using purely edges in G. We can continue this process to obtain closed walks on G and G starting and ending at v. Concatenating all the closed walks on G gives G 1 . Similarly, concatenating all the closed walks on G gives G 2 . We have proved that Therefore by the definition of q k (·, ·) we have u Proof of Lemma 5.7. The bounds for E(k, G) and q k (G, V ) are included in Lemma A.1 and Lemma A.2. We now prove the bound for ∆ G,G . We remind the reader that for a graph G and a vertex set V , G V denotes the graph obtained by restricting G on the vertex set V . Note that if a triangle has one edge in E(G) and two edges in E(G ), then the two vertices of the edge in E(G) must be in V (G) ∩ V (G ). Therefore, an upper bound of the number of triangles that have one edge in E(G) and two edges in E(G ) is given by the following procedure: • Pick an edge e from E[G V (G)∩V (G ) ]. • Pick a common neighbour of the two vertices of edge e. Since all graphs in G * have arboricity R, by the definition of arboricity we have This completes the proof. Proof of Lemma 5.8. Let Then we have Consider drawing Θ uniformly from S * , and let P Θ ∼U (S * ) be the probability measure. By assumption, the random variables Expanding the expectation and applying the inequality 1 + x ≤ exp(x) gives Rearranging terms, we get This completes the proof. Proof of Corollary 2.5. Let G * be the set of graphs isomorphic to G * . Then clearly, if G is uniformly sampling from G * , then is just a permutation of s 1s and d − s 0s. Therefore by Theorem 2.11 in Joag-Dev and Proschan (1983), the incoherence condition is satisfied. For any G ∈ G * and v ∈ V (G), we have And therefore N (G * ) = s 2 /d. Moreover, by definition we have Therefore by Theorem 2.2, if then we have lim inf n→∞ γ(S * ) = 1. A.2 Upper Bound Proofs The following lemma given by Bhattacharya and Mukherjee (2015) is helpful for bounding the ψ 1 -norm of W H . Lemma A.3. Let J be a d × d symmetric matrix with non-negative off-diagonal entries and zeros on the diagonal. If J 2 ≤ 1, then we have where λ 1 (J), . . . , λ d (J) are the eigenvalues of J. Proof of Lemma 6.1. 
By (5.16) in Vershynin (2010) as an equivalent definition of ψ 1 -norm, it suffices to prove To prove (A.5), first note that we have A H 2 F = 2|E(H)|. By definition of the Ising model, we have where the second inequality holds because for |x| ≤ 3/4 we have − log(1 − x) = k≥1 x k /k ≤ x + 2x 2 and by assumption we have J 2 ≤ J F ≤ 3/4. Since Tr(J) = Tr(A H )/(2 A H F ) = 0, we have Moreover, since θ ij ≥ 0 for all i, j = 1, . . Proof. Note that changing signs of all entries in X 1 and Z 1 does not change the value of Ising model probability mass function or the Gaussian probability density function. Therefore, when r is odd, by symmetry it is obvious that α r (θ) = β r (σ) = 0. Since we always focus on the first samples X 1 and Z 1 , in the rest of the proof we omit the subscript "1". When r is even, for α 2 , by second Griffith inequality we have α 2 ≥ t and α r ≥ α r/2 2 ≥ t r/2 . Moreover, let G be the underlying clique graph, and E = {(i, j) : i, j ∈ I} be the edge set of G. Then by (5.1), we have For any even r, let Then similar to our discussion in the proof of Theorem 2.2, by expanding the product, we see that a r,k counts the number of terms of the form where i 1 , . . . , i k−r/2 ∈ I. Therefore by Lemma 5.5, a r,k equals the number of subgraphs of G satisfying the following properties: (i) After removing all connected components that do not contain any of i 1 , i 2 , . . . , i r , the remaining edges can be organized to represent r/2 paths, each connecting a distinct pair of vertices among i 1 , i 2 , . . . , i r . (ii) The connected components that do not contain any of i 1 , i 2 , . . . , i r form an Eulerian subgraph. (iii) Total number of edges is k. • Pick j 2 to be the smallest index such that there exists a path connecting j 1 and j 2 . • Pick j 4 to be the smallest index such that there exists a path connecting j 3 and j 4 . . . . • Pick j r to be the last index that have not been chosen. For any graph G satisfying the descriptions (i)-(iii) and the corresponding j 1 , . . . , j r chosen above, adding the edges (j 2 , j 3 ), (j 4 , j 5 ), . . . , (j r , j 1 ) results in an Eulerian (multi)graph, and the resulting (multi)graph has only one connected component that contains j 1 , . . . , j r . This connected component represents a closed walk starting and ending at vertex j 1 . Therefore, each graph counted in a r,k described above can be characterized by • r/2 index pairs (j 2 , j 3 ), (j 4 , j 5 ), . . . , (j r , j 1 ) with j 1 < j 3 < j 5 < · · · < j r−1 , • a closed walk C starting and ending at j 1 , and • an Eulerian subgraph G that does not contain any of j 1 , . . . , j r . It is obvious that the edge set E added = {(j 2 , j 3 ), (j 4 , j 5 ), . . . , (j r , j 1 )} and C uniquely determines the connected components of G that contains any of i 1 , . . . , i r . Therefore, G is uniquely determined by the 3-tuple [E added , C, G ], and the number of graphs G counted in a r,k is bounded by the number of possible 3-tuples [E added , C, G ], which can be counted as follows. • There are (r − 1)!! different ways to split i 1 , . . . , i r into pairs, which gives an upper bound of |E added |. • If the length of the closed walk C is l, then the number of possible positions of j 2 , . . . , j r is upper bounded by l−1 r−1 . • The number of choices of the rest l − r vertices is upper bounded by s l−r . • The number of possible choices of G is at most |E(k +r/2−l, G)|, where E(k +r/2−l, G) defined in Lemma A.1 denotes the set of (k + r/2 − l)-edge Eulerian subgraphs of G. 
For the left-hand-side above, we have This completes the proof. for some constant C 1 and C 2 . The bounds for α r (θ) follows directly by Lemma A.4. For β r (σ), by Lemma A.4 we have Note that for η < (16π) −1 , we have since by definition r ≤ s. Therefore This completes the proof. Proof of Lemma 4.6. For any fixed I, let M = √ n M, B = √ n B. By definition, P m is invariant to scaling, meaning that for any S ∈ P m and any constant c, cS is also contained in P m . Therefore we have We proceed to give an upper bound of sup S∈P m |P I,n (B ∈ S) − P I,n (M ∈ S)|. Since we only need to focus on a fixed I, to simplify notation, in the rest of the proof we omit the subscript and denote P = P I,n , E = E I,n . Since B and M are symmetric matrices, we only need to consider the strict upper triangular part of the matrices. Note that β 2 = α 2 , so EB = EM. We now calculate the covariances between entries in B and M. For B, we give the following calculation: • If i 1 and j 1 are both in the clique, then -Var(B i 1 j 1 ) = 1 − β 2 2 . -If i 2 and j 2 are both in the clique and |{i 1 , j 1 }∩{i 2 , j 2 }| = 0, then Cov(B i 1 j 1 , B i 2 j 2 ) = β 4 − β 2 2 . -If i 2 and j 2 are both in the clique and |{i 1 , j 1 }∩{i 2 , j 2 }| = 1, then Cov(B i 1 j 1 , B i 2 j 2 ) = β 2 − β 2 2 . -If i 2 and j 2 are not both in the clique, then Cov(B i 1 j 1 , B i 2 j 2 ) = 0. • If i 1 is in the clique and j 1 is not in the clique, then -Var(B i 1 j 1 ) = 1. -If i 2 and j 2 are both in the clique, then Cov(B i 1 j 1 , B i 2 j 2 ) = 0. -If i 2 is in the clique, j 2 is not in the clique, and j 2 = j 1 , then Cov(B i 1 j 1 , B i 2 j 2 ) = β 2 . -If i 2 is in the clique, j 2 is not in the clique, and j 2 = j 1 , then Cov(B i 1 j 1 , B i 2 j 2 ) = 0. -If neither i 2 nor j 2 is in the clique, then Cov(B i 1 j 1 , B i 2 j 2 ) = 0. • If neither of i 1 , j 1 is in the clique, then -Var(B i 1 j 1 ) = 1. Similarly, the covariances between entries in M follows the exact same pattern as B, except all β 2 amd β 4 's are replaced by α 2 and α 4 . Let Θ 1 , Θ 2 ∈ R [d(d−1)/2]×[d(d−1)/2] be the covariance matrices of the strict upper triangular part of B and M respectively, and let B * and M * be the symmetric Gaussian matrices whose strict upper triangular part is generated from Gaussian distributions whose means are the same as B's (or M's, since β 2 = α 2 ) and covariance matrices are Θ 1 and Θ 2 respectively. Then by Proposition 3 where C 1 is a constant that only depends on p. We now bound TV(P B * , P M * ). Since the B * and M * have the same means, by Pinsker's inequality, we have Let Θ 1 = I − Θ 1 , Θ 2 = I − Θ 2 . We first prove that Θ 1 2 , Θ 2 2 < 1. To prove this bound, we go over each rows of Θ 1 and Θ 2 , and use the Gershgorin disc theorem. For Θ 1 , by previous calculation, we have • If i 1 and j 1 are both in the clique, then Θ 1,(i 1 ,j 1 ),(i 1 ,j 1 ) = β 2 2 . Moreover, in the (i 1 , j 1 )-th row of Θ 1 , there are at most s 2 off-diagonal entries of β 2 2 −β 4 and at most 2s off-diagonal entries of β 2 2 − β 2 . • If neither of i 1 , j 1 is in the clique, then all entries in the (i 1 , j 1 )-th row of Θ 1 are 0. Therefore, we have where C 1 is an absolute constant. Hence Tr(Θ −1 2 Θ 1 − I) = Tr Θ k 2 (Θ 1 − Θ 2 ) = O(s −2δ ). Let λ 1 , . . . , λ d(d−1)/2 be the eigenvalues of Θ 1 . 
Then, by expanding the logarithm terms, we obtain the corresponding expansion. Since the entries where $i_1, j_1$ are both in the clique and the entries where $i_1, j_1$ are not both in the clique lie in different connected components, and since $\Theta_1, \Theta_2$ agree exactly on any path consisting of vertices that are not all in the clique, it suffices to bound the weighted sum of paths that only use vertices in the clique. Therefore, similar to the previous proof, we have a bound in which $C_3$ is an absolute constant. Then, by the assumption that $\theta \le \eta s^{-(1+\delta)}$ for some small enough positive constant $\eta$, the required estimate follows. Summing (A.11) and (A.12) completes the proof.

Proof of Theorem 4.3. We remind the reader of the following notation:
• $P_{0,n}$ is the probability measure under the hypothesis that $X_1, \ldots, X_n$ are independent Rademacher random vectors.
• $P_{\Theta,n}$ is the probability measure under the hypothesis that $X_1, \ldots, X_n$ are independent samples generated from the Ising model with parameter matrix $\Theta$.
• $P_{I,n}$ is the probability measure under the hypothesis that $Z_1, \ldots, Z_n$ are independent standard Gaussian vectors.
• $P_{\Sigma,n}$ is the probability measure under the hypothesis that $Z_1, \ldots, Z_n$ are independent Gaussian vectors with mean zero and covariance matrix $\Sigma$.
• $\bar{P}_{I,n}$ is the joint probability measure under the assumption that $X_1, \ldots, X_n$ are independent samples generated from the Ising model with parameter matrix $\theta\,[\mathbb{1}(i, j \in I,\ i \neq j)]_{d \times d}$, and $Z_1, \ldots, Z_n$ are independent Gaussian vectors with mean zero and covariance matrix $I + \sigma \mathbf{1}_I \mathbf{1}_I^{\mathsf{T}}$, where $\sigma = O(\theta)$ is chosen such that (4.3) holds.

B Computational Lower Bound Under an Oracle Computational Model

In this section we propose an oracle computational model, based on which we derive another computational lower bound result for detection problems in the Ising model. The main idea of the oracle computational model is to use the number of rounds of interaction between the data and an algorithm to represent the algorithmic complexity of that algorithm. Specifically, let $X$ be the random vector of interest and $\mathcal{X}$ be the domain of $X$. We define the set $\mathcal{Q}^*$ of allowable queries accordingly, and we call every subset $\mathcal{Q} \subseteq \mathcal{Q}^*$ a query space. Next we define the statistical query oracle.

Definition B.1 (statistical query oracle). Let $n$ be the sample size of a testing problem. A statistical query oracle $r_n$ on a query space $\mathcal{Q} \subseteq \mathcal{Q}^*$ is a random mapping from $\mathcal{Q}$ to $\mathbb{R}$. Given a query $q \in \mathcal{Q}^*$, the oracle $r_n$ returns an output $Z_q \in \mathbb{R}$ satisfying a tail bound for any tail probability $\xi \in [0, 1)$. Here we call $\eta(\mathcal{Q}) > 0$ the capacity measure of $\mathcal{Q}$. When $\mathcal{Q}$ is finite, we define $\eta(\mathcal{Q}) = \log(|\mathcal{Q}|)$. Given a query space $\mathcal{Q} \subseteq \mathcal{Q}^*$, we define $\mathcal{R}_n(\mathcal{Q})$ to be the set of all statistical query oracles on $\mathcal{Q}$ with sample size $n$.

We now give the definition of the oracle computational model. A test algorithm is specified by a tuple $\Psi(\mathcal{Q}_\Psi, T_\Psi, q_{\mathrm{init}}, \{\delta_t\}_{t=1}^{T_\Psi}, \psi)$, where:
• $\mathcal{Q}_\Psi$ is a subset of $\mathcal{Q}^*$ that contains all queries the test will potentially use.
• $T_\Psi$ is the maximum number of rounds in which the model queries an oracle.
• $q_{\mathrm{init}} \in \mathcal{Q}$ is the initial query.
• $\delta_t : (\mathcal{Q} \times \mathbb{R})^{t-1} \to \mathcal{Q} \cup \{\mathrm{HALT}\}$ is the transition function at the $t$-th round. If $\delta_t$ returns HALT, the model stops querying the oracle.
• $\psi : (\mathcal{Q} \times \mathbb{R})^{T_\Psi} \to \{0, 1\}$ is the test function that takes the results of at most $T_\Psi$ queries as input and returns the test result as a binary output.

Each instance of $\Psi(\mathcal{Q}_\Psi, T_\Psi, q_{\mathrm{init}}, \{\delta_t\}_{t=1}^{T_\Psi}, \psi)$ refers to a test algorithm. The parameter $T_\Psi$ is the query complexity of algorithm $\Psi$. We define $\mathcal{A}(T) = \{\Psi : T_\Psi \le T\}$ to be the set of all algorithms with query complexity at most $T$.
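The query protocol of this definition can be pictured as a simple driver loop. The sketch below is illustrative only; the callables `oracle`, `transitions`, and `test_fn` are hypothetical stand-ins for $r_n$, $\{\delta_t\}$, and $\psi$:

```python
def run_oracle_test(oracle, q_init, transitions, test_fn, T):
    """Drive one algorithm Psi against a statistical query oracle r_n.

    oracle(q)        -> a noisy answer Z_q (Definition B.1)
    transitions[t]   -> next query given the (query, answer) history, or None for HALT
    test_fn(history) -> binary test output (Definition B.2's psi)
    """
    history, q = [], q_init
    for t in range(T):                       # at most T rounds of interaction
        z = oracle(q)                        # one statistical query
        history.append((q, z))
        q = transitions[t](history) if t < len(transitions) else None
        if q is None:                        # delta_t returned HALT
            break
    return test_fn(history)
```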
Under the oracle computational model, the risk of the detection problem (1.5) with maximum query complexity $T$ is defined as in (B.3). Note that in (B.3), the supremum over $r \in \mathcal{R}_n(\mathcal{Q}_\Psi)$ means that we consider the worst oracle. If $\liminf_{n \to \infty} \gamma_{\mathrm{oracle}}\{S[\mathcal{G}_1(G^*), \theta]\} = 1$, then when $n$ is large enough, for any algorithm that queries at most $T$ rounds there exists an oracle $r_n$ such that the algorithm cannot distinguish the null and alternative hypotheses. We now give our main result: under the stated signal-strength condition, in which $\kappa$ is a sufficiently small positive constant, we have $\liminf_{n \to \infty} \gamma_{\mathrm{oracle}}\{S[\mathcal{G}_1(G^*), \theta]\} = 1$.

Proof of Theorem B.3. We denote by $G_\varnothing$ the empty graph. Similar to the computational lower bound analysis in Section 4, we only need to consider the case where $G^*$ is an $s$-clique. Therefore we set $\mathcal{G}^*$ to be the set of graphs isomorphic to $G^*$, and let $S^* = \{\theta A_G : G \in \mathcal{G}^*\}$. Each parameter matrix $\Theta \in S^*$ can be represented by a graph $G \in \mathcal{G}^*$. In the following, we always denote by $\Theta$ the parameter matrix with underlying graph $G$, and by $\Theta'$ the parameter matrix with underlying graph $G'$. For a graph $G$, in order to successfully detect it with the worst-case oracle, a test has to utilize at least one query $q$ that can distinguish $G$ from $G_\varnothing$. We define the set $\mathcal{G}(q)$ of such graphs accordingly, where $\|q(X)\|_{\psi_1,0}$ is the $\psi_1$-norm of $q(X)$ when $X$ follows the distribution $P_0$, and $\tau$ is defined in Definition B.1. By the definition of $\mathcal{G}(q)$, if $T \cdot \sup_{q \in \mathcal{Q}_\Psi} |\mathcal{G}(q)| < |\mathcal{G}^*|$, then there must be some $G \in \mathcal{G}^*$ such that none of the $T$ queries used by the test can distinguish $G$ from $G_\varnothing$. Therefore the worst-case oracle that returns $E_\Theta q(X)$ when $X \sim P_0$ can still satisfy Definition B.1 but will make all the tests powerless. This gives the following lemma.

Lemma B.4. For any algorithm $\Psi$ that queries the oracle at most $T$ rounds, if $T \cdot \sup_{q \in \mathcal{Q}_\Psi} |\mathcal{G}(q)| < |\mathcal{G}^*|$, then there exists an oracle $r_n \in \mathcal{R}_n(\mathcal{Q}_\Psi)$, as defined in Definition B.1, such that $\liminf_{n \to \infty} \gamma_{\mathrm{oracle}}(S^*) \ge 1$.

Proof. See Section B.1 for a detailed proof.

By Lemma B.4, to prove $\liminf_{n \to \infty} \gamma_{\mathrm{oracle}}\{S[\mathcal{G}_1(G^*), \theta]\} = 1$, it suffices to show that $T \cdot \sup_{q \in \mathcal{Q}_\Psi} |\mathcal{G}(q)| / |\mathcal{G}^*|$ is asymptotically smaller than one. In the rest of the proof, for any $q \in \mathcal{Q}_\Psi$, we derive an upper bound on $|\mathcal{G}(q)|$. To do so, we first split $\mathcal{G}(q)$ into two subsets $\mathcal{G}^+(q)$ and $\mathcal{G}^-(q)$, given by the definitions in (B.5). We now bound $|\mathcal{G}^+(q)|$; $|\mathcal{G}^-(q)|$ can be bounded in exactly the same way. The following lemma summarizes an inequality derived from the definition (B.5). It remains to calculate the left-hand side of (B.7). By Lemma 5.6, we have the corresponding expansion. For $|E(G) \cap E(G')|$, we use the trivial bound $|E(G) \cap E(G')| \le |V(G) \cap V(G')|^2 / 2$. For $q_k[G \oplus G', V(G) \cap V(G')]$, $k \ge 4$, we apply the bound given by Lemma 5.7. Therefore, by the assumption that $\theta \le (16s)^{-1}$, we obtain the required control. For $\Delta_{G,G'}$, we use a bound similar to Lemma 5.7 but specific to cliques. If a triangle has one edge in $E(G)$ and two edges in $E(G')$, then the two vertices of the edge in $E(G)$ must be in $V(G) \cap V(G')$. Therefore, an upper bound on the number of triangles that have one edge in $E(G)$ and two edges in $E(G')$ is given by the following procedure:
• Pick an edge $e$ from $E[G_{V(G) \cap V(G')}]$.
• Pick a common neighbour of the two vertices of edge $e$.
This completes the proof. Therefore we conclude the proof.

Proof of Lemma B.6. For any $G \in \mathcal{G}^*$, we have
2018-09-21T16:49:23.000Z
2018-09-21T00:00:00.000
{ "year": 2020, "sha1": "d4ea5081990390ba80006a8983a847fcb09e8b26", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1809.08204", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "d4ea5081990390ba80006a8983a847fcb09e8b26", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
56418266
pes2o/s2orc
v3-fos-license
Biogeographical karyotypic variation of Rhinophylla fischerae (Chiroptera: Phyllostomidae) suggests the occurrence of cryptic species

The genus Rhinophylla Peters, 1865 (Carolliinae: Phyllostomidae) comprises three species: R. pumilio Peters, 1865, R. fischerae Carter, 1966 and R. alethina Handley, 1966. Only the first two species have been cytogenetically studied to date. Previous studies on specimens of Rhinophylla fischerae from two populations east of the Andes (Colombia) showed a karyotype with 2n = 34 and FN = 56. In this paper, we report the results of a cytogenetic analysis of six specimens of Rhinophylla fischerae from Brazil. Chromosomal differences can probably be found among the populations because of the geographic distance between them. Metaphase chromosomes were obtained in the field by direct extraction of bone marrow. The metaphases were analyzed by conventional staining, G- and C-banding, NOR staining and FISH with telomeric probes. Rhinophylla fischerae has 2n = 38 and fundamental number FN = 68, with small amounts of constitutive heterochromatin in the centromeric regions of the chromosomes and in the long arm of pair 16. Fluorescence in situ hybridization using telomeric probes did not show any interstitial sequences. Hybridization with human 18S and 28S rDNA probes and silver staining revealed the presence of Nucleolar Organizer Regions on the long arms of pairs 16 and 18. The pattern of G-banding showed that this population has substantial chromosomal variation compared with previously studied specimens of Rhinophylla fischerae. The chromosomal differences among populations that have been morphologically classified as R. fischerae suggest that this species should be considered a cryptic species complex, and that the populations from the different geographical regions analyzed to date should be considered species of this complex, in which chromosomal rearrangements played a key role.

INTRODUCTION

Phyllostomid bats constitute a complex assemblage of the Neotropical bat fauna with a long history of taxonomic controversies (Wetterer et al., 2000; Baker et al., 2003). The Brazilian Amazonian rainforest has a rich bat fauna (Handley, 1967; Bernard et al., 2001; Sampaio, 2003); however, knowledge about this regional fauna is far from sufficient to understand the complex ecological, geographic and diversity patterns. There are few cytogenetic studies on bats from Brazilian Amazonia (Rodrigues et al., 2000, 2003; Neves et al., 2000; Ribeiro et al., 2003; Silva et al., 2005; Pieczarka et al., 2005). The results of these studies have shown that the Phyllostomidae has high intrafamilial karyotypic variation, and new species have often been detected first by their different karyotypes. In the present paper, we report a new karyotype for Rhinophylla fischerae from Brazilian Amazonia and discuss the biogeographical karyotypic variation as evidence of a species complex for this taxon.

Samples

Six specimens (two males, codes LR 765 and LR 855, and four females, codes LR 710, LR 732, LR 763 and LR 818) of Rhinophylla fischerae were obtained for cytogenetic analysis. The bats were collected from natural populations using mist nets during faunal inventory expeditions in the area of the bauxite mine of Alcoa Inc. in Juruti, Pará state, Brazil (02°29′38.8″S/56°11′27.1″W; Fig.
1). The specimens were identified in the field with the identification key for the bats of Guyana (Lim, Engstrom, 2001). The identification was confirmed by the presence of a diastema between I2 and the upper canines, as well as by the hairy edge of the interfemoral membrane (Rinehart, Kunz, 2006). Voucher specimens were fixed in 10% formalin, preserved in 70% ethanol and deposited in the mammal collection of the Museu Paraense Emílio Goeldi.

Chromosome preparations, cell culture and chromosome banding

Metaphase chromosomes were obtained in the field by direct extraction of bone marrow according to Baker et al. (2003). Chromosomal preparations and tissue biopsies were sent to the cytogenetics laboratory at the Universidade Federal do Pará in Belém.

Fluorescence in situ hybridization (FISH)

FISH with digoxigenin-labeled telomeric probes (All Human Telomere probes, Oncor) was performed according to the manufacturer's protocol. To confirm the NOR-labelled sites, biotin-dUTP was incorporated into the human 18S and 28S rDNA probes using nick translation. Briefly, the slides were incubated in RNase and pepsin solutions following the procedure described by Martins and Galetti.

RESULTS

All six specimens of Rhinophylla fischerae from Brazilian Amazonia (Fig. 1) have 2n = 38 and FN = 68 (Fig. 2, a), of which 12 chromosome pairs are metacentric/submetacentric, four pairs are subtelocentric and two pairs are acrocentric. The X chromosome is submetacentric, while the Y is small and acrocentric. C-banding detected constitutive heterochromatin at the centromeric regions of all chromosomes (Fig. 2, b) and in the long arm of pair 16. Both Ag-NOR staining (not shown) and FISH with 28S and 18S rDNA probes (Fig. 2, c) revealed Nucleolar Organizer Regions (NORs) in the proximal region of the long arms of pairs 16 and 18. FISH with the telomeric probe hybridized only at the tips of the chromosomes (Fig. 2, d), without any interstitial telomeric sequences (ITS).

DISCUSSION

Several studies have confirmed the occurrence of Rhinophylla fischerae in Pará State (eastern Amazonian region), and the previous reports were consistent with the diagnostic traits for this species (Bernard et al., 2001; Bernard, Fenton, 2007). Rhinophylla fischerae is clearly distinguished from R. pumilio by dental and external characters (Rinehart, Kunz, 2006) but, apparently, there is no evident external morphological variation within R. fischerae populations. The extensive chromosomal divergence between the R. fischerae from different geographical regions suggests that these two cytotypes are probably not part of the same species. This would be an additional cryptic species situation, as already observed in Carollia brevicauda Schinz, 1821 and C. sowelli Baker, Solari et Hoffmann, 2002 (Baker et al., 2002), and in Carollia castanea H. Allen, 1890 and C. benkeithi Solari et Baker, 2006 (Solari, Baker, 2006). Molecular data would be helpful to reinforce this hypothesis, but they are not available at the moment. Wright et al. (1999) used data from the cytochrome-b gene to study the phylogenetic relationships between the genera Carollia and Rhinophylla. Their results suggest that R. pumilio has been separated from R. fischerae for a relatively long time (8-10 million years). In the most parsimonious tree, the branch leading to R. pumilio and R. fischerae was supported by low bootstrap values and Bremer decay, which was interpreted as a result of intense intra- and inter-specific divergence and may suggest that the nominal taxa R.
pumilio and/or R. fischerae may encompass more than one species. Our karyotypic results are consistent with these interpretations for R. fischerae. Based on the data presented here, the population from Juruti (PA, Brazil) will be named herein as Rhinophylla fischerae. Further detailed studies using G-banding and chromosome painting, as well as molecular and morphological analyses of all the geographic

Fig. 2, a-d. Rhinophylla fischerae metaphases. a - G-banded. b - C-banded. c - FISH with rDNA 28S and 18S probes. d - FISH with telomeric probes. Bar = 5 µm. (The chromosomal pair 16 from another specimen, shown in the box, has a more intense heterochromatin block in the long arm.)
2018-12-18T08:08:44.397Z
2010-09-07T00:00:00.000
{ "year": 2010, "sha1": "864324360bfa758cb78f32766d04c0e5369286f1", "oa_license": "CCBY", "oa_url": "https://compcytogen.pensoft.net/article/1696/download/pdf/", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "864324360bfa758cb78f32766d04c0e5369286f1", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology" ] }
219751643
pes2o/s2orc
v3-fos-license
Dynamic economic dispatch using hybrid metaheuristics

The dynamic economic dispatch problem, or DED, is an extension of the static economic dispatch problem, or SED, and is used to determine the generation schedule of the committed units so as to meet the predicted load demand over a time horizon at minimum operating cost under ramp rate and other constraints. This work presents an efficient hybrid method based on particle swarm optimization (PSO) and termite colony optimization (TCO) for solving the DED problem. The hybrid method employs PSO for global search and TCO for local search in an interleaved mode towards finding the optimal solution. After the first round of local search by TCO, the best local solutions are considered by PSO to update the schedules globally. In the next round, TCO performs local search around each solution found by PSO. This paper reports the methodology and results of applying the PSO-TCO hybrid to 5-unit, 10-unit and 30-unit power dispatch problems; the results show that PSO-TCO (HPSTCO) gives improved solutions compared to PSO or TCO applied separately, and also compared to other hybrid methods.

Introduction

Usually, the economic load dispatch (ELD) problem means the static economic dispatch problem, or SED, where the objective is to determine the optimal schedule of online generating units' outputs so as to meet the load demand at a certain time at the minimum operating cost under various system and operational constraints. In contrast, the objective of the dynamic economic dispatch (DED) problem is to schedule the generator outputs economically over a certain period of time. The DED problem takes into consideration the limits on the generator ramping rate, coupled with real time intervals, to keep the thermal stress on generation equipment such as turbines and boilers within safe limits and thus protect their life [1]. The DED problem divides the dispatch period into a number of small time intervals, and an SED is solved in each interval. Since the DED problem was introduced in the 1980s, several optimization techniques and procedures have been used for solving it with complex objective functions or constraints. A number of classical methods have been applied to this problem, such as the lambda iterative method [2], gradient projection method [3], Lagrange relaxation [4], linear programming [5], dynamic programming [6] and interior point method [7]. Most of these methods are not applicable to non-smooth or non-convex cost functions. To overcome this problem, many heuristic optimization methods have been employed to solve the DED problem; such methods include ant colony optimization (ACO) [8], particle swarm optimization (PSO) [9-11], the Levenberg-Marquardt back-propagation algorithm (LMBPA) [12], differential evolution (DE) [13], the artificial immune system (AIS) algorithm [14], harmony search (HS) [15] and the bee swarm optimization (BSO) algorithm [16], among others. Many of these techniques have proved their effectiveness in solving the DED problem with few or no restrictions on the shape of the cost function curves. These approaches solve the DED by employing an initial population of individuals, each of which represents a candidate solution for the problem. They then evolve the initial population by successively applying a set of operators on the old solutions to transform them into new solutions.
In recent years, the trend in solving DED problems has changed from single-heuristic techniques to hybrid metaheuristics, i.e., combinations of two or more techniques such as PSO-ACO, DE-SQP and PSO-SQP. These hybrid techniques have been shown to solve DED problems better than single-heuristic methods, as hybridization lets the individual techniques mitigate their limitations and complement each other with their characteristic strengths.

Earlier hybrids

In 2005, Victoire et al. proposed the hybrid EP-SQP, made up of evolutionary programming and sequential quadratic programming, to solve DED problems [17]. They also experimented with a hybrid of PSO and SQP techniques to solve the DED problem with the valve point loading (VPL) effect [18]. In 2009, Yuan et al. [19] hybridized PSO with the differential evolution method for solving DED with VPL. In 2010, the hybrid SOA-SQP method, which combined the seeker optimization algorithm (SOA) with SQP, was used by Sivasubramani and Swarup [20] to solve DED with VPL. In 2012, Cai et al. [21] reported the application of hybrid CPSO-SQP to DED with VPL. Again in 2012, Swain et al. [22] hybridized the gravitational search algorithm (GSA) with SQP (GSA-SQP) to solve DED with VPL. Elaiw et al. [23] compared, in 2013, the efficacy of the hybrids DE-SQP and PSO-SQP in solving DED with the VPL effect. Chen et al. [24] used a combination of three methods, namely low-discrepancy sequences (LDS), the improved shuffled frog leaping algorithm (ISFLA) and SQP, to solve the DED problem. In this hybrid (termed LDISS), LDS is used to generate the initial population, ISFLA is responsible for global search and SQP is used for local search. In 2013, Mohammadi-Ivatloo et al. [25] introduced a hybrid immune genetic algorithm to solve DED considering VPL, prohibited operating zones and ramp rate constraints along with transmission losses. Zhang et al. [26] proposed the hybrid bare bones (BB)-PSO, or BBPSO, in 2014 to solve DED with VPL only. Among recent developments are two hybrid techniques, BBO-PSOTVAC and FA-PSOTVAC, developed in 2018 by Hamed et al. [27], combining the firefly algorithm (FA) and biogeography-based optimization (BBO) with time-varying acceleration-based particle swarm optimization (PSOTVAC) to improve the solution of DED. In the same year, Pan et al. [28] solved the DED problem with VPL using a hybrid technique, MILP-IPM, involving mixed-integer linear programming (MILP) and the interior point method (IPM). Another very recent development (in 2018), by Xiong and Shi [29], is a hybrid of BBO and brain storm optimization (BSO) that obtains a better solution for DED with VPL.

In the present work, the authors have applied, for the first time, a hybrid computational approach, HPSTCO, that combines PSO and TCO to solve the DED problem. The hybrid method employs PSO iterations for global search and TCO iterations for exploring the locality near the global solutions, interleaving both search processes to overcome the drawback of premature convergence in the original PSO method. The interleaving process requires that PSO and TCO pass their solutions to each other. The solutions of TCO are updated in PSO iterations by considering the global best solution. Similarly, the solutions of PSO are adjusted by considering the locally observed information from TCO. A solution point found by the PSO method can be used as an initial condition in the TCO method.
The hybrid HPSTCO model has been programmed in MATLAB and simulation runs executed for 5-unit, 10-unit and 30-unit DED systems with parameters taken from the literature. The performance of HPSTCO, as compared using a benchmark function, is quite encouraging.

Motivation and contribution

The novelty of the present study lies in the fact that PSO and TCO together, i.e., their hybrid combination (HPSTCO), have never been tried before to optimize small- to large-scale economic load dispatch problems. However, PSO and TCO individually, and in combination with other metaheuristics, have been applied earlier to different ELD problems. The contribution of the paper is to establish, by reporting four distinct test cases and comparing the results with other hybrid methods for each of them, that the HPSTCO hybrid is quite effective and advantageous in dealing with small-scale (5- and 10-unit) as well as medium-scale (30-unit) DED problems. In medium- to large-scale systems with higher-capacity turbines, the fuel cost function is highly non-smooth and non-convex and contains discontinuities at each boundary, forming multiple local optima. The complexity of the problem also increases significantly with the number of generating units because of their combinatorial nature. The present work tackles this challenge, there being no earlier precedent for applying this particular (HPSTCO) hybrid optimization mechanism. Therefore, it can be said that this paper introduces a new metaheuristic to DED with significant results. The current study covers four different test cases of DED involving 5, 10 and 30 generating units:

Case 1: 5-unit system with valve point effects, ramp rate constraints, prohibited operating zones and transmission losses.
Case 2: 10-unit system with valve point effects, ramp rate constraints and transmission losses.
Case 3: 10-unit system with valve point effects and ramp rate constraints, without transmission losses.
Case 4: 30-unit system with valve point effects and ramp rate constraints, without transmission losses.

Organization

The rest of this paper is organized as follows. The "Model" section presents the DED problem formulation with all constraints and limitations. The "Method" section explains the basic concepts and search principles of the PSO and TCO methods; in the same section, the proposed HPSTCO algorithm is introduced and described with a flowchart and details of its stages. Case studies, simulation results, result analysis and comparison are presented in the "Result and discussion" section. Finally, the "Conclusion" section draws some concluding remarks on the limitations of the present work and the future scope of research.

Model

A comprehensive study of the basic DED problem is done here. A non-smooth, non-convex, non-differentiable, single- and multi-objective, multi-constraint model of the ED problem is formalized in this section.

Objective function

The objective function of the DED problem, which is to minimize the total production cost over the operating horizon, can be written as

$$C_T = \sum_{t=1}^{T} \sum_{i=1}^{n} C_{i,t}(P_{i,t}) \qquad (1)$$

where $C_T$ (in $/h) is the total generation cost, $C_{i,t}$ is the generation cost of the $i$th unit at time $t$, $n$ is the number of dispatchable power generation units (here, $n$ = 5, 10 and 30), and $P_{i,t}$ (in MW) is the power output of the $i$th unit at time $t$. $T$ is the total number of hours in the operating horizon.
The basic ELD objective function is represented by a non-smooth curve (a quadratic polynomial) with the VPL (ripple) effect modeled by a sinusoidal function, as shown in Eq. (2):

$$C_{i,t} = a_i + b_i P_{i,t} + c_i P_{i,t}^2 + e_i \sin\bigl(f_i (P_i^{\min} - P_{i,t})\bigr) \qquad (2)$$

where $a_i$ (in $/h), $b_i$ (in $/MWh) and $c_i$ (in $/MW²h) are the cost coefficients of the $i$th unit, and $e_i$ (in $/h) and $f_i$ (in 1/MW) are the VPL coefficients of the $i$th unit. $P_i^{\min}$ (in MW) is the minimum generation capacity limit of unit $i$. In the generation cost function, the term $e_i \sin(f_i (P_i^{\min} - P_{i,t}))$ represents the VPL effect. The objective function (Eq. 1) of the DED problem should be minimized subject to the following constraints.

Real power balance constraint

$$\sum_{i=1}^{n} P_{i,t} = P_D + P_L \qquad (3)$$

In Eq. (3), $P_i$ (in MW) is the power generated by the $i$th unit, $P_D$ (in MW) is the total load demand and $P_L$ (in MW) is the total transmission loss of the system at time $t$. $P_L$ is computed using B-coefficients, which can be calculated using Kron's loss formula (the B-matrix coefficients). In this work, the B-matrix coefficient method is used to calculate the system loss, as follows:

$$P_L = \sum_{i=1}^{n} \sum_{j=1}^{n} P_i B_{ij} P_j \qquad (4)$$

Generator capacity constraint

The power output $P_i$ of the $i$th generator lies between the minimum power $P_i^{\min}$ and maximum power $P_i^{\max}$ (in MW):

$$P_i^{\min} \le P_{i,t} \le P_i^{\max} \qquad (5)$$

Ramp rate limit (RRL)

A production unit generating power $P_{i,0}$ can increase or decrease its active power output $P_{i,t}$ only within the upper ramp rate limit $UR_i$ (in MW/h) and down ramp rate limit $DR_i$ (in MW/h), as shown in Eqs. (6) and (7). For an increase in output power,

$$P_{i,t} - P_{i,t-1} \le UR_i \qquad (6)$$

and for a decrease in output power,

$$P_{i,t-1} - P_{i,t} \le DR_i \qquad (7)$$

Prohibited operating zone (POZ)

A prohibited operating zone demarcates a span of active power output of a generator that is avoided because of the technical operation of the shaft (unreasonable vibration of bearings). Usually, operation is not allowed within the prohibited spans. The allowable operating range of a generator is given in Eq. (8):

$$P_i^{\min} \le P_{i,t} \le P_{i,1}^{\mathrm{lower}}; \quad P_{i,j-1}^{\mathrm{upper}} \le P_{i,t} \le P_{i,j}^{\mathrm{lower}},\ j = 2, \ldots, n_i; \quad P_{i,n_i}^{\mathrm{upper}} \le P_{i,t} \le P_i^{\max} \qquad (8)$$

Here $j$ indexes the POZs, $P_{i,j-1}^{\mathrm{upper}}$ is the upper boundary and $P_{i,j}^{\mathrm{lower}}$ the lower boundary of the $j$th POZ of the $i$th unit, and $n_i$ is the number of prohibited operating zones of unit $i$. The main objective of DED is to minimize the generation cost $C_T$ and optimize the power generation schedule $P_{i,t}$ as in Eqs. (1) and (2), subject to the constraints in Eqs. (3) to (9), used in different combinations in the different test cases.

Method

The hybrid approach HPSTCO taken up in this study comprises two basic metaheuristics, namely particle swarm optimization (PSO) and termite colony optimization (TCO). A brief outline and the working principle of these two optimization techniques are first discussed in this section.

Particle swarm optimization (PSO)

PSO is a swarm intelligence technique inspired by the social behavior of bird flocking and fish schooling. When searching for food, birds and fish follow the neighbor that is nearest to the food. Each individual solution in PSO is called a 'particle' and represents a bird or a fish in the search space. Each particle has a position, a velocity and a fitness value. While moving in the solution space of the fitness function, the particles aim to improve their next position based on their past experience and the best position in the swarm. Therefore, every individual gravitates toward a stochastically weighted average of its own previous best position and that of its neighborhood companions [30]. In every iteration of PSO, the position and velocity of every particle are updated and the value of the fitness function at its current location is evaluated.
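In the present problem, the fitness a particle evaluates is the dispatch cost of Eqs. (1)-(2), with the constraints of Eqs. (3)-(9) handled by penalties or repair. The sketch below is illustrative only: the coefficient arrays and the penalty weight `w` are placeholder assumptions to be populated from Tables 2 and 7, and the loss term of Eq. (4) is omitted for brevity.

```python
import numpy as np

def fuel_cost(P, a, b, c, e, f, p_min):
    """Total fuel cost of a schedule P[t, i] (MW) per Eqs. (1)-(2).
    Note: some formulations take |e*sin(...)| for the valve-point term."""
    return float(np.sum(a + b * P + c * P**2 + e * np.sin(f * (p_min - P))))

def fitness(P, coeffs, demand, w=1e6):
    """Penalized dispatch cost: fuel cost plus a quadratic penalty on the
    hourly power-balance violation of Eq. (3)."""
    a, b, c, e, f, p_min = coeffs
    gap = P.sum(axis=1) - demand      # per-hour balance violation (MW)
    return fuel_cost(P, a, b, c, e, f, p_min) + w * float(np.sum(gap ** 2))
```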
Mathematically, given a swarm of particles, each particle $i$ is associated with a position vector $X_i = \{X_{i1}, X_{i2}, \ldots, X_{iD}\}$, which is a feasible solution for an optimization problem in the $D$-dimensional search space $S$. Let the previous best position (pbest) of a particle $i$ be denoted by $X_{pi}$, and the best position ever found by any particle (gbest) be denoted by $X_{gi}$. At the start of the search, all positions and velocities are initialized randomly. At each iteration, the position vector of each particle $i$ is updated by adding an increment vector, or velocity, $V_i = \{V_{i1}, V_{i2}, \ldots, V_{iD}\}$, as per Eq. (11). The velocity is updated according to Eq. (10):

$$V_i^{k+1} = w V_i^k + c_1 r_1 (X_{pi} - X_i^k) + c_2 r_2 (X_{gi} - X_i^k) \qquad (10)$$

$$X_i^{k+1} = X_i^k + V_i^{k+1} \qquad (11)$$

Here $V_i^k$ and $V_i^{k+1}$ represent the velocity vectors of particle $i$ in the previous and current iterations, $w$ is the inertia weight, $c_1$ and $c_2$ are two positive constants, and $r_1$ and $r_2$ are two random parameters drawn from a uniform distribution on [0, 1], which limit the velocity of the particle in each coordinate direction. The new location of each particle is compared with its pbest value. If the new location is better than pbest, then pbest is updated to the new location; otherwise the original pbest is kept unchanged. The new global optimum gbest is updated according to the pbest values of the new particle swarm. This iterative process continues until a stop criterion is satisfied or the maximum number of iterations has been reached. Eventually, the particle swarm converges to the global optimum solution.

Termite colony optimization

TCO is an optimization method inspired by the intelligent behavior of termites during their mound-building process. Initially, the termites search arbitrarily for soil pellets and, after finding one, deposit it on the mound. Later on, the termites move on the basis of the observed trail of pheromone (a chemical) that they deposit on the path when returning after depositing soil pellets on the mound. The pheromone acts as an attractive stimulus that leads other members of the colony to follow shorter paths with higher pheromone intensity, since it is a volatile chemical that evaporates with time. Termites that travel the shortest path reinforce it with more pheromone, thereby helping others to follow them. Assuming a termite population of size $M$ within the $D$-dimensional search space, the position of the $i$th termite is denoted by $X_i = \{X_{i1}, \ldots, X_{iD}\}$, which represents a possible solution of an optimization problem. The cost/fitness function value for each position $X_i$ is $\mathrm{fit}(X_i)$, which represents the amount of pheromone deposited on a hill. The basic steps of TCO can be summarized as follows:

1. Initialize the population weights, the positions of the termites and the number of iterations. (Every termite has its own random position, velocity, desirability and rate of pheromone evaporation.)
2. Evaluate the fitness function value for each termite.
3. Determine the best position and pheromone evaporation rate of each termite.
4. Determine the position of the best termite.
5. Update the pheromone evaporation rate, velocity and position of each termite.
6. Stop if the optimization condition is satisfied; if not, repeat from step 2.

If $\tau_i(t-1)$ and $\tau_i(t)$ stand for the pheromone levels of the $i$th termite at the previous and current locations, respectively, then the pheromone update rule states

$$\tau_i(t) = (1 - \rho)\,\tau_i(t-1) \qquad (12)$$

where $\rho$ is the evaporation rate of the pheromone, taken in the range [0, 1].
After updating the pheromone level, each termite adjusts its route and moves to a new location. Termite movement is therefore a function of the pheromone level at the visited locations and the distance between the termite's location and the visited locations. There are two possible modes of movement: if there is no location previously visited by the swarm in the neighborhood of a termite, it moves randomly; if there are one or more visited locations, the termite selects the location with the highest pheromone level and moves toward that position. When the termite moves randomly in search of a new gainful position, the position is updated as

$$X_i(t) = X_i(t-1) + R_w \qquad (13)$$

where $X_i(t-1)$ and $X_i(t)$ represent the current and new positions of the termite, respectively, and $R_w$ is a random-walk step that depends on the current position and the radius of search. When the termite moves toward a gainful position, i.e., a best local position $B_i$ with a higher pheromone level than the current position, the position is updated as

$$X_i(t) = X_i(t-1) + w_b r_b \bigl(B_i - X_i(t-1)\bigr) \qquad (14)$$

where $1 < w_b \le 2$ and $0 < r_b < 1$ probabilistically control the attraction of the termite toward the local best position.

Hybrid of PSO and TCO (HPSTCO)

The present work adopts a hybrid of the PSO and TCO algorithms (called HPSTCO), expecting that their usefulness in solving DED problems is enhanced when they are combined in a complementary mode. HPSTCO exploits the global search potential of PSO along with the local search potential of TCO in a given search space. While PSO iterations produce globally distributed solutions (overlooking the localized search space around each global solution), its hybrid partner TCO complements PSO by exploring any potentially good locality in more detail. The solutions obtained by the PSO iterations are fed to the TCO iterations in order to gravitate more termites toward gainful positions. In turn, the solutions found by the termites in TCO update the positions of the corresponding particles, thereby giving the particles a good starting point in the global search space.

The basic input parameters of HPSTCO are: the maximum number of iterations (max_iter), the population size (s), the number of PSO iterations (n1), the number of TCO iterations (n2) and the number of solutions fed from PSO (TCO) to TCO (PSO) at the switching time (η). The parameters n1 and n2 give the number of PSO and TCO iterations, respectively, to be executed before a switching time, implying that n1 iterations of PSO are followed by n2 iterations of TCO. The pseudocode of the hybrid algorithm is given in Fig. 1.

Initialization

The HPSTCO algorithm starts with PSO iterations with n particles placed at random positions in the solution space. A position is a candidate for the priority list P = (p1, p2, ..., pn). Each element of the list represents an activity, and its corresponding value gives the priority of that activity. Hence, the position vector $X_i = \{X_{i1}, X_{i2}, \ldots, X_{iD}\}$ of each individual $i$ represents the priority values of n activities. A solution space of priorities is created with lower and upper bounds defined as Lb = 0.0 and Ub = 1.0; the value of each element must be limited to [Lb, Ub].

Global search

Each particle represents a possible schedule for the DED problem. The velocity of the particles is updated by Eq. (15), a modified form of Eq. (10) of the original PSO:

$$V_i^{k+1} = \gamma \bigl[ V_i^k + c_1 r_1 (X_{pi} - X_i^k) + c_2 r_2 (X_{gi} - X_i^k) \bigr] \qquad (15)$$

where $0 < \gamma < 1$ is a constriction factor that improves the convergence speed.
The position of each particle is updated from its current position and velocity, and the pbest and gbest values are determined by calculating the fitness of the proposed schedules.

Switching from global to local search

At the switching time, the HPSTCO method switches from PSO to TCO, and a part of the solutions found by PSO is passed to TCO. Each such solution determines the start position of a termite in the next round of TCO iterations; essentially, each particle switches its type to a termite.

Local search

TCO uses the solutions passed from PSO as the start positions of its termites. It then tries to find improved solutions in the local neighborhoods of those solutions (now termites). To determine the neighborhood of each termite, the Euclidean distances of all termites from the candidate termite are calculated; if the distance is smaller than a threshold, the corresponding termite is considered a neighbor of the candidate termite. The threshold value is adjusted dynamically, gradually decreasing as the algorithm proceeds. A termite with no neighbor moves randomly following Eq. (13); a termite with one or more neighbors selects one of them randomly as its neighbor and updates its position following Eq. (14).

Switching from local to global search

In this phase, each termite switches its type back to a particle. The solutions found by the termites update the positions of the corresponding particles in the PSO, and the previous best position of each particle (pbest) and the global best position of the entire swarm (gbest) are updated accordingly. The fitness of the new solution for a particle is compared with its previous fitness value; if the new fitness is better, the new solution is taken as the new pbest. Similarly, the gbest position is compared with this new pbest position, and if the latter has better fitness than gbest, then gbest is updated with the current pbest.

Constraint handling

In each cycle of HPSTCO, a new population of feasible and infeasible solutions is generated. An infeasible solution is one that violates the constraints of the problem. After an infeasible solution is detected, it is repaired into a feasible solution: the activity that violates the constraints is exchanged with the next activity (in the activity list) of lower priority, and the constraint handling process is applied to the new activity list. This process is iterated until the infeasible solution is converted into a feasible one.

Result and discussion

To assess the effectiveness of HPSTCO, it is applied to the DED problem on three test systems having 5, 10 and 30 generators, considering the valve point loading effect. The algorithm has been coded in MATLAB and run on a 64-bit PC with the following configuration. Hardware: CPU Intel Core i5-6200U at 2.30 GHz, 8.0 GB RAM, 500 GB hard drive. Software: Windows 10 operating system, MATLAB 8.1 (R2014a). The values of the input parameters of the algorithm are given in Table 1.
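As a compact illustration of the interleaving just described, the following sketch alternates n1 PSO updates (Eq. 15) with n2 TCO moves (Eq. 14). It is a simplified sketch, not the authors' implementation: the neighborhood test is reduced to attraction toward the current best, the random walk is a small Gaussian step, and all parameter values are assumptions rather than the tuned settings of Table 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def hpstco(fitness, dim, s=50, rounds=10, n1=5, n2=5,
           c1=2.0, c2=2.0, gamma=0.7, w_b=1.5, r_b=0.5, lb=0.0, ub=1.0):
    """Alternate PSO global search and TCO local search over priority vectors."""
    X = rng.uniform(lb, ub, (s, dim))              # particle/termite positions
    V = np.zeros((s, dim))
    pbest = X.copy()
    pfit = np.array([fitness(x) for x in X])
    gbest = pbest[np.argmin(pfit)].copy()

    def absorb(X):                                  # hand solutions back to PSO
        nonlocal gbest
        fit = np.array([fitness(x) for x in X])
        better = fit < pfit
        pbest[better], pfit[better] = X[better], fit[better]
        gbest = pbest[np.argmin(pfit)].copy()

    for _ in range(rounds):
        for _ in range(n1):                         # global search (PSO, Eq. 15)
            r1, r2 = rng.random((s, dim)), rng.random((s, dim))
            V[:] = gamma * (V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X))
            X = np.clip(X + V, lb, ub)
            absorb(X)
        for _ in range(n2):                         # local search (TCO, Eq. 14)
            best = X[np.argmin([fitness(x) for x in X])]
            step = w_b * r_b * (best - X)           # attraction to local best
            walk = 0.02 * rng.standard_normal((s, dim))   # crude random walk
            X = np.clip(X + step + walk, lb, ub)
        absorb(X)                                   # termites become particles
    return gbest, float(fitness(gbest))
```

The `absorb` step plays the role of the two switching phases: termite solutions overwrite pbest and gbest whenever they improve, giving PSO a better starting point for the next round.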
The MATLAB simulations cover four different test cases of DED involving 5, 10 and 30 generating units:

Case 1: 5-unit system with valve point effects, ramp rate constraints, prohibited operating zones and transmission losses.
Case 2: 10-unit system with valve point effects, ramp rate constraints and transmission losses.
Case 3: 10-unit system with valve point effects and ramp rate constraints, without transmission losses.
Case 4: 30-unit system with valve point effects and ramp rate constraints, without transmission losses.

Test case 1: 5-unit system

In this test system, the valve point loading effects, ramp rate constraints, prohibited operating zones, transmission losses and generation limits have been considered. The essential input data of the 5-unit system are listed in Table 2 [22], which includes the prohibited zones of units 1 to 5; these zones result in two disjoint subregions for each of units 1, 2, 3, 4 and 5. The B-coefficient matrix used for calculating the power loss is given in Table 3. The load demand of the system is divided into 24 dispatch intervals of a day, as shown in Table 4. The population size is 50. The fuel cost and transmission losses obtained by the HPSTCO technique are 42,151.3377 $/day and 194.3182 MW, respectively, as shown in Table 5; the graphical representation of Table 5 is shown in Fig. 2. Table 6 compares the fuel cost obtained for the 5-unit DED system by HPSTCO with other hybrid methods reported in the literature. Table 6 shows that the minimum costs yielded by CMIWO [36], MGDE [37], BBOSB [29], MILP-IPM [28], HIGA [25], BBPSO [26] and LDISS-2 [24] (43,213 $/day) are all higher than the 42,151.3377 $/day obtained by HPSTCO. The average execution time for one complete solution was 0.98 min up to the eighth iteration, after which the convergence curve becomes flat; this is acceptable for DED solutions, though it is not the shortest time in comparison with other methods. The convergence characteristic of the proposed algorithm is depicted in Fig. 3.

Test case 2: 10-unit system

In this test case, the valve point effect, ramp rate constraints, transmission losses and generation limits are considered. The basic input data of the 10-unit system are listed in Table 7, and the B-matrix coefficients (per MW) for calculating the power loss are given in Table 8. The load demand of the system is divided into 24 dispatch intervals, as shown in Table 9. Results obtained from the MATLAB simulation are presented in Table 10, whose graphical representation is shown in Fig. 4. Table 11 compares the output of HPSTCO with that of other recently published hybrid methods, namely hybrid MILP-IPM [28], hybrid BBOSB [29], HIGA [25], hybrid LDISS [24] and hybrid EP-SQP [32]; the costs (in $/day) yielded by these methods for the 10-unit system are 1,040,676, 1,038,362.014, 1,041,087.802, 1,039,083 and 1,035,748, respectively. In comparison, the cost and loss yielded by HPSTCO are 1,035,730.203 $/day and 811.6073 MW, the lowest among all. The average execution time for one complete solution was 1.85 min, which is not the shortest of all but less than that of many DED solutions. The convergence characteristic of HPSTCO is depicted in Fig. 5, which shows that the result converged after 50 generations and 1.85 min.

Test case 3: 10-unit system without transmission loss

Unlike test case 2, here a 10-unit system is considered without transmission loss.
As in test case 2, the valve point effect, ramp rate constraints and generation limits are considered. The input data (DED parameters) of the 10-unit system are the same as listed in Table 7, and the load demand of the system is divided into 24 dispatch intervals, the same as shown in Table 9. The best generation schedule at each hourly interval, as obtained through the MATLAB simulation, is presented in Table 12. The fuel cost yielded by the HPSTCO method is 1,015,438.967 $/day. The graphical representation of Table 12 is shown in Fig. 6.

Test case 4: 30-unit system

In this test case, the input data [15] are obtained by tripling the data of the 10-unit system given in Tables 7 and 8. The load demand of the system, divided into 24 dispatch intervals, is given in Table 14. The VPL effects, ramp rate constraints and generation limits are considered. The DED results obtained by the MATLAB simulation are presented in Table 15, whose graphical representation is shown in Fig. 8. The fuel cost obtained by the proposed method is 1,051,964.4 $/day. In Table 16, the simulation result of the proposed HPSTCO is compared with other recently published hybrid methods, namely BBOSB [29], FA-PSOTVAC [27], BBPSO [26], HIGA [25], LDISS [24], HHS [15] and hybrid EP-SQP [17]. Figure 9 shows that the technique takes only nine iterations to reach steady state, which is a good convergence characteristic compared with other methods.

Conclusion

In this paper, HPSTCO has been taken up as a cost minimization and schedule optimization method over a 24-h time horizon in four test cases representing small- to medium-scale thermal power generation systems. Such a hybrid method had never been implemented before for dynamic economic dispatch. The synergistic combination of two popular optimization techniques has been able to mitigate the limitations of the individual techniques: besides improving the convergence rate, the exploration of the neighborhood around candidate solutions has improved. The hybrid overcomes the problem of convergence to local optima and yields a good, globally optimal solution. From the trial runs of the test cases, it can be concluded that HPSTCO is reliable and robust and can consistently provide high-quality solutions of DED considering practical operational constraints, such as valve point effects and multiple fuel changes. The convergence characteristics of HPSTCO are also quite acceptable, though not among the very best. The performance of HPSTCO in terms of cost minimization and dispatch schedule optimization, when compared with various other hybrids, is found to be quite competitive, and the method can safely be used as an effective metaheuristic for small- to medium-scale, simple to complex DED problems. In future, this hybrid method can be applied to the dynamic economic emission dispatch (DEED), multi-objective economic dispatch (MOED), multi-objective economic emission dispatch (MEED) and multi-area economic dispatch (MAED) problems of large-scale power generation systems. The HPSTCO method can also be applied to study the impact of renewable energy sources, such as solar and wind, on the optimal dispatch problem.
Optimization algorithm: particle swarm optimization (PSO)
$X_i$: position of particle $i$; $V_i^k$: velocity vector of particle $i$ in the previous iteration; $V_i^{k+1}$: velocity vector of particle $i$ in the current iteration; $D$: dimension; $S$: $D$-dimensional search space; $c_1, c_2$: positive constants; $r_1, r_2$: random parameters of uniform distribution; $X_{pi}$: local best (pbest) position of particle $i$; $X_{gi}$: global best (gbest) position of the swarm; $V_i$: velocity of particle $i$; $w$: inertia weight.

Termite colony optimization (TCO)
$X_i$: position of the $i$th termite; $M$: size of the termite population; $D$: dimension; $\mathrm{fit}(X_i)$: fitness function value for each termite position; $R_w$: random walk function of the current position and radius of search; $B_i$: best local position of a termite; $\tau_i(t-1)$: pheromone level of the $i$th termite at the previous location; $\tau_i(t)$: pheromone level of the $i$th termite at the current location; $\rho$: evaporation rate of pheromone; $w_b, r_b$: parameters that probabilistically control the attraction of the termite toward the local best position; $X_{\max}, X_{\min}$: maximum and minimum limits of the search space along a dimension.
2020-02-13T09:22:22.713Z
2020-02-10T00:00:00.000
{ "year": 2020, "sha1": "eddbfb3629d3355907e3cd255bd4c41b12aa6948", "oa_license": "CCBY", "oa_url": "https://jesit.springeropen.com/track/pdf/10.1186/s43067-020-0011-2", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "9ea174de8213a865ab343b34faadd5bd672a404b", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
237769747
pes2o/s2orc
v3-fos-license
Synthesis and Application of Inverse Spinel NiFe2O4 Produced by the EDTA-Citrate Method: Structural, Vibrational, Magnetic, and Electrochemical Properties

In this paper we study the production of single-phase NiFe2O4 powders synthesized by the complexation method combining EDTA and citrate. The structural, optical, magnetic, and electrochemical properties of the powders were studied as a function of the synthesis pH. Powders obtained at pH 9 showed larger crystallite sizes (73 nm) in comparison with those produced at pH 3 (21 nm). The band gap energy was found to be inversely proportional to the crystallite size (1.85 and 2.0 eV for powders with crystallites of 73 nm and 21 nm, respectively). The synthesized materials presented an inverse spinel crystalline structure. The sample obtained at the higher pH was found to be fully magnetically saturated, with a saturation magnetization of 50.5 emu g-1, while that synthesized at pH 3 is unsaturated, with a maximum measured magnetization of 48.4 emu g-1. Cyclic voltammetry and charge-discharge curves indicate a battery-type behavior, with a better performance for the material obtained at pH 9 (65 C g-1 at a specific current of 3 A g-1). The remarkable performance of this material is associated with its microstructural characteristics (particle size, particle agglomeration and porosity). This work offers an alternative synthesis route for obtaining spinel ferrites for magnetic and electrochemical applications.

Introduction

The production of nanoscale ceramic materials promises to lead the industry to a new standard capable of revolutionizing modern science [1]. Ferrites are ferrimagnetic ceramics [2,3] composed of a mixture of 3d transition ions, with the general molecular formula (X1-zFez)[XzFe2-z]O4, where z is the degree of inversion. Ferrites crystallize in the spinel-type structure, with tetrahedral sites (A-sites, cations in parentheses) and octahedral sites (B-sites, cations in brackets) occupied by divalent (X) and trivalent (Fe) cations [4]. Depending on the distribution of cations over the A and B sites, ferrites are classified as normal, inverse or partially inverse. The degree of spinel inversion depends upon the synthesis method and the synthesis parameters, such as the pH of the solution, temperature, heating rate, calcination temperature and time, surfactant, chelating agents, and ratio of cations, among others [5,6]. The magnetic properties of these oxides also depend on their degree of inversion [7,8]. Within the spinel family of oxides, nickel ferrite (NiFe2O4) has attracted considerable scientific interest because its physical and chemical properties allow these materials to be applied in electrochemical devices [10,16,25]. Besides, nickel ferrites have a band gap in the visible light region [26], due to the electronic excitation from the O-2p level (valence band) to the Fe-3d level (conduction band) [27], which is an interesting property for application in photocatalysis. Nickel ferrites may crystallize in an inverse spinel structure [28]; however, the literature also reports several examples where Ni cations occupy both tetrahedral and octahedral sites [8,10,29-31], leading to a partially inverse spinel. Several synthesis methodologies have already been used to produce ferrites, including co-precipitation [14,32], proteic sol-gel [10], hydrothermal [33-35], solvothermal [36-38], conventional sol-gel [29], combustion [39], solid-state reaction [40], solution blow spinning [16], and complexation [41].
The sol-gel process is one of the most accepted methods for producing ferrites, in which chelates are formed through a complexation reaction between multivalent ions and citric acid, a multidentate ligand [30,42]. However, the possibility of using transition metals with different oxidation states to produce a wide variety of ceramic materials requires a versatile complexing agent such as ethylenediaminetetraacetic acid (EDTA). EDTA combines with metal ions in a 1:1 proportion regardless of the cation charge, the exception being alkali metals, which do not chelate with EDTA. The molecule has six electron-pair donor groups for bonding with metal ions (four carboxylic groups and two amino groups); thus, it is classified as a hexadentate ligand [42]. The rationale for using the EDTA-citrate method in this study is that it allows the formation of a more stable chelate complex than that obtained using only EDTA or citric acid [43]; citric acid acts as an auxiliary complexing agent in this synthesis process [44]. This methodology allows adjusting parameters such as the heating time, heating rate, calcination temperature, initial pH of the synthesis and the chelant/cation stoichiometric ratio, thus yielding nanoparticles with different morphologies, high purity, a high degree of crystallinity and precise stoichiometric control [45-48]. In this synthesis process, the ions in solution are organized at the atomic level within an organic matrix formed by the complexing agents [49], which allows the crystallization process to occur at lower temperatures, resulting in particles with nanometric sizes [43]. In this work, we studied the production of inverse spinel-type NiFe2O4 nanoparticles (z = 1) using the EDTA-citrate complexation method under acidic (pH = 3) and basic (pH = 9) conditions. The influence of pH on the morphological, optical, magnetic, and electrochemical properties of the nickel ferrites was investigated using several materials characterization techniques. To the best of our knowledge, this is the first report on the practical investigation of the relation between the physico-chemical properties of NiFe2O4 and the pH of the EDTA-citrate production route.

Microstructural Characterization

The crystalline structure of the samples was studied by X-ray diffraction (Shimadzu, XRD-7000) at room temperature using Cu-Kα radiation (λ = 1.5406 Å). The data were analyzed by Rietveld refinement.

Vibrational Characterization

Raman spectra were recorded at room temperature using a laser with a wavelength of 532 nm as the excitation source. Data were recorded in the frequency range of 100-1000 cm-1. The laser power at the sample was reduced to 1 mW to avoid overheating. We used an acquisition time of 20 s and 30 accumulations (Horiba, model HR-Evolution). UV-vis diffuse reflectance spectra (UV-vis DRS) were recorded at 300 K in a UV-Vis-NIR spectrometer (Cary, model 5G) in the range of 200-800 nm. Barium sulfate (BaSO4) was used as the reference sample.

Mössbauer spectroscopy and magnetic characterization

The 57Fe Mössbauer spectra were recorded in transmission mode, at 12 K, with a 57Co:Rh source, using a spectrometer from SEECo equipped with a He closed-cycle cryostat from Janis. The isomer shift (IS) values are given relative to α-Fe at 300 K. The Mössbauer spectra were fitted using the software Normos-95. The magnetic measurements were performed at 5 K using a Physical Property Measurement System (PPMS) equipped with a vibrating sample magnetometer (Quantum Design, model Dynacool).
Electrochemical characterization

Cyclic voltammetry (CV), galvanostatic charge-discharge (GCD) and electrochemical impedance spectroscopy (EIS) measurements were carried out using a three-electrode setup (in a 3 M KOH electrolyte) on a PGSTAT204 (Metrohm Autolab) with an FRA32M module. Ag/AgCl and Ni foam were used as the reference and counter electrodes, respectively. The working electrode was prepared from a homogeneous slurry of NiFe2O4 with carbon black as a conductive additive and polytetrafluoroethylene as a binder, in an isopropyl alcohol solution, with a weight ratio of 80:10:10, respectively.

X-ray Diffraction

The calcined nickel ferrite powders (NiFe2O4) obtained at the different pH values were analyzed by X-ray diffraction (XRD) with Rietveld refinement. The XRD patterns are shown in Fig. 1, and the refined parameters are summarized in Table 1. The lattice parameters of the NiFe2O4 phase obtained in acidic (a = 8.334(8) Å) and basic media (a = 8.333(9) Å) are in good accordance with those reported in JCPDS card no. 86-2267. Values of χ2 ≤ 2 indicated good agreement between the experimental data and the fitted structural profile. The average crystallite size, cell parameter, cell volume and density of the samples are listed in Table 1, together with the data from the standard card, JCPDS 86-2267. It was found that increasing the initial pH of the synthesis solution also increases the crystallite size, while the cell parameter remains nearly constant [28]. The synthesis method used here proved to allow a better organization of the cations in the organometallic complexes; as a result, high-purity nanocrystalline materials are expected even when the precursors are thermally treated at low temperatures [44], in comparison with other synthetic approaches that require higher temperatures to obtain NiFe2O4 [10,29,30,51]. The high degree of chelation of the metal ions in solution is thought to be responsible for the good homogeneity of the metallic constituents in the precursors, which leads to the aforementioned result [50].

Raman spectroscopy

Raman spectra of the NiFe2O4 samples are shown in Fig. 2. The spinel-type structure of space group Fd-3m has five active Raman modes, A1g + Eg + 3T2g [52], and these active modes can be used to distinguish the tetrahedral and octahedral sites [53]. The bands at 480 cm-1 and 654 cm-1 in the Raman spectra are attributed to Ni2+ ions in the octahedral and tetrahedral sites, respectively [5,7], indicating that the synthesized spinel presents inversion, a fact corroborated by the Mössbauer spectroscopy results: comparing the areas of these peaks shows that the peaks assigned to the octahedral sites have larger areas than those assigned to the tetrahedral sites. The Raman spectra are of good quality and match those reported earlier [52,56].

Field emission scanning electron microscopy

The morphology of the NiFe2O4 powders was inspected by FESEM. Fig. 3 (pH 3, a-c; pH 9, d-f) reveals that the synthesized samples consist of agglomerates of particles with undefined shapes and different sizes, which is typical of a heterogeneous nucleation process [46]. The porous structure is attributed to the release of gases during the calcination process, mostly related to the combustion of the citric acid and EDTA matrix. These observations are consistent with the Rietveld refinement of the XRD data, evidencing the polycrystalline nature of the powders. It is observed that under both pH conditions the nickel ferrite was obtained free of impurities, as confirmed by the XRD patterns.
Regarding the particle sizes, the acidic condition yields a smaller crystallite size (see Fig. 3), which can be explained by the fact that an acid pH of the solution delays the complexation between the metal ions and EDTA, owing to the competition of H⁺ with the metal ions for EDTA [44]. It is known that at high pH the OH⁻ competes with EDTA for the metal ions, which may precipitate in the form of metal hydroxides or form non-reactive hydroxide complexes; the use of citric acid as an auxiliary complexing agent serves to stabilize the complexation process [44]. In contrast to what occurs at acidic pH, the basic pH causes an increase in the size of the crystals and agglomerates: during the calcination process a smaller amount of volatile products is released, evaporation proceeds faster, and crystallization and crystal growth are consequently favored.

Optical properties
The optical band gap energy of the nickel ferrite nanoparticles is shown in Fig. 4. It is reported in the literature that the optical band gap of semiconductor oxides can be calculated by the method proposed by Wood & Tauc [57], as shown in Equation (1):

hνα = k(hν − Egap)ⁿ (1)

where α is the absorbance, hν is the photon energy, k is an absorption constant that depends on the properties of the material, Egap is the optical band gap, and n characterizes the nature of the electronic transition in the material (0.5 or 2 for allowed direct or allowed indirect transitions, respectively). Nickel ferrite is reported to be a semiconductor with an allowed direct electronic transition [27,39,[58][59][60]; hence the optical band gap can be obtained by extrapolating the linear portion of the plot of (αhν)² vs. hν to zero [21,58], as shown in Fig. 4 for the samples produced at pH 3 and 9. The band gap values obtained (in the range 1.85-2.0 eV) are consistent with those reported in the literature [31,39,58,59]. In the light of our XRD and SEM analyses, we conclude that the trend in the band gap of the synthesized nickel ferrites is due to the difference in their crystallite sizes.
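To make the Tauc extrapolation of Equation (1) concrete, here is a minimal sketch of the band gap extraction for a direct transition ((αhν)² vs. hν). The absorption data are synthetic, generated with an assumed gap of 1.9 eV inside the range reported above; the fitting window is also an illustrative choice.

```python
import numpy as np

def tauc_band_gap(hv, alpha, fit_window):
    """Estimate a direct band gap by linear extrapolation of (alpha*hv)^2 vs hv.

    hv: photon energies (eV); alpha: absorbance; fit_window: (lo, hi) in eV,
    the region where the Tauc plot is approximately linear.
    """
    y = (alpha * hv) ** 2
    lo, hi = fit_window
    mask = (hv >= lo) & (hv <= hi)
    slope, intercept = np.polyfit(hv[mask], y[mask], 1)  # straight-line fit
    return -intercept / slope                            # x-intercept where (alpha*hv)^2 = 0

# Synthetic direct-gap absorption edge with Egap = 1.9 eV (illustrative only):
hv = np.linspace(1.5, 3.0, 200)
alpha = np.where(hv > 1.9, np.sqrt(np.clip(hv - 1.9, 0, None)), 0.0) / hv
print(f"Estimated Egap = {tauc_band_gap(hv, alpha, (2.0, 2.8)):.2f} eV")  # ~1.90
```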
Mössbauer spectroscopy study
The spectra at 12 K were recorded with a maximum velocity of 12 mm/s to clearly show all the possible Fe-oxide phases. Each experimental spectrum is fitted with two sextets (each sextet comprising six peaks), ascribed to Fe³⁺ in tetrahedral (A-site) and octahedral (B-site) oxygen-coordinated Fe sites of NiFe2O4, as shown in Fig. 5. Therefore, both samples are mostly inverse spinel Ni ferrites. The hyperfine parameters found for the Fe in the A and B sites are similar to those reported for NiFe2O4 samples prepared by the oxalate and hydroxide methods [64]. The inverse spinel Ni ferrite structure has also been predicted in a theoretical work by Szotek et al. [65].

Magnetic study
The isothermal M-H measurements at 5 K are shown in Fig. 6. The samples were subjected to a maximum magnetic field of 10 T. The data indicate that the sample prepared at pH = 9 is fully magnetically saturated, whereas the sample prepared at pH = 3 does not saturate; the magnetic hysteresis confirms that the prepared ferrite material is magnetically ordered [66]. The remanence magnetization (Mr) and coercivity field (Hc) are obtained directly from Fig. 6(B): Mr = 17.9 emu/g (Hc = 233 Oe) and Mr = 16.5 emu/g (Hc = 419 Oe) for the samples prepared at pH = 9 and pH = 3, respectively. It is noticed that the ascending and descending curves merge at magnetic fields larger than 3 T. Thus, the saturation magnetization (Ms) and the magnetocrystalline anisotropy constant (K1) can be determined by fitting the high-field data (H > 8 T) using the law of approach to saturation (LAS) (Fig. 7) [67], given by Equation (2):

M = Ms(1 − a/H − b/H²) (2)

where the coefficient b is related to the magnetocrystalline anisotropy constant K1. The fitted parameters indicate that the nanoparticles exhibit an enhanced surface anisotropy due to their small particle size [68].

Electrochemical characterization
The electrochemical behavior of the NiFe2O4 nanoparticles was probed through CV, GCD, and electrochemical impedance spectroscopy (EIS) measurements in 3 M KOH electrolyte at room temperature. The CV curves were recorded at different scan rates (5-100 mV s⁻¹) in a potential window of 0-0.5 V vs. Ag/AgCl (Fig. 8a,b). The CV curves of all samples show clearly separated peaks, related to oxidation and reduction reactions occurring on the surface of the NiFe2O4-based electrodes, which implies that a faradaic charge storage mechanism predominates in our samples [9,69]. With increasing scan rate, the anodic and cathodic peaks shift toward more positive and more negative potentials, respectively, characterizing a rapid diffusion of ions in the KOH electrolyte [70]. The redox peaks, attributed to the penetration of the electrolyte into the materials, suggest that the NiFe2O4-based electrodes show battery-like behavior [71,72]. This battery-like behavior and the surface faradaic redox reaction mechanism can be attributed to the redox transitions Ni²⁺/Ni³⁺ and Fe²⁺/Fe³⁺, which may merge because of their similar redox potentials [73,74] (Fig. 8a), or not, as shown by the two reduction peaks of the NiFe2O4 prepared at pH 3, as previously reported in the literature [75,76]. The redox processes take place in two sequential steps in an alkaline medium (3 M KOH), as shown in Equation (3). For comparison, a blank Ni foam (1 cm²) was also studied at a constant scan rate of 100 mV s⁻¹ (Fig. 8c); the comparison confirms the battery-like behavior of the NiFe2O4-based electrodes [79]. Besides, the rates of the reversible surface redox reactions for the NiFe2O4-based electrode synthesized at pH 9 are faster than those for the Ni ferrite made at pH 3.

Figs. 9a and 9b display the GCD curves recorded at different specific currents (1 to 10 A g⁻¹) in a potential window of 0-0.6 V vs. Ag/AgCl. The GCD curves of all NiFe2O4-based electrodes are typical of battery-like electrodes [71,72,80]. For this reason, the specific capacities (Qs) were calculated using Equation (4):

Qs = iΔt/m (4)

where i is the discharge current (A), Δt is the discharge time (s), and m is the mass of the active material (g) [79,81]. As shown by the GCD curves, the electrode made with the powder synthesized at pH 9 has significantly longer charging and discharging times than the electrode derived from the powder obtained at pH 3. Fig. 9c shows the specific capacities of the electrodes. The superior discharge time of the NiFe2O4-based electrode prepared at pH 9 becomes clearly prominent as the specific current increases (24, 49, 65, 58, and 51 C g⁻¹ at specific currents of 0.5, 1, 3, 5, and 10 A g⁻¹, respectively), compared with the NiFe2O4-based electrode prepared at pH 3 (4.3, 2.9, 3.3, 3.5, and 5 C g⁻¹ at the same specific currents). Interestingly, the enhanced electrochemical performance of the electrode made with the powder obtained at pH 9 indicates that its morphological characteristics have contributed positively to the charge storage mechanism.
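Equation (4) is a one-line computation; the sketch below implements it and reproduces, as a consistency check, the ~49 C g⁻¹ quoted above for the pH 9 electrode at 1 A g⁻¹. The electrode mass is a hypothetical value, introduced only so that current and specific current can both appear explicitly.

```python
def specific_capacity(i_amps, dt_seconds, mass_grams):
    """Specific capacity Qs = i * dt / m (Eq. 4), in C/g, for a battery-like electrode."""
    return i_amps * dt_seconds / mass_grams

mass = 0.002                 # 2 mg of active material (hypothetical)
i = 1.0 * mass               # specific current of 1 A per gram of active material
dt = 49.0                    # discharge time (s) consistent with the reported ~49 C/g
print(specific_capacity(i, dt, mass), "C/g")   # -> 49.0
```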
The enhanced performance of the pH 9 electrode can be explained by the addition of ammonium hydroxide during the synthesis at pH 9: it promotes the combination of organic compounds with hydrogen ions, which are eliminated during the calcination process, leading to a more porous microstructure. In this way, the pores of the electrode obtained at pH 9 act as channels favoring the wettability of the electrode by the electrolyte [17], ensuring that the intermediate species (OH⁻) reach as many active sites as possible [82]. Besides, despite the proximity of the ESR values of the two electrodes (Fig. 9d), their morphological characteristics (particle size, agglomeration, and porosity) play a vital role in limiting their electrical conductivities, since grain boundaries act as scattering centers for the charge carriers [83].

Conclusions
Nickel ferrites were successfully synthesized by the combined EDTA-citrate complexation method. The effect of the synthesis pH on the powder crystallite size was assessed by Rietveld refinement analysis. Raman spectroscopy was performed to assess the distribution of the Ni and Fe cations over the tetrahedral and octahedral sites. The analysis of the UV-vis DRS data showed that the samples have band gap energies of 1.85-2.0 eV. Mössbauer spectroscopy reveals that both samples are inverse spinel-type ferrites. The magnetic data indicate that the sample prepared at pH = 9 is fully magnetically saturated. Cyclic voltammetry and charge-discharge curves indicate improved performance for the sample synthesized at pH 9. The enhanced electrochemical performance of the NiFe2O4 synthesized at pH 9 is due to an increase in faradaic reactions, driven by the difference in porosity on the surface of the sample agglomerates, and to the grain boundaries responsible for a lower charge-transfer resistance, as confirmed by impedance spectroscopy. The combined EDTA-citrate complexing method thus offers an alternative route for the production of nickel ferrites for magnetic and electrochemical applications.
FTIR STUDY OF TWO DIFFERENT LIGNITE LITHOTYPES FROM NEOGENE ACHLADA LIGNITE DEPOSITS IN NW GREECE
The FTIR spectra of both Neogene xylite and matrix lignite samples from Achlada, NW Greece, show significant differences, which are mainly evident in the aliphatic stretching region (3000-2800 cm⁻¹), where the intensities of the vibrations are reduced in the matrix lignite lithotype compared to the xylite one. The intense bands in the region 3402-3416 cm⁻¹ are attributed to -OH stretching of H2O and phenol groups. The bands at ~3697 cm⁻¹ and ~3623 cm⁻¹, as well as at ~538 cm⁻¹ and 470 cm⁻¹, which are more evident in the FTIR spectra of the matrix lignite, are attributed to the higher content of clay minerals in the samples of this lithotype. The stretching vibration at ~1032 cm⁻¹ is intense in all matrix lignite samples and broadened in the xylite ones. The FTIR spectra of all samples confirm the progressive elimination of aliphatic vibrations from the xylite lithotype to the matrix lignite one and the appearance of clay minerals in the latter. As a whole, the FTIR spectra of both xylite and matrix lignite confirm the significant differences between these two lignite lithotypes.

Introduction
At the end of diagenesis, the polycondensed organic residue in coal swamps, called lignite, also contains varying amounts of largely unaltered refractory organic material (Killops and Killops, 1993). Therefore, the organic structure of coal can be regarded as consisting of heterogeneous aromatic structures, with aromaticity increasing from low-rank (lignite, brown coal) to high-rank coals (semianthracite, anthracite). The term lignite refers to the maturity stage of coals, while the terms xylite and matrix refer to the lignite lithotype. Fourier transform infrared (FT-IR) spectroscopy is a widely used analytical technique for determining the different functional groups of coal structures. Infrared (IR) spectroscopy has been extensively employed in the characterization of the mineral and organic matter of coals (Guiliano et al., 1990; Cloke et al., 1997; D'Alessio et al., 2000; Georgakopoulos et al., 2003; Kalaitzidis, 2007; etc.). For Greek lignites, only a limited number of studies using the FTIR method for coal characterization have been published to date. Georgakopoulos et al. (2003) reported the presence of phenolic and alcoholic C-O bonds, as well as C-O-C bonds with aliphatic carbons (strong peak at 1032 cm⁻¹), in the initial xylite sample BEX from the Vevi region. In the same study, the FT-IR spectra of lignite and humic clay samples from the Apofysis-Amynteo lignite deposits, NW Greece, revealed the great abundance of C=O and C-O-R structures (1800-1000 cm⁻¹ region), as well as clay and silicate minerals in the 3600-3800 cm⁻¹ and 400-600 cm⁻¹ regions, respectively. No FTIR study of the organic beds of the Achlada lignite deposits, Florina sub-basin, NW Greece, has been published so far. The present study is part of research on both xylite and matrix lignite aimed at their structural characterization by Fourier transform infrared spectroscopy (FTIR) and focuses on their significant differences. X-ray diffraction (XRD), thermogravimetric (TG/DTG), and differential thermal analysis (DTA) were also employed for this purpose.

Geological setting
The studied samples were obtained from the lignite-bearing sequence of the Neogene Achlada lignite deposits, which are located at the eastern margins of the Florina sub-basin, NW Greece.
Methods
The lithological features of each of the studied samples were described macroscopically, and the lignite lithotype was determined according to the guidelines established by the International Committee for Coal and Organic Petrology (I.C.C.P., 1993), as well as by Taylor et al. (1998). Samples with less than 10% by volume of woody tissues were logged as matrix lignite, whereas those consisting predominantly of woody tissues were classed as xylite.

Several xylite and matrix lignite samples were examined using the FTIR method of analysis. The IR measurements were carried out using a Fourier transform infrared (FT-IR) spectrophotometer (Perkin Elmer GX-1). The FT-IR spectra, in the wavenumber range from 4000 cm⁻¹ to 400 cm⁻¹, were obtained using the KBr pellet technique. The pellets were prepared by pressing a mixture of the sample and dried KBr (sample:KBr approximately 1:200) at 8 tons/cm². Bands were identified by comparison with published studies (Wang and Griffith, 1985; Lide, 1991; Sobkowiak and Painter, 1992; Van Krevelen, 1993; Bustin, 1995, 1996; Ibarra et al., 1996; Cloke et al., 1997; Koch et al., 1998; Das, 2001; etc.). The band assignments used in this paper are listed in Table 1. The same xylite and matrix lignite samples were also examined by means of X-ray diffraction (XRD) analysis, as well as thermogravimetric (TG/DTG) and differential thermal (DTA) analysis. X-ray powder diffraction patterns were obtained using a Bruker D-8 Focus diffractometer, with Ni-filtered CuKα1 radiation (λ = 1.5405 Å), operating at 40 kV and 30 mA, while the TG/DTG/DTA curves were obtained simultaneously using a thermal analyzer (Mettler, Toledo 851) at a heating rate of 10 °C/min, in air atmosphere, over the temperature range 25-1200 °C.

FTIR study of xylite and matrix lignite samples
Representative FT-IR spectra of xylite and matrix lignite samples are shown in Fig. 2. The spectra differ significantly in the peaks due to mineral matter, as well as in those due to phenolic (C-O) and aliphatic carbon (C-H) groups. Both representative spectra show typical infrared characteristics of the organic matter of low-rank coals, including aliphatic C-H stretching bands at 2924 cm⁻¹ and 2856 cm⁻¹, C=C or C=O aromatic ring stretching vibrations at ~1610 cm⁻¹ and ~1506 cm⁻¹, as well as aliphatic C-H stretching bands at 1455 cm⁻¹, 1370 cm⁻¹, and 822 cm⁻¹. Because the functional groups present differ between the xylite and matrix lignite samples, it is more convenient to describe them separately. From the FTIR spectrum of the representative xylite sample (Fig. 2a) from the Achlada lignite deposits in NW Greece, the following conclusions were drawn:
• The broad band at 3392 cm⁻¹ is attributed to -OH stretching vibrations of hydrogen-bonded hydroxyl groups of water absorbed in clay minerals, as well as to -OH of phenol groups.
• The strong peak at ~2931 cm⁻¹ is due to asymmetric aliphatic C-H stretching vibrations of methylene (-CH2).
• The band at ~1606 cm⁻¹ is attributed either to C=O or to C=C aromatic ring stretching vibrations.
• The band at ~1505 cm⁻¹ is due to C=O stretching vibrations.
• The band at ~1454 cm⁻¹ is attributed to symmetric aliphatic C-H vibrations of methylene (CH2) and methoxyl (OCH3) groups.
• The peak at ~1370 cm⁻¹ is due to symmetric aliphatic C-H bending vibrations of methyl groups (OCH3).
• The band at ~1265 cm⁻¹ is attributed to C-O stretching vibrations.
• The peak at ~1033 cm⁻¹ is due to C-O-H bonds in cellulose, as well as to C-O stretching vibrations of aliphatic ethers (R-O-R′) and alcohols (R-OH).
• The band at ~821 cm⁻¹ is due to out-of-plane aryl ring vibrations with isolated C-H groups.
In the FT-IR spectra of the matrix lignite samples, bands corresponding to the most abundant minerals were detected, confirming the occurrence of clay minerals (e.g. kaolinite bands at ~3698 cm⁻¹, 3620 cm⁻¹, 1030 cm⁻¹, 915 cm⁻¹, 531 cm⁻¹, and 469 cm⁻¹). The small peaks at ~3698 cm⁻¹ and 3620 cm⁻¹ can be assigned to the crystal water present in the clay minerals of the matrix lignite samples (Geng et al., 2009). From the FTIR spectrum of the representative matrix lignite sample (Fig. 2b) from the Achlada lignite deposits in NW Greece, the following conclusions were drawn:
• The small peak at ~3698 cm⁻¹ arises from the in-phase symmetric stretching vibration of the OH groups, either "outer" or "inner" surface OH of the octahedral sheets, which form weak hydrogen bonds with the oxygen of the next tetrahedral layer (Balan et al., 2001). The peak at ~3620 cm⁻¹ is due to the stretching vibrations of the "inner OH groups" lying between the tetrahedral and octahedral sheets (Madejova, 2002; Geng et al., 2009).
• The broad band at ~3406 cm⁻¹ is attributed to -OH stretching vibrations of absorbed water, either of the clay minerals or of the organic matter of the matrix lignite sample.
• The bands at ~2925 cm⁻¹ and ~2855 cm⁻¹ are attributed to asymmetric and symmetric aliphatic -CH2 stretching vibrations, respectively.
• The strong band at ~1618 cm⁻¹ is attributed either to C=O or to C=C aromatic ring stretching vibrations, as well as to OH bending vibrations of adsorbed water.
• The ~1030 cm⁻¹ and 1013 cm⁻¹ bands arise from the Si-O-Si and Si-O-AlVI vibrations, respectively.
• The ~914 cm⁻¹ band arises from the bending vibrations of the "inner" OH groups of Al-OH-Al in the kaolinite structure.
• The band at ~680 cm⁻¹ could be related to aromatic out-of-plane C-H vibrations, rather than to mineral matter (Georgakopoulos, 2003).
• The band at ~531 cm⁻¹ originates from Si-O-AlVI vibrations (Al in octahedral co-ordination), while the band at ~469 cm⁻¹ is attributed to Si-O-Si bending vibrations (Van Jaarsveld et al., 2002; Madejova, 2003).
The main FTIR absorption bands of both xylite and matrix lignite samples are summarized in Table 1. The comparative FT-IR spectroscopy of the xylite and matrix lignite lithotypes showed that:
• The intense and broad hydroxyl band of the xylite sample, with a maximum at ~3392 cm⁻¹, is displaced relative to the matrix lignite band, which appears at ~3406 cm⁻¹. The latter peak is accompanied by two other small peaks around 3698 cm⁻¹ and 3620 cm⁻¹ due to mineral matter, which is more abundant in the matrix lignite.
• The predominant FTIR feature of the xylite samples (Fig. 2a), in contrast to the matrix lignite ones, is the high intensity of the aliphatic C-H stretching vibration at ~2931 cm⁻¹, which appears at slightly lower wavenumbers (2925 and 2856 cm⁻¹) in the matrix lignite samples.
Significant differences in the functional groups present also occur in the 1700-1100 cm⁻¹ region. More specifically:
• The stretching vibrations at ~1506 cm⁻¹ due to C=O structures tend to decrease in matrix lignite.
Since the bands in this region (~1506 cm⁻¹) practically disappear at the bituminous coal stage (Ibarra et al., 1996), the progressive elimination of the stretching vibrations in this region probably indicates increasing coalification from the xylite to the matrix lignite lithotype. (Table 1 fragment: aromatic out-of-plane rings with 2 neighboring C-H groups; 534 cm⁻¹, Si-O-AlVI vibrations (Al in octahedral co-ordination) of clay minerals; 468 cm⁻¹, Si-O-Si bending vibrations of clay minerals.)
• The vibrations due to the aliphatic C-H and C-O groups at ~1455 cm⁻¹, 1370 cm⁻¹, 1265 cm⁻¹, and 1224 cm⁻¹, as well as the out-of-plane vibration due to the C-H bonds at 823 cm⁻¹, also decrease in matrix lignite (Ibarra et al., 1996).
• Strong vibrations corresponding to the occurrence of clay minerals (e.g. kaolinite bands at ~3698 cm⁻¹, 3620 cm⁻¹, 1031 cm⁻¹, 915 cm⁻¹, 531 cm⁻¹, and 469 cm⁻¹) were detected in the FT-IR spectra of the matrix lignite samples, whereas only a limited number of these vibrations, all quite weak, are present in the xylite ones.
• The prominent band at ~680 cm⁻¹ in the matrix lignite sample could be related to aromatic out-of-plane C-H deformations, rather than to mineral matter (Georgakopoulos, 2003).

X-ray diffraction (XRD) analysis of xylite and matrix lignite samples
The X-ray diffraction (XRD) analysis revealed that the minerals present in the matrix lignite are mainly illite (+muscovite), kaolinite, and gypsum (Fig. 3aA), while anhydrite is present in both heated samples (Fig. 3bA,B). Illite (+muscovite) is identified by the sharp diffraction peaks at d001 = ~10 Å and d003 = ~3.34 Å, and kaolinite by its typical peaks at d001 = ~7.1 Å and d002 = ~3.5 Å. In the same figure, gypsum is identified by its characteristic peak at d020 = ~7.56 Å. Minerals in minor proportions, such as quartz (d101 = 3.34 Å and d100 = 4.26 Å) and calcite (d104 = ~3.03 Å), were also detected. It is worth mentioning that the formation of anhydrite in the heated samples, at d020 = ~3.49 Å and d210 = ~2.85 Å, indicates the presence of gypsum in the raw materials. From the X-ray diagrams (Fig. 3a) it becomes clear that clay minerals are present in the matrix lithotype, while they are absent from the xylite one. This observation confirms the FTIR results, in which the typical bands of clay minerals are absent from the xylite spectrum, and may be attributed to the nature of the xylite samples, which prevents water movement through the xylite mass. In addition, the samples were heated up to 550 °C for 2 hours in a static oven (Fig. 3b), then cooled to room temperature and examined by X-ray powder diffraction (XRD). A decrease in the intensity of the characteristic diffraction peak at d = ~7.57 Å, due to the collapse of gypsum, together with the presence of the typical peaks at d = ~3.50 Å and d = ~2.85 Å (Fig. 3b), clearly indicates the formation of anhydrite in both the xylite and matrix lignite lithotypes.

TG/DTG and DTA study of xylite and matrix lignite samples
The thermal analysis results of the Achlada low-rank coal samples, examined after heating up to 1200 °C at a rate of 10 °C/min, are shown in Fig. 4a,b. The TG curves of the examined samples showed a continuous weight loss during heating up to ~650 °C and ~900 °C for the xylite and matrix lignite samples, respectively. More specifically:
• The steep slope of the xylite TG curve in the temperature range from 200 °C to 500 °C, due to rapid weight loss, is attributed to the high devolatilization rate of the organic matter.
• In the same temperature range, a large and sharp devolatilization peak observed in the DTG curve indicates the high devolatilization rate of the xylite lithotype compared to the matrix one. This sharp peak at ~380 °C (Fig. 4a) can be attributed to the cellulose content of the xylite sample (Charland et al., 2003). Taking into consideration that this peak height can provide a relative measure of reactivity, the xylite appears to be more reactive, since its decomposition rates were higher than those corresponding to the matrix lignite (Vamvuka et al., 2004). On the other hand, the bulk of the burning process for the matrix lignite occurred mainly between 450 °C and 600 °C.
• An endothermic peak at ~380 °C (DTA curve) is associated with the decomposition of cellulose, while the decomposition of lignin is characterized by an exothermic one in the temperature range from 200 °C to 400 °C (Fig. 4a).
• In the temperature range from 200 °C to 560 °C, the weight loss of the matrix lignite is less than that of the xylite sample, as indicated by the gentle slope of its TG curve.
• The TG curve of the matrix lignite samples showed a continuous weight loss during heating up to ~900 °C, originating from the lignin content, which is quite difficult to decompose, as well as from the presence of inorganic material.
• The exothermic peak in the temperature range from 200 °C to 400 °C of the DTA curve is characteristic of lignin and can be attributed to the destruction of aliphatic groupings, CH groups, carbohydrate components, and, to some extent, oxygenous (alcoholic, phenolic) and amino groups (Kucerik et al., 2004).
• The endothermic peak at ~500 °C is attributed to the dehydroxylation of kaolinite (due to the loss of the OH groups surrounding the AlVI atoms) and the progressive transformation of the octahedrally coordinated Al in kaolinite to a tetrahedrally coordinated form in metakaolinite, through the breaking of OH bonds (Van Jaarsveld et al., 2002). A part of the weight loss in this temperature range comes from the decomposition of siderite according to the reaction FeCO3 → FeO + CO2. Chlorite and illite (+muscovite) give endothermic peaks at higher temperatures.

Conclusions
From the study of the xylite and matrix lignite lithotypes from the organic beds of the Achlada lignite deposits, Florina sub-basin, NW Greece, by FT-IR spectroscopy, in combination with X-ray diffraction and thermoanalytical methods (TG/DTG and DTA), the following conclusions were drawn:
• The FTIR spectra of all samples confirm the progressive elimination of aliphatic vibrations from the xylite lithotype to the matrix lignite one, and the appearance of clay minerals in the latter.
• According to the X-ray analysis, the minerals present in the matrix lignite are mainly illite (+muscovite), kaolinite, and gypsum, while these minerals are absent in the xylite samples. The formation of anhydrite in the heated samples indicates the presence of gypsum in both raw materials.
• The TG/DTG/DTA curves of the xylite lithotype show a higher weight loss compared to the matrix lignite lithotype, as well as a sharp DTG peak at ~380 °C accompanied by an endothermic DTA peak, which is characteristic of cellulose decomposition. In contrast, lignin decomposition is characterized by an exothermic peak in the temperature range from 200 °C to 400 °C.
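As a compact, machine-readable recap of the band assignments discussed in this study, the sketch below collects the main wavenumber/assignment pairs from the text and looks up the nearest tabulated band for an observed peak. The tolerance value and the helper function are our own illustrative choices, not part of the original study.

```python
# Band assignments collected from the text (cf. Table 1); wavenumbers in cm^-1.
ASSIGNMENTS = {
    3698: "OH stretching, kaolinite (outer/inner-surface OH)",
    3620: "OH stretching, kaolinite (inner OH)",
    3400: "-OH stretching of adsorbed water / phenols",
    2925: "asymmetric aliphatic C-H stretching (CH2)",
    2855: "symmetric aliphatic C-H stretching (CH2)",
    1610: "C=O or C=C aromatic ring stretching",
    1506: "C=O stretching",
    1455: "aliphatic C-H bending (CH2, OCH3)",
    1370: "aliphatic C-H bending (methyl)",
    1265: "C-O stretching",
    1032: "C-O-H (cellulose), C-O of ethers/alcohols; Si-O-Si in clays",
    914:  "Al-OH-Al bending, kaolinite",
    821:  "aromatic out-of-plane C-H",
    531:  "Si-O-Al(VI) vibration, clay minerals",
    469:  "Si-O-Si bending, clay minerals",
}

def assign_band(wavenumber, tolerance=15):
    """Return the tabulated assignment closest to an observed band, if within tolerance."""
    nearest = min(ASSIGNMENTS, key=lambda ref: abs(ref - wavenumber))
    return ASSIGNMENTS[nearest] if abs(nearest - wavenumber) <= tolerance else "unassigned"

print(assign_band(3406))   # -> '-OH stretching of adsorbed water / phenols'
```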
Nested Algebraic Bethe Ansatz in integrable models: recent results
This short note summarizes the works done in collaboration between S. Belliard (CEA, Saclay), L. Frappat (LAPTh, Annecy), S. Pakuliak (JINR, Dubna), E. Ragoucy (LAPTh, Annecy), N. Slavnov (Steklov Math. Inst., Moscow) and, more recently, A. Hutsalyuk (Wuppertal / Moscow) and A. Liashyk (Kiev / Moscow). It presents the construction of Bethe vectors, their scalar products, and the form factors of local operators for integrable models based on the (super)algebras gl_n, gl_{m|p} or their quantum deformations. It corresponds to two talks given by E.R. and N.S. at the workshop Correlation functions of quantum integrable systems and beyond, in honor of Jean-Michel Maillet's 60th birthday (ENS Lyon, October 2017).

Introduction
The calculation of correlation functions is one of the most challenging problems in the study of quantum integrable models. Among the various methods of solving this problem, we would like to mention the form factor approach within the framework of the algebraic Bethe ansatz (ABA). This approach was found to be very effective, in particular, in studying the asymptotic behavior of correlation functions of critical models in the works by J.-M. Maillet and coauthors [1][2][3][4][5]. In the works listed above, the correlation functions of the Lieb-Liniger model and of the XXZ spin-1/2 chain were considered. From the point of view of the ABA, these are models described by the Yangian Y(gl_2) and the quantum group U_q(ĝl_2), respectively. At the same time, there exist many models of physical interest that are described by algebras with higher rank of symmetry (see e.g. [6][7][8][9]). Our review is devoted to the recent results obtained in this field.

The first problem that one encounters when studying models with a higher rank of symmetry is the construction of the eigenvectors of the Hamiltonian. In the case under consideration, they have a much more complex form in comparison with gl_2-based models [10][11][12][13][14][15]. First, we need to build the so-called off-shell Bethe vectors (BVs), which depend on sets of complex parameters. If these parameters satisfy special constraints (Bethe ansatz equations), then the corresponding vector becomes an eigenvector of the quantum Hamiltonian (on-shell Bethe vector). The second problem is the calculation of the scalar products of off-shell BVs. In the study of correlation functions, one cannot confine oneself to treating only on-shell Bethe vectors, since the actions of operators on states, generally speaking, transform on-shell BVs into linear combinations of off-shell BVs. In view of the rather complex structure of BVs, their scalar products were also found to be difficult to compute [16,17]. The third problem consists in calculating the form factors of local operators. Formally, it reduces to scalar products of BVs. The problem, however, is to obtain representations for the form factors that are convenient for the calculation of correlation functions. In particular, such representations include determinant formulas for form factors. Finally, having convenient formulas for form factors, one can proceed to a direct calculation of the correlation functions within the framework of the form factor expansion. It should be noted, however, that the procedure for summing the form factors strongly depends on the specific model. In other words, this procedure depends on the concrete representation of the algebra describing the given quantum model.
At the same time, the first three problems can be formulated already at the level of the algebra, which makes it possible to obtain their solutions for a wide class of models within the framework of the ABA. Therefore, in this review, we focus on the first three problems.

The plan of this presentation reflects the steps described above. We first present in section 2 the framework we work with, namely the generalized integrable models. Then, we show in section 3 some results on the construction of BVs for these models. Their scalar products are dealt with in section 4; the results can be gathered in three main categories: a generalized sum formula, some determinant forms, and a Gaudin determinant for the norm of BVs. To compute form factors (FF), we will use four methods: the twisted scalar product trick, the zero mode method, the universal FF, and finally the composite model. They are presented in section 5. Finally, we conclude on open problems. Since the calculations are rather technical, we will focus on ideas and results, referring to the original papers for the details. To ease the presentation, we will mainly focus on the case of the Yangians Y(gl_n), possibly fixing n = 3. However, after each result, we will specify to what extent it extends to other cases, and refer to the relevant publications where the corresponding statements can be found.

Generalized quantum integrable models
The construction of generalized quantum integrable models relies on two main ingredients: the R-matrix and the monodromy matrix.

The R-matrix. It depends on two spectral parameters z_1, z_2 ∈ C and acts on the tensor space C^n ⊗ C^n, i.e. R(z_1, z_2) ∈ V ⊗ V with V = End(C^n). R(z_1, z_2) obeys the Yang-Baxter equation (YBE), written in V ⊗ V ⊗ V:

R^{12}(z_1, z_2) R^{13}(z_1, z_3) R^{23}(z_2, z_3) = R^{23}(z_2, z_3) R^{13}(z_1, z_3) R^{12}(z_1, z_2). (1)

Here and below, we use the auxiliary space notation, where the exponent indicates on which copies of C^n the R-matrix acts, e.g. R^{12} = R ⊗ I_n, R^{23} = I_n ⊗ R, ...

The universal monodromy matrix T(z) ∈ V ⊗ A. It contains the generators of the algebra we work with. In the present paper we focus on the (super)algebras A = Y(gl_n), U_q(ĝl_n), Y(gl_{m|p}) and U_q(ĝl_{m|p}). Note however that the construction can also be done for other algebras. The algebraic structure of A is contained in the two following relations:

T(z) = Σ_{i,j=1}^n e_{ij} ⊗ T_{ij}(z), (2)
R^{12}(z_1, z_2) T^1(z_1) T^2(z_2) = T^2(z_2) T^1(z_1) R^{12}(z_1, z_2). (3)

The first relation shows how the generators T_{ij}(z) are encoded in a matrix. The second one (called the RTT relation) provides the commutation relations of the algebra. Again, we have used the auxiliary space notation, i.e. T^1(z) = T(z) ⊗ I_n and T^2(z) = I_n ⊗ T(z). We will note n = rank A (i.e. n = m or m + p, depending on the algebra we consider). Remark that T(z) is a universal monodromy matrix, meaning that the generators of A are not represented. What is usually called a monodromy matrix corresponds to π(T(z)), where π is a representation morphism. The choice of a representation (hence of the morphism π) fixes the physical model we work with. In fact, most of the calculations can be done with mild assumptions on the type of representation used to define the model. This leads to the notion of generalized models, which are based on lowest weight representations without specifying the lowest weight.

Choice of (lowest weight) representations of A. The generalized models are defined from the universal monodromy matrix, assuming that it obeys the additional relations

T_{ii}(z)|0⟩ = λ_i(z)|0⟩, i = 1, …, n, and T_{ij}(z)|0⟩ = 0 for i > j, (4)

where |0⟩ is some reference state (the so-called pseudo-vacuum) and λ_j(z), j = 1, …, n, are arbitrary functions.
Up to normalisation of T(z), we only need the ratios r_j(z) = λ_j(z)/λ_{j+1}(z), j = 1, …, n−1. In generalized models, the r_j(z) are kept as free functional parameters, and the calculations we present are valid for arbitrary functions r_j(z).

The transfer matrix t(z). It encodes the dynamics of the model as well as its conserved quantities. For algebras, the transfer matrix is defined as

t(z) = tr T(z) = Σ_{i=1}^n T_{ii}(z). (5)

For superalgebras, it takes the form

t(z) = str T(z) = Σ_{i=1}^n (−1)^{[i]} T_{ii}(z), (6)

where [.] is the standard Z_2 gradation used for superalgebras, implicitly defined in (7). Due to (3), we have [t(z), t(z′)] = 0, so that the transfer matrix defines an integrable model (with periodic boundary conditions).

Example: the "fundamental" spin chain. To illustrate the different notions presented above, we consider the following monodromy matrix:

T(z) = R^{0L}(z, z_L) ⋯ R^{02}(z, z_2) R^{01}(z, z_1), (7)

where z̄ = {z_1, ..., z_L} are complex parameters, called the inhomogeneities. The labels 1, 2, ..., L refer to the quantum (physical) spaces of the spin chain; they are n-dimensional: on each site the "spins" can take n values. The auxiliary space 0 has the same dimension. Due to the YBE, one shows that the above monodromy matrix indeed obeys the RTT relation (3). The form of the R-matrix, for all the algebras A = Y(gl_m), U_q(ĝl_m), Y(gl_{m|p}), U_q(ĝl_{m|p}), ensures that this monodromy matrix obeys the lowest weight property (4). For the Yangian Y(gl_n), the weights read λ_1(z) = ∏_{k=1}^L f(z, z_k) and λ_j(z) = 1, j = 2, ..., n. For any algebra A, it is the simplest spin chain that one can construct. It is built on the tensor product of L fundamental representations of the underlying finite dimensional Lie algebra, and corresponds to a periodic spin chain with L sites, each of them carrying a fundamental representation of A.

To illustrate the presentation, we will focus on the Yangian Y(gl_n). Formulas will be displayed for this algebra, but we will mention when they exist for other algebras. The Yangian Y(gl_n) has a rational R-matrix

R(z_1, z_2) = I + g(z_1, z_2) P, with g(z_1, z_2) = c/(z_1 − z_2),

where I is the identity matrix, P is the permutation operator on C^n ⊗ C^n, and c is a constant. It corresponds to XXX-like models and is based on Y(gl_n). Explicitly, in the case n = 3, the R-matrix is a 9 × 9 matrix whose entries are built from the function g and from f ≡ f(z_1, z_2) = 1 + g(z_1, z_2). Note that the R-matrix for U_q(ĝl_3) has a similar form, but with different functions g_± and f.

Notation
We have already introduced the functions that enter the definition of the R-matrix and describe the interaction in the bulk. The functions presented above are of XXX type; for the XXZ type, f and g are replaced by their standard trigonometric (q-deformed) analogues. We have also seen the free functionals that (potentially) describe the representation used for the model. These are all the scalar functions we will deal with. We will use many sets of variables, and to lighten the presentation, we adopt the following notation:
• "bar" always denotes sets of variables: w̄, ū, v̄, etc.
• Individual elements of the sets have latin subscripts: w_j, u_k, etc.
• Subsets of variables are denoted by roman indices: ū_I, v̄_iv, w̄_II, etc.
• Special case of subsets: ū_j = ū \ {u_j}, v̄_k = v̄ \ {v_k}, etc.
Associated to these sets of variables, we use a shorthand notation for products of scalar functions (when they depend on one or two variables): if a function depends on a set of variables, then one should take the product over the set, e.g.

f(z, ū) = ∏_{u_j ∈ ū} f(z, u_j), f(ū, v̄) = ∏_{u_j ∈ ū} ∏_{v_k ∈ v̄} f(u_j, v_k). (8)

We use the same prescription for products of commuting operators, for example T_{12}(ū) = ∏_{u_j ∈ ū} T_{12}(u_j). By definition, any product over the empty set is equal to 1. A double product is equal to 1 if at least one of the sets is empty.
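To make the fundamental-chain construction concrete, here is a minimal numerical sketch (ours, not from the paper) that builds the monodromy matrix (7) for Y(gl_3) with L = 3 sites as an n × n block matrix over the auxiliary space, using the block form of R^{0k} = I + g P^{0k}, and checks that two transfer matrices commute. The values of n, L, c and the inhomogeneities are arbitrary assumptions.

```python
import numpy as np

n, L, c = 3, 3, 1.0                       # rank, number of sites, R-matrix constant
rng = np.random.default_rng(0)
z = rng.uniform(3.0, 4.0, size=L)         # generic inhomogeneities z_1, ..., z_L
dim = n ** L                              # dimension of the quantum space

def E(a, b, site):
    """Elementary matrix e_{ab} acting at one site of the quantum space (C^n)^{(x)L}."""
    m = np.zeros((n, n)); m[a, b] = 1.0
    out = np.eye(1)
    for k in range(L):
        out = np.kron(out, m if k == site else np.eye(n))
    return out

def monodromy(u):
    """T(u) = R^{0L}(u,z_L) ... R^{01}(u,z_1) as an n x n block matrix of operators;
    the blocks of R^{0k} = I + g P^{0k} are (R^{0k})_{ab} = delta_{ab} Id + g e^{(k)}_{ba}."""
    T = [[np.eye(dim) if a == b else np.zeros((dim, dim)) for b in range(n)] for a in range(n)]
    for k in reversed(range(L)):          # rightmost factor R^{01} is multiplied last
        g = c / (u - z[k])
        Rk = [[(np.eye(dim) if a == b else 0.0) + g * E(b, a, k) for b in range(n)]
              for a in range(n)]
        T = [[sum(T[a][x] @ Rk[x][b] for x in range(n)) for b in range(n)] for a in range(n)]
    return T

def t(u):                                 # transfer matrix: trace over the auxiliary space
    Tu = monodromy(u)
    return sum(Tu[a][a] for a in range(n))

comm = t(1.7) @ t(2.4) - t(2.4) @ t(1.7)
print("max |[t(u), t(v)]| =", np.abs(comm).max())   # numerically ~0: t(u) and t(v) commute
```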
Generalities
The framework for computing Bethe vectors was developed by the Leningrad school in the 1980s. It is the Nested Algebraic Bethe Ansatz (NABA), developed by Kulish and Reshetikhin [10,11]. It provides vectors (the Bethe vectors, BVs) that depend on some parameters (the Bethe parameters) and that are eigenvectors of the transfer matrix provided the Bethe parameters obey some algebraic equations (the Bethe Ansatz Equations, BAEs). When this is the case, the BVs are called on-shell, and off-shell otherwise. Our first goal is to provide explicit expressions for these BVs. The general strategy of the ABA is to start with the pseudo-vacuum vector |0⟩, which is itself an eigenvector of the transfer matrix. Then, one applies the 'creation operators' T_{ij}(u), i < j, to |0⟩ to build more general vectors, and seeks combinations that can be transfer matrix eigenvectors.

In the case of the "usual" XXX (gl_2) spin chain. The construction of BVs is rather simple, since we have only one 'raising' operator T_{12}(z):

B_a(ū) = T_{12}(u_1) ⋯ T_{12}(u_a)|0⟩, (10)

which leads to one set of Bethe parameters ū = {u_1, ..., u_a}. Then, asking B_a(ū) to be an eigenvector of the transfer matrix t(z) = T_{11}(z) + T_{22}(z) leads to the BAE

r_1(u_j) = f(u_j, ū_j)/f(ū_j, u_j), j = 1, …, a.

In the case of higher rank n. There are many raising operators T_{ij}(z), 1 ≤ i < j ≤ n, and the calculation becomes more tricky. In particular, there are n − 1 different sets of Bethe parameters: t̄ = {t̄^{(1)}, …, t̄^{(n−1)}}. One needs to find how to put together all the raising operators, and B_ā(t̄) turns out to be much more complicated. The expression of B_ā(t̄) is fixed by asking it to be a transfer matrix eigenvector provided the Bethe equations are obeyed. For illustration, we give the eigenvalue and the BAEs in the case of the Yangian Y(gl_n) [10,11]:

τ(z|t̄) = Σ_{i=1}^n λ_i(z) f(t̄^{(i)}, z) f(z, t̄^{(i−1)}), (11)

with the convention that t̄^{(0)} = ∅ = t̄^{(n)}. Recall that here we use the shorthand notation (8) for the products of the functions r_i and f; in particular, any product over the empty set equals 1. The BAEs hold for arbitrary partitions of the sets t̄^{(i)} into subsets {t̄_I^{(i)}, t̄_II^{(i)}} and take the form

r_i(t̄_I^{(i)}) = (f(t̄_I^{(i)}, t̄_II^{(i)}) / f(t̄_II^{(i)}, t̄_I^{(i)})) · (f(t̄^{(i+1)}, t̄_I^{(i)}) / f(t̄_I^{(i)}, t̄^{(i−1)})), i = 1, …, n−1.

Dual Bethe vectors C_ā(t̄). As already mentioned, B_ā(t̄) is a transfer matrix eigenvector provided the Bethe equations are obeyed. In the same way, one can construct dual BVs that are left eigenvectors of the transfer matrix provided the (same) BAEs are obeyed:

C_ā(t̄) t(z) = τ(z|t̄) C_ā(t̄),

where τ(z|t̄) is the same as in (11). In that case they are called on-shell dual BVs, and off-shell dual BVs otherwise. Below, we will mainly focus on the BVs B_ā(t̄), but formulas also exist for dual Bethe vectors. A simple way to get such formulas is to use an anti-morphism ψ. For the Yangian Y(gl_n) it takes the form ψ(T_{ij}(u)) = T_{ji}(u) and allows one to define the dual BV as C_ā(t̄) = ψ(B_ā(t̄)).
• An example of the use of the morphism ψ in the Yangian case can be found in [18]. Note that in the case of super-Yangians, ψ relates BVs of Y(gl_{m|p}) to dual BVs of Y(gl_{p|m}), see [19,20]. The same is true for its generalization to the U_q(ĝl_m) algebra, see e.g. [21].

Generalized models. Usually, when dealing with e.g. spin chain models, the Bethe equations are seen as a 'quantization' of the Bethe parameters t̄. Here, for generalized models, since the functions r_i(z) are not fixed, the BAEs are rather viewed as functional relations between the functions r_i(z), i = 1, ..., n − 1, and the Bethe parameters t̄^{(i)}.
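As a concrete sanity check of the gl_2 construction (10) and its BAE, here is a small numerical sketch (ours, not from the paper) for the fundamental XXX chain with L = 2 sites and one magnon, where the BAE r_1(u) = f(u, z_1) f(u, z_2) = 1 has the closed-form root u* = (z_1 + z_2 − c)/2. It reuses the block-matrix monodromy of the previous sketch, now for n = 2; the chosen inhomogeneities are arbitrary.

```python
import numpy as np

n, L, c = 2, 2, 1.0
z = np.array([0.3, 1.1])                  # inhomogeneities
dim = n ** L
f = lambda a, b: 1 + c / (a - b)

def E(a, b, site):
    m = np.zeros((n, n)); m[a, b] = 1.0
    out = np.eye(1)
    for k in range(L):
        out = np.kron(out, m if k == site else np.eye(n))
    return out

def monodromy(u):
    T = [[np.eye(dim) if a == b else np.zeros((dim, dim)) for b in range(n)] for a in range(n)]
    for k in reversed(range(L)):
        g = c / (u - z[k])
        Rk = [[(np.eye(dim) if a == b else 0.0) + g * E(b, a, k) for b in range(n)]
              for a in range(n)]
        T = [[sum(T[a][x] @ Rk[x][b] for x in range(n)) for b in range(n)] for a in range(n)]
    return T

vac = np.zeros(dim); vac[0] = 1.0         # pseudo-vacuum |0> = e_1 (x) e_1

u_star = (z[0] + z[1] - c) / 2            # one-magnon BAE root: f(u,z_1) f(u,z_2) = 1
B = monodromy(u_star)[0][1] @ vac         # Bethe vector B(u*) = T_12(u*)|0>

w = 2.7                                   # arbitrary spectral parameter
Tw = monodromy(w)
tB = (Tw[0][0] + Tw[1][1]) @ B            # act with the transfer matrix t(w)
j = int(np.argmax(np.abs(B)))
ratio = tB[j] / B[j]
tau = f(u_star, w) * np.prod([f(w, zk) for zk in z]) + f(w, u_star)  # eigenvalue (11), n = 2
print("on-shell eigenvector:", np.allclose(tB, ratio * B))           # True
print("eigenvalue matches tau(w|u*):", np.isclose(ratio, tau))       # True
```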
Expressions for Bethe vectors
There are different presentations of the BVs, each of them adapted to a different purpose.

Known formulas: the trace formula. It is the first general expression for BVs of higher rank algebras. Again, as an illustration, we present it in the case of the Yangian Y(gl_3). For a BV B_{a,b}(ū; v̄), where a = #ū and b = #v̄, one introduces a + b auxiliary spaces V = End(C³). Then, the Bethe vector is written as a trace over these auxiliary spaces of the product T(ū, v̄) R(ū, v̄), dressed by the elementary matrices e_{21} and e_{32} and applied to |0⟩, where e_{ij} are the 3 × 3 elementary matrices (acting in C³) with 1 at position (i, j) and 0 elsewhere. The trace tr_{a+b} is taken over the a + b auxiliary spaces, and T(ū, v̄) (resp. R(ū, v̄)) is an ordered product of monodromy matrices (resp. R-matrices) over these auxiliary spaces; here again we have used the auxiliary space notation, i.e. the exponents indicate in which auxiliary space(s) the matrices act.
• The trace formula was introduced by Tarasov and Varchenko for the Y(gl_m) and U_q(ĝl_m) algebras [12]. It has been generalized to the superalgebras Y(gl_{m|p}) and U_q(ĝl_{m|p}) in [22].

Recursion formulas. They allow one to build BVs with a large number of Bethe parameters from BVs having a smaller number of them. Again, in the gl_2 case (10), these recursion formulas are rather trivial, B_{a+1}(ū) = T_{12}(u_k) B_a(ū_k), while they become more intricate for higher ranks. In the case of Y(gl_3), they take the form (14) and (15): besides T_{12}(z) and T_{23}(z), they involve the operator T_{13}(z) and sums over partitions of the Bethe parameters (see [23] for the explicit expressions). Remark that, considering the underlying finite Lie algebra gl_3 with simple roots α and β, one sees that B_{a,b}(ū; v̄) "behaves" as the root aα + bβ. This reflects the fact that BVs are eigenvectors of the zero modes T_{kk}[0], see section 5.2.
• Recursion formulas are in fact a particular case of the multiple action of T_{ij}(x) on BVs. The case of Y(gl_3) can be found in [23], U_q(ĝl_3) in [24], and Y(gl_{2|1}) in [25]. They also exist for Y(gl_{m|p}) [26] and U_q(ĝl_n) [27].

Explicit formulas. Solving the recursion relations, we obtain different explicit formulas for the Bethe vectors, which depend on the recursion we use, e.g. (14) or (15) in the Y(gl_3) case. An example of such an explicit expression, (16), presents the BV as a sum over partitions of the sets ū and v̄ of monomials in the operators T_{12}, T_{23} and T_{13} applied to |0⟩, with rational coefficients built from the functions f and g.
• A fully explicit expression for BVs in the case of Y(gl_3) was presented in [23], and in [27] for U_q(ĝl_n). The generalization to superalgebras can be found in [28] for Y(gl_{2|1}) and in [26] for Y(gl_{m|p}).

Current presentation and projection method. Instead of presenting the algebra A in terms of a monodromy matrix T(z), one can use the current realization. It exists for the quantum groups U_q(ĝl_n) and U_q(ĝl_{m|p}), as well as for the double Yangians DY(gl_n) and DY(gl_{m|p}). The current realization is related to a Gauss decomposition of the monodromy matrix T(z) [29]. Using the projection method introduced by Khoroshkin, Pakuliak, and collaborators in the years 2006-10, one gets an explicit expression of BVs in a different basis. As an illustration of the projection method, consider the current realization of DY(gl_3). Then, BVs can be written as projections of products of currents, where:
1. k_1(z) and k_2(z) are the Cartan generators;
2. F_1(z) is the generator associated to the first simple (negative) root;
3. F_2(z) is the generator associated to the second simple (negative) root;
4. P_f^+ is the projector of the Borel subalgebra on the positive modes.
• The construction of BVs in the current presentation was initiated for the U_q(ĝl_n) algebra in [13][14][15] and then generalized to the (super)Yangian Y(gl_{m|p}) case in [26].

Relations between the different expressions of Bethe vectors
All the formulas presented in section 3.2 are related:
1. The explicit expressions solve the recursion formulas;
2. The trace formula obeys the recursion formulas;
3. The recursion formulas uniquely fix the BVs, once the initial values for Y(gl_3) are known;
4. The projection of currents coincides with the trace formula.
Thus, they all describe the same (off-shell) BVs.

Normalization of BVs. The main property of BVs is that they become eigenvectors of the transfer matrix when the Bethe parameters satisfy the BAEs. Since any eigenvector is defined up to a normalization factor, the BVs also have a freedom in their normalization, and the choice of normalization is a question of convenience. In the above formulas, the normalization was chosen as follows. It follows from the explicit representation (16) that the BV is a polynomial in the T_{ij} (i < j) acting on |0⟩. Among the terms of this polynomial, there is one term that does not depend on the operator T_{13}. We call this monomial the main term and denote it by B̃_{a,b}(ū; v̄). Thus,

B_{a,b}(ū; v̄) = B̃_{a,b}(ū; v̄) + ⋯,

where the ellipsis refers to all the terms containing at least one operator T_{13}, and we fix the normalization of the BV by the explicit form of the main term. This normalization is convenient for the recursion formulas, for the formulas of the action of T_{ij}(z) on BVs, and for the calculation of scalar products of BVs. We use similar conventions for the normalization in the cases Y(gl_n), U_q(ĝl_n), and Y(gl_{m|p}). In all these cases the BV contains a term that depends on the operators T_{i,i+1} only; we call it the main term. In the case of Y(gl_n), the main term is proportional to T_{12}(t̄^{(1)}) T_{23}(t̄^{(2)}) ⋯ T_{n−1,n}(t̄^{(n−1)})|0⟩, with a normalization factor built from the functions f and the vacuum eigenvalues (see (19) and (21)). In the case of U_q(ĝl_n) the normalization is the same, but one should take the q-deformed analogs of the f-functions. For the superalgebra Y(gl_{m|p}), the normalization looks similar, but it takes into account the grading (see [20]).

Sum formula
The scalar product of generic off-shell BVs can be presented in a form known as the sum formula (24), in which the scalar product is written as a sum over partitions of the Bethe parameters; Korepin, and then Reshetikhin, were the first to obtain such a formula (see the references below). In (24), the sum is taken over all possible partitions of each set t̄^{(j)} and s̄^{(j)} into subsets {t̄_I^{(j)}, t̄_II^{(j)}} and {s̄_I^{(j)}, s̄_II^{(j)}}. The dependence on the monodromy matrix vacuum eigenvalues r_j is given explicitly. The coefficients W_part are rational functions of the Bethe parameters s̄ and t̄. They are completely determined by the R-matrix; thus, they do not depend on the specific representative of the generalized model. The first formula of this type, corresponding to the Y(gl_2) and U_q(ĝl_2) based models, was obtained by Korepin. For these models, one can derive the sum formula using the explicit form of the BVs. However, in models with a higher rank of symmetry, the use of explicit formulas for BVs leads to too cumbersome expressions. A generalization of the sum formula to the Y(gl_3) case was done by Reshetikhin via a special diagram technique. In the cases of Y(gl_n), Y(gl_{m|p}), and U_q(ĝl_m), the sum formula was derived by the use of a coproduct formula for BVs [20]. This method allows one to express an arbitrary coefficient W_part in terms of so-called highest coefficients. Namely, defining the highest coefficients through the extreme partitions (25), the general coefficient W_part(s̄_I, s̄_II | t̄_I, t̄_II) takes the form of a product of two highest coefficients, dressed by f-functions of the subsets (26). The highest coefficients are known explicitly for Y(gl_3), Y(gl_{2|1}), and U_q(ĝl_3). For higher rank algebras they can be constructed via special recursions. For U_q(ĝl_3), the highest coefficient is given in [31], and the full formula in [32]. The super-Yangian Y(gl_{2|1}) was dealt with in [33], while the general cases of Y(gl_{m|p}) and U_q(ĝl_n) were respectively presented in [20] and [21]. The expression (24) is valid for all BVs (on-shell or off-shell). However, it is difficult to handle, especially when considering the thermodynamic limit, so we look for determinant expressions for S_ā(s̄|t̄).
Determinant formula
It is known that for the Y(gl_2) and U_q(ĝl_2) based models the sum over partitions in (24) can be reduced to a single determinant if one of the BVs is on-shell [34]. An analog of this determinant representation for higher rank algebras is not known to date. However, determinant formulas for the scalar products have been obtained in some particular cases. One needs to impose more restrictive conditions on the BVs, as we shall see below. The results have been obtained only for some specific algebras, which we describe at the end of this subsection.

Consider the particular case of Y(gl_3) and the scalar product of an on-shell Bethe vector with a twisted dual on-shell Bethe vector. To define the twisted dual on-shell Bethe vector, we consider the twisted transfer matrix, obtained from t(z) by a diagonal twist of the monodromy matrix. The twisted dual BV is a left eigenvector of the twisted transfer matrix, with a twisted eigenvalue, provided the twisted BAEs are satisfied. Then, the scalar product can be written in determinant form (29): up to the prefactors ∆_n and ∆′_n given by (18), it is the determinant of an (a+b)×(a+b) matrix M. It is worth mentioning that, although this determinant representation is valid only for a very specific case of the scalar product, it can be used as a generating formula for determinant representations of all form factors of the monodromy matrix entries (see sections 5.1, 5.2). Unfortunately, for models with higher rank symmetry such determinant representations are not known, except for the norms of on-shell BVs, which we present now.

Norm of on-shell BVs: Gaudin determinant
In this section we give a determinant formula for the norm of an on-shell BV. The case of the models described by the Y(gl_2) and U_q(ĝl_2) algebras was considered in [30], where a Gaudin hypothesis (see [40], [41]) was proved. A generalization of this result to the Y(gl_3) based models was given in [16]. Here we focus on the Y(gl_n) case to lighten the presentation.

The Gaudin matrix. To introduce the Gaudin matrix, we first rewrite the BAEs in the form Φ_k^{(i)}(t̄) = 1, k = 1, …, a_i, i = 1, …, n−1. Then, the Gaudin matrix G is a block matrix (G^{(i,j)})_{i,j=1,…,n−1}, where each block G^{(i,j)}, of size a_i × a_j, has entries

G^{(i,j)}_{kl} = −c ∂ log Φ_k^{(i)}(t̄) / ∂ t_l^{(j)}.

Norm of B_ā(t̄). For an on-shell B_ā(t̄), the square of its norm is traditionally defined as S_ā(t̄) = C_ā(t̄) B_ā(t̄), where C_ā(t̄) is its dual BV. One then has S_ā(t̄) ∝ det G, with an explicit prefactor built from the functions f evaluated at the Bethe parameters, and with B_ā(t̄) normalized as in (19) and (21). Note that if B_ā(t̄) and C_ā(s̄) are on-shell, we have C_ā(s̄) B_ā(t̄) = δ_{s̄,t̄} S_ā(t̄).
• Determinant representations for the norm of BVs are described in [42] for Y(gl_n) and Y(gl_{m|p}). Similar representations for U_q(ĝl_n) can be found in [21].
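As an illustration of the Gaudin matrix just defined, the sketch below (ours, not from the paper) assembles G by finite differences for the simplest possible situation: the gl_2 fundamental chain with L = 2 and a single Bethe root, whose on-shell value is known in closed form. The sign convention for Φ and the −c prefactor follow the reconstruction above; the paper's exact prefactor in the norm formula is not reproduced here.

```python
import numpy as np

c, L = 1.0, 2
z = np.array([0.3, 1.1])                       # inhomogeneities of the fundamental chain
f = lambda a, b: 1 + c / (a - b)

def Phi(j, roots):
    """BAE function for gl_2: Phi_j = r_1(t_j) * prod_{k != j} f(t_k, t_j)/f(t_j, t_k);
    on-shell roots satisfy Phi_j = 1."""
    val = np.prod([f(roots[j], zk) for zk in z])   # r_1(t) = prod_k f(t, z_k)
    for k in range(len(roots)):
        if k != j:
            val *= f(roots[k], roots[j]) / f(roots[j], roots[k])
    return val

def gaudin(roots, h=1e-6):
    """Gaudin matrix G_{jk} = -c d(log Phi_j)/d(t_k), via central finite differences."""
    a = len(roots)
    G = np.zeros((a, a))
    for jj in range(a):
        for kk in range(a):
            tp, tm = roots.copy(), roots.copy()
            tp[kk] += h; tm[kk] -= h
            G[jj, kk] = -c * (np.log(Phi(jj, tp)) - np.log(Phi(jj, tm))) / (2 * h)
    return G

roots = np.array([(z[0] + z[1] - c) / 2])      # the one-magnon on-shell root for L = 2
print("BAE check, Phi =", Phi(0, roots))       # ~1: the root is on-shell
print("det G =", np.linalg.det(gaudin(roots))) # the determinant entering the norm formula
```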
Form factors (FF)
Form factors are the building blocks for the study of correlation functions. Here we consider the FF of the monodromy matrix entries,

F_{ij}(z|s̄; t̄) = C_{ā′}(s̄) T_{ij}(z) B_ā(t̄),

where both C_{ā′}(s̄) and B_ā(t̄) are on-shell BVs. The cardinalities ā′ = {a′_1, …, a′_{n−1}} of the Bethe parameters of the dual BV depend on the operator T_{ij}(z). Since the FF is based on the monodromy matrix, we call diagonal (resp. off-diagonal) the FF related to diagonal (resp. off-diagonal) entries of T(z). To compute these FF, we use four different techniques:
1. The twisted scalar product trick (which leads to diagonal FF);
2. The zero mode method (to deduce off-diagonal FF);
3. The universal FF (for the general form of the FF);
4. The composite model (for FF of local operators).
We describe all these techniques below, again in the case of the Yangian Y(gl_n), to give simple formulas.

Twisted scalar product trick
The diagonal FF F_{jj}(z|s̄; t̄) are computed using the "twisted scalar product" trick. Consider a twist matrix M = diag{κ_1, …, κ_n} and define a twisted transfer matrix as t_κ(z) = Σ_{j=1}^n κ_j T_{jj}(z). From the simple identity

C^κ_ā(s̄) (t_κ(z) − t(z)) B_ā(t̄) = (τ_κ(z; s̄) − τ(z; t̄)) C^κ_ā(s̄) B_ā(t̄), (35)

one extracts F_{jj} by taking the derivative with respect to κ_j at κ̄ = 1, where κ̄ = 1 means that κ_j = 1 for j = 1, …, n. The function τ_κ(z; s̄) is the eigenvalue of the twisted dual on-shell BV C^κ_ā(s̄). It is given by equation (12), in which one should replace t̄^{(j)} → s̄^{(j)} and λ_j(z) → κ_j λ_j(z). Hence, if one knows the form of the twisted scalar product, one can deduce the diagonal FF.
• The same trick can also be applied in the cases Y(gl_{m|p}) and U_q(ĝl_n). However, determinant representations for the twisted scalar products S^κ_ā(s̄|t̄) are, to date, rather scarce (see section 4.2). They provide determinant expressions for the diagonal FF in Y(gl_3) and Y(gl_{2|1}) models, see [39] and [19] respectively. For U_q(ĝl_3) models, due to the special twist, only F_{2,2}(z|s̄; t̄) is known [37]. The other FF are missing up to now.

Zero mode method
Zero modes of the monodromy matrix. They correspond to the finite dimensional Lie subalgebra embedded in A. For instance, they form a gl_n Lie subalgebra of Y(gl_n). Typically they are defined as

T_{ij}[0] = lim_{w→∞} (w/c) (T_{ij}(w) − δ_{ij}),

but depending on the model and on A, some normalisation can be implied before taking the limit w → ∞. The monodromy matrix carries a representation of this Lie subalgebra: the commutators [T_{ij}[0], T_{kl}(w)] reproduce the gl_n action on the entries of T(w).

Bethe vectors and zero modes. The zero modes occur naturally in the BVs when one of the Bethe parameters is sent to infinity: the limit of a BV with one infinite parameter is expressed through the action of a zero mode on the BV built from the remaining parameters. Here and below, to simplify the formulas, we omit the subscripts of the BVs that refer to the cardinalities of the Bethe parameters. In the Y(gl_n) and Y(gl_{m|p}) cases, the BAEs are compatible with the limit t^{(j−1)}_k → ∞ for j and k fixed. This implies that if the BV B(t̄) is on-shell, then so is B({∞, t̄}). Moreover, still for the Y(gl_n) and Y(gl_{m|p}) cases, on-shell BVs obey a highest weight property with respect to the zero modes. Indeed, if B(t̄) and C(s̄) are on-shell, with t̄ and s̄ finite, then

T_{ij}[0] B(t̄) = 0 for i < j, and C(s̄) T_{ij}[0] = 0 for i > j.

From these properties, we can elaborate a method to relate different FF. For obvious reasons, we call it the zero mode method.

The zero mode method (Y(gl_n) and Y(gl_{m|p}) cases). The basic idea behind the zero mode method is to use the Lie algebra symmetry generated by the zero modes and the highest weight property of on-shell BVs to obtain relations among form factors. To illustrate the method, we show it on an example in the Y(gl_n) case, starting from a diagonal FF. Symbolically, we write

lim_{w→∞} (w/c) F_{jj}(z|s̄; {w, t̄}) = F_{j−1,j}(z|s̄; t̄), w ∈ t̄^{(j)}. (39)

Similarly, with the zero mode method, one gets analogous relations for the other off-diagonal FF, and so on. Thus, all the off-diagonal FF can be computed starting from the diagonal ones. Moreover, from the limit

lim_{w→∞} (w/c) F_{j−1,j}(z|{w, s̄}; t̄) = F_{j,j}(z|s̄; t̄) − F_{j−1,j−1}(z|s̄; t̄), w ∈ s̄^{(j)}, (40)

one deduces that only one diagonal FF is needed to compute all the FF based on the monodromy matrix.
• These considerations were developed in [43] for Y(gl_3), but the same considerations apply to Y(gl_{m|p}). Thus, the determinant representation for the scalar product (29) does generate determinant formulas for all FF in the models described by Y(gl_3) [18,44] and its super-analogs Y(gl_{2|1}) and Y(gl_{1|2}) [19]. However, a generalization of this method to the case of the U_q(ĝl_n) algebra is not straightforward.

Universal Form Factors
Consider the case of the Y(gl_n) or Y(gl_{m|p}) algebra. Let C(s̄) and B(t̄) be on-shell and such that their eigenvalues τ(z|s̄) and τ(z|t̄) are different. Then the ratio

F_{ij}(z|s̄; t̄) / (τ(z|s̄) − τ(z|t̄)) (41)

is independent of z and does not depend on the monodromy matrix vacuum eigenvalues.
It depends solely on the R-matrix, and is thus model independent. We call it the universal FF. One can show that the relations (39) yield similar relations for the universal FF. On the other hand, it follows from (35) that the diagonal universal FF are related to the twisted scalar product through the κ_j-derivatives of S^κ(s̄|t̄) at κ̄ = 1 (42). Thus, computing S^κ(s̄|t̄), we can find all the universal FF. Since the universal FF are completely determined by the R-matrix, they do not depend on the behavior of the monodromy matrix T(z) at z → ∞. Therefore, they can be used to calculate ordinary FF in models for which the BAEs do not admit infinite roots. In this way, one can circumvent the z → ∞ limit even for models where the zero modes method formally fails. Note that in the models described by the U_q(ĝl_n) algebra, the universal FF exist for the diagonal operators T_{jj}(z) only.

Composite models
In models for which an explicit solution of the inverse scattering problem is known [45][46][47], the FF of the monodromy matrix entries immediately yield the FF of local operators. In other cases, the FF of local operators can be calculated within the framework of the composite model [48]. In this model, the total monodromy matrix T(z) is presented as a product of two partial monodromy matrices T^{(2)}(z) and T^{(1)}(z),

T(z) = T^{(2)}(z) T^{(1)}(z), (43)

with

T^{(1)}(z) = R^{0m}(z, z_m) ⋯ R^{01}(z, z_1), T^{(2)}(z) = R^{0L}(z, z_L) ⋯ R^{0,m+1}(z, z_{m+1}), (44)

where m ∈ [1, L[ is an intermediate site of the chain. One can also consider continuous composite models. Then the total monodromy matrix T(z) is still given by (43), while the partial monodromy matrices T^{(j)}(z) should be understood as continuous limits of the products of the L-operators in (44). We assume that each partial T^{(j)}(z) possesses a pseudo-vacuum vector |0⟩^{(j)}, so that |0⟩ = |0⟩^{(2)} ⊗ |0⟩^{(1)}, and that T^{(ℓ)}(z) acts on |0⟩^{(ℓ)} according to (4), with partial vacuum eigenvalues λ^{(ℓ)}_i(z) and ratios r^{(ℓ)}_i(z). Similarly to what was done in section 5.2, one can introduce partial zero modes T^{(ℓ)}_{ij}[0]. The FF of the first partial zero modes T^{(1)}_{ij}[0] can then be expressed through the universal FF, multiplied by products of the functions r^{(1)}_k evaluated at the Bethe parameters (46), where the r^{(1)}_k are the ratios of the vacuum eigenvalues of T^{(1)}(z), and we used the shorthand notation (8) for the products of these functions. It is assumed in (46) that the on-shell BVs C(s̄) and B(t̄) have different eigenvalues. Since the number m of the intermediate site is not fixed, the FF of the first partial zero modes give immediate access to the FF of the local operators (L_m)_{ij}[0], obtained essentially as the difference of two partial zero modes for adjacent values of the intermediate site, where the additional superscript m stresses that the partial zero mode T^{(1,m)}_{ij}[0] corresponds to the partial monodromy matrix built from the first m sites.
• These calculations for the Yangian Y(gl_3) can be found in [49][50][51][52], with application to the two-component Bose gas. Similar equations for the FF of local operators in the case of the Yangians Y(gl_{2|1}) and Y(gl_{1|2}) were obtained in [53]. Most probably, the FF of local operators in the general Y(gl_n) and Y(gl_{m|p}) cases can be expressed in terms of the universal FF in the same way.

Conclusion
Concerning the points described in the present review, many directions remain to be developed. Among them, one can distinguish the following ones.
(i) Finding a simpler expression for the scalar product of off-shell BVs. We have already mentioned the determinant expressions, which seem to be well adapted to the calculation of correlation functions and to the thermodynamic limit. Such expressions, even in the case of U_q(ĝl_3), are thus highly desirable. On this point, note the determinant expression for the XXX model in the thermodynamic limit found by Bettelheim and Kostov [54]. Remark also the approach by N. Gromov et al. using a single 'B'-operator [55], for Y(gl_n) with fundamental representations.
(ii) Another way to get simple expressions for scalar products could be to use an integral representation. A first step has been done by M. Wheeler in [17].
Remark also that the projection method, in its current presentation, provides an integral representation, see e.g. [56].

(iii) Once determinant expressions are known for scalar products in the case of (super) Yangians, the zero mode method allows one to obtain similar expressions for the form factors. It would be desirable to have a similar method for the U_q(ĝl_n) case. It seems that the zero mode method can be adapted to this case: we hope to come back to this point in a future publication.

Obviously, all these points are only the first step towards the complete calculation of correlation functions and their asymptotics. As mentioned in the introduction, this calculation depends specifically on the model one wishes to study. Among the possible applications, one can mention the multi-component Bose gas, the t-J model, and the integrable approach to amplitudes in super-Yang-Mills theories. Finally, the case of other quantum algebras, based on orthogonal or symplectic Lie algebras, is also a direction that deserves to be studied.
Antibiotic resistance modifying ability of phytoextracts in anthrax biological agent Bacillus anthracis and emerging superbugs: a review of synergistic mechanisms

Background and objectives: The chemotherapeutic management of infections has become challenging due to the global emergence of antibiotic resistant pathogenic bacteria. The recent expansion of studies on plant-derived natural products has led to the discovery of a plethora of phytochemicals with the potential to combat bacterial drug resistance via various mechanisms of action. This review paper summarizes the primary antibiotic resistance mechanisms of bacteria and also discusses the antibiotic-potentiating ability of phytoextracts and various classes of isolated phytochemicals in reversing antibiotic resistance in the anthrax agent Bacillus anthracis and emerging superbug bacteria.

Methods: Growth inhibitory indices and the fractional inhibitory concentration index were applied to evaluate the in vitro synergistic activity of phytoextract-antibiotic combinations in general.

Findings: A number of studies have indicated that plant-derived natural compounds are capable of significantly reducing the minimum inhibitory concentration of standard antibiotics by altering the drug-resistance mechanisms of B. anthracis and other superbug infection causing bacteria. The phytochemical compounds allicin, oleanolic acid, epigallocatechin gallate and curcumin, and Jatropha curcas extracts, were exceptional synergistic potentiators of various standard antibiotics.

Conclusion: Considering these facts, phytochemicals represent a valuable and novel source of bioactive compounds with potent antibiotic synergism to modulate bacterial drug resistance.

Background

The emergence of multidrug-resistant bacteria has become a threat to global public health, resulting in inadequate treatment options for infectious diseases. The chemotherapeutic management of bacterial infections has become more challenging in recent years due to the development of antimicrobial resistance in pathogenic bacteria, certain populations of which are evolving into formidable super drug-resistant strains, known as superbugs, that are capable of causing serious illnesses [1,2]. The bacterium Bacillus anthracis has been an organism of particular interest in the recent past due to its ability to cause the life-threatening illness anthrax. Anthrax can be difficult to treat once it has progressed to advanced stages, owing to the virulence of the pathogen, and research has therefore been undertaken to discover novel therapies to treat the disease more safely and effectively [3]. Antibiotics have the ability to restrict the growth and replication of bacteria by inhibiting bacterial cellular components associated with the synthesis of cell walls, proteins and nucleic acids, along with the suppression of folate metabolism and depolarization of the cell membrane [4,5]. The successful management of bacterial infections has been achieved in the past thanks to the development of antibiotic agents. It seems, however, that the golden era of these synthetically produced antibiotics has come to a near end due to their irrational use.
Although antibiotics have broad-spectrum activities, bacteria have evolved to combat the action of these agents through various resistance mechanisms, such as the production of antibiotic-inactivating enzymes, the modification and mutation of antibiotic binding sites, the suppression of porin-associated permeability of the bacterial cell membrane, and the expression of efflux pumps [6,7]. Another ingenious strategy for overcoming antimicrobials is the deployment of autoinducer molecules by certain bacteria to mediate quorum sensing. Quorum sensing (QS) is a bacterial cell-to-cell biochemical communication process that involves the activation of specific signals to coordinate pathogenic behaviors and assist bacteria in acclimatizing to unfavorable conditions imposed by the proximal environment in which they exist. Signal molecules responsible for mediating bacterial quorum sensing include autoinducing peptides, autoinducer-2 and acyl-homoserine lactone. Quorum sensing in bacteria can facilitate biofilm formation, which impedes the penetration of antibiotic molecules, and it is therefore a major contributor to antibiotic resistance [8]. Despite the complex nature of issues related to antibiotic resistance, no individual nation has independently succeeded in addressing this major public health problem to date.

The rise of antimicrobial resistance has generated interest in research focusing on the significance of medicinal plants and their phytochemical compositions. Long before the invention of modern antibiotics, folklore medicine made use of the therapeutic efficacy of medicinal herbs and integrated their potential into the treatment of infectious diseases. The World Health Organization (WHO) reports that about 80% of the populations residing in Asia and Africa rely on traditional medicine for their primary health care needs [9]. These traditional therapeutic methods have been considered possibly the safest alternative sources of antimicrobial agents available, and the involvement of medicinal herbs in treating infectious diseases has paved the way for the development of modern medicine [10]. Plants are reservoirs of chemical agents with therapeutic properties beneficial to mankind [11]. Bioactive compounds naturally extracted from whole plants or from different parts of plants, like leaves, bark, stems, roots, fruits, fruit rind, seeds and flowers, can serve as novel sources for the management of infectious diseases caused by pathogenic microorganisms, as an alternative to synthetic drugs [12,13]. Certain phytochemical compounds are capable of interacting synergistically with antibiotics already available, which can potentially be an effective way to combat the phenomenon of resistance. There is evidence that combinations of natural compounds from plants can facilitate or improve the interaction of antibiotics with their target in the pathogen and thus reduce the emergence of resistance through mechanisms of resistance modification [1]. This combined therapeutic strategy can also reduce drug- or dose-related side effects for the patient, since lower concentrations of both agents can be used.
Therefore, the objectives of this review are to provide an update of the literature on the synergism between antibiotics and plant extracts, to present experimental data on the antibiotic-potentiating mechanisms of plant-derived compounds, and to summarize the scientific evidence supporting the pre-clinical application of synergistic combinations of plant compounds, so as to serve as a starting point for the discovery of novel antibacterial agents capable of neutralizing infections and reversing antibiotic resistance in the anthrax causative organism B. anthracis and in the emerging bacterial superbugs classified by the Centers for Disease Control and Prevention (CDC).

Antibacterial drug discovery and development

Long before the twentieth century, the management and treatment of infectious diseases were based mainly on folk medicine. There is evidence that complex mixtures with antibacterial properties have been applied among ancient populations for over two thousand years [14][15][16]. Post-mortem analysis has revealed traces of a tetracycline-like compound incorporated into the dental remains of early Sudanese populations that lived around A.D. 350-500. The presence of this compound in their remains suggests that these populations may have used it as a medicine or included it in their diet [17,18]. A similar finding was reported in ancient populations living in the Dakhleh Oasis, in Egypt, around the time of the late Roman Empire [19]. There is a popular anecdote describing the use of the red soil of the Hashemite Kingdom of Jordan as a source of antimicrobials to treat skin infections. Interestingly, the actinomycete bacteria generally found in such soils produce modern antibiotics such as actinomycin, streptomycin, erythromycin, nystatin, amphotericin and vancomycin [20,21]. Traditional Chinese medicine comprises a large repertoire of medicinal herbs used for millennia in the treatment of many infections caused by bacteria [22,23]. The application of active compounds from these ancient medicinal herbs has enriched the arsenal of antibacterial agents used in modern medicine [24].

The modern era of antibacterial agents began with the discovery of penicillin, extracted from a mould specimen known as Penicillium notatum, by Sir Alexander Fleming in 1928. Penicillin caught the attention of many scientists of the time, as the compound was able to stop the growth of a wide range of bacteria [25][26][27]. At the time of its discovery, penicillin became the most popular therapeutic agent due to its wide application and the magnitude of its therapeutic outcomes. The technologies used and developed to produce penicillin became the basis for the production of all subsequent antibiotics currently in use [28,29]. Most antibiotics currently in existence, such as cephalosporins, penicillins, macrolides, tetracyclines, vancomycin, teicoplanin, daptomycin and rifamycin, have been derived from natural products [30,31]. According to the World Health Organization, more than 11% of modern drugs are derivatives of plants [32]. Advanced technologies such as high-throughput screening, combinatorial chemistry and genomic applications have been implemented to invent new antibacterial molecules to reverse antibiotic resistance [33]. An investigation conducted by Kim Lewis in 2001 led to the key discovery of synergistic compounds of plant origin.
His finding elucidated that a compound known as 5'-methoxyhydnocarpin-D, isolated from extracts of Berberis fremontii, was able to potentiate the antibacterial action of berberine by inhibiting the activity of multidrug efflux pumps in Gram-positive and Gram-negative bacteria [34][35][36]. Novel approaches have also been taken to combat antibiotic resistance, such as the development of therapeutics based on anti-QS agents or bacterial quorum quenchers from natural products, and bacterial vaccines. Unlike conventional treatments involving antibiotics, these novel therapies can be more potent and robust in combating advanced conditions of antibiotic resistance and bacterial virulence [37]. The utility of bacterial vaccines is well evident in the control of diseases like tetanus, diphtheria, cholera, bacterial meningitis, typhoid fever and even anthrax, where measures were taken to contain an outbreak of the disease in Swedish nature reserves in 2011 by vaccinating the resident animals against anthrax when treatment with penicillin was ineffective. It has been predicted that such approaches will be largely impervious to resistance in bacterial populations and will more robustly prevent the spread of infection [38,39].

Anthrax and the biological agent Bacillus anthracis

Anthrax is a serious enzootic infectious disease transmitted from infected livestock animals to humans. The biological agent responsible for causing anthrax is a Gram-positive, endospore-forming, bacilliform bacterium known as Bacillus anthracis. Anthrax is primarily acquired through direct contact and the consumption of contaminated meat. The most common forms of the disease under natural settings are cutaneous and gastrointestinal anthrax. A rarer means of acquiring the disease is the inhalation of bacterial endospores, which can result in pulmonary anthrax. The endospores remain dormant until inhaled by a host and internalized, whereupon they mature into toxin-producing virulent bacterial cells in the thoracic lymph nodes and cause acute and severe infection. The tactical delivery of concentrated endospores obtained from wild-type B. anthracis is a strategy used in biological warfare and bio-terrorism. The disease is endemic to agricultural regions of southwestern and central Asia, Central and South America, southern sub-Saharan Africa, the Caribbean and Eastern Europe [40]. The largest agricultural outbreak of anthrax was reported from Zimbabwe, with cases of infection exceeding 10,000 between 1979 and 1985. It has been reported that nearly all cases of infection in the Zimbabwe outbreak were cutaneous anthrax [41]. In 1979, about 79 cases of inhalational anthrax were reported from the Sverdlovsk region of Russia, of which 68 ultimately proved fatal. The Sverdlovsk incident was the largest outbreak of human anthrax ever documented and is believed to have been caused by an accident at a Soviet military-affiliated microbiology facility that led to the release of aerosolized anthrax spores [42]. The most recent incident of anthrax bioterrorism was reported in 2001, in which concentrated spores of highly virulent B. anthracis were delivered in postal letters, resulting in five fatalities among the 22 infected [43]. Disease incidence was significantly reduced during the twentieth century.
Nevertheless, anthrax continued to occur globally outside the United States, with approximately 2000 cases annually by the end of the twentieth century. A majority of these worldwide cases were associated with cutaneous anthrax [42]. The standard therapy for anthrax includes antibiotics, with penicillin G procaine, doxycycline or ciprofloxacin being first-line treatments for the infection [44]. Newer anthrax therapeutic agents, like the monoclonal antibody-based Anthim (obiltoxaximab) and the anthrax immune globulin-based Anthrasil, have also been deployed alone or in combination with antibiotics to control the infection more effectively [41,45]. In 2015, the Food and Drug Administration (FDA) approved BioThrax, an immunologically active B. anthracis antigen vaccine, to prevent the disease. It is the only anthrax vaccine that has received FDA approval to date [46]. The treatment of acute anthrax can be difficult due to the virulence properties of B. anthracis: the bacterium and its endospore are both encapsulated with a protective polysaccharide coating that allows immune evasion from phagocytes like macrophages. The bacterial exotoxins secreted by B. anthracis, known as edema toxin and lethal toxin, cause diarrhea and flu-like symptoms. The entry of these exotoxins into host cells and the initial pathogenesis are facilitated by a major virulence factor of B. anthracis known as the protective antigen. The polysaccharide capsule and other associated virulence factors are expressed by the bacterial DNA plasmids pXO1 and pXO2 present in B. anthracis. Although the bacterium can be eradicated by antibiotic agents, the toxins produced by the bacterium remain nonresponsive to antibiotic therapy. Hence, the CDC recommends a combined course of rapid antibiotic therapy involving two or three antibiotics along with anthrax anti-toxin therapy, in order to prevent the accumulation of exotoxins in the body [47][48][49]. Antibiotic resistance in B. anthracis has been documented. One study showed that 11.5% of 96 isolates of B. anthracis recovered in France between 1994 and 2000 displayed resistance to amoxicillin and penicillin G. The same investigation revealed that all 96 isolates of B. anthracis tested were resistant to cotrimoxazole [50]. Although the drug resistance mechanisms of B. anthracis have not yet been fully elucidated, a study conducted by Price et al. showed that mutations in the topoisomerase-encoding genes gyrA, gyrB and parC can mediate cross-resistance to fluoroquinolone antibiotics like ciprofloxacin in B. anthracis [51]. Another study stated that B. anthracis carries the genes bla1 and bla2, which are capable of expressing beta-lactamases against β-lactam antibiotics [52]. Despite the treatment of anthrax infection with currently available antibiotics, the introduction of safer and more efficient chemotherapeutic options is required. Studies have demonstrated the anti-B. anthracis activity of novel compounds extracted from medicinal plants; therefore, new insights into the efficiency of plant-derived compound and antibiotic combinations exhibiting anti-anthrax potential need to be addressed.

Superbug bacteria

The continuous or inappropriate use of antibiotics has resulted in the development of extensive drug resistance in bacteria. Over time, these organisms progressively advance and evolve into superbug bacteria as an adaptive response to selective antibiotic pressure [53,54].
The WHO has defined these bacteria as "superbugs" because the infections they cause are no longer treatable with existing antibiotic agents [55]. The CDC has categorized these organisms as urgent, serious, concerning and watch-list pathogenic threats. Antibiotic resistance is highly prevalent among Gram-positive bacteria, of which some of the best known superbug examples are methicillin-resistant Staphylococcus aureus (MRSA) and vancomycin-resistant enterococci (VRE). United States national antimicrobial surveillance data have indicated the emergence of more serious superbug infections associated with extended-spectrum beta-lactamase (ESBL) or carbapenemase producing organisms like Klebsiella pneumoniae and Escherichia coli, MDR pathogens like Pseudomonas aeruginosa, Vibrio cholerae and Clostridioides difficile, Salmonella spp. and drug-resistant Mycobacterium tuberculosis. However, some of the most worrisome threats to date are associated with emerging superbugs like carbapenem-resistant Acinetobacter spp., particularly Acinetobacter baumannii [56].

Methicillin-resistant Staphylococcus aureus

Methicillin-resistant Staphylococcus aureus (MRSA) are Gram-positive bacteria arranged in round-shaped clusters. They are among the most successful pathogens of the modern era. S. aureus can exist as part of the human flora and can cause opportunistic infections. Being genetically diverse, these organisms can evolve into epidemic strains like MRSA [57,58]. MRSA are formidable clinical threats and are considered archetypal hospital superbugs, responsible for causing serious blood-borne infections like sepsis and infective endocarditis [59,60]. According to the CDC, more than 120,000 hospitalizations reported in the United States from 1999-2000 were due to S. aureus associated infections, and MRSA accounted for 43.2% of infections in those hospitalized [61]. The antibiotic agents vancomycin and daptomycin are usually the first-line treatment for MRSA bacteraemia and related infections; however, in cases of severe infection, combined therapy with flucloxacillin is given for more effective treatment. The primary reason for methicillin and other β-lactam resistance in S. aureus is the expression of a foreign PBP, known as PBP2a, by the mecA gene present in these strains. The PBP2a variant binds β-lactam antibiotics with reduced avidity, which mediates resistance to this class of antibiotics. The lower affinity of PBP2a for β-lactam agents allows MRSA to replicate, as peptidoglycan synthesis can still take place in the presence of β-lactam antibiotics that would otherwise inactivate the transpeptidase activity of PBPs. PBP2a is composed of a non-penicillin-binding domain and a transpeptidase domain. Mutations in S. aureus genes like mprF, yycH and dltA are also known to confer cross-resistance to daptomycin. S. aureus is also capable of acquiring genes that encode antibiotic resistance from other strains. MRSA possess a wide range of dynamic virulence factors, including immune-evasive bacterial surface factors (e.g. protein A and capsule), tissue-invasion promoting enzymes like hyaluronidase, and toxins (e.g. leukocidins and haemolysins) for mediating pathogenesis.
Two additional virulence factors, known as Panton-Valentine leukocidin and the arginine catabolic mobile element, have been discovered in a MRSA isolate called USA300; these facilitated the rapid spread of the strain by improving its adaptability to the pH of the human skin [62,63]. Research is ongoing to introduce newer and more effective therapies for MRSA infections, such as the development of vaccines and potent natural products.

Vancomycin-resistant enterococci

Vancomycin-resistant enterococci (VRE) are round-shaped Gram-positive bacteria that can cause serious MDR infections and persistent colonization in humans. Enterococci are opportunistic inhabitants of the environment with an exceptional ability to adapt, evolve and transmit antibiotic-resistance determinants [64]. VRE can cause life-threatening infections in humans, such as bloodstream infections like sepsis, endocarditis and pyelonephritis, most of which are nosocomial [65]. Tan et al. reported that a survey conducted from 2014 to 2016 showed that 523 out of 5,357 patients from health-care facilities in Singapore suffered from infections caused by VRE. An outbreak of VRE-related infections was also reported in 1997 from acute-care hospitals in the United States [64]. Linezolid is usually recommended as the first-line treatment for VRE, whereas daptomycin, tigecycline and quinupristin-dalfopristin combined therapy are considered last-resort antibiotic options for the management of enterococcal infections. The primary mechanism of drug resistance in enterococci involves the alteration of pathways associated with peptidoglycan synthesis, which specifically substitute D-Alanine-D-Alanine (D-Ala-D-Ala) with either D-Alanine-D-Serine (D-Ala-D-Ser) or D-Alanine-D-Lactate (D-Ala-D-Lac). These altered termini in VRE cell walls bind poorly to glycopeptide antibiotics like vancomycin [66]. A total of eight van gene clusters, vanA, vanB, vanD, vanE, vanG, vanL, vanM and vanN, are responsible for expressing the elements required for antibiotic resistance in enterococci, with vanA and vanB being the most abundant [67]. VanA mediates the primary mechanism of antibiotic resistance in enterococci [68]. In addition to drug-resistance mechanisms, enterococci possess virulence factors like DNAse, caseinase and gelatinase to promote pathogenesis [69].

Extended-spectrum beta-lactamase and carbapenemase producing Klebsiella pneumoniae

Extended-spectrum β-lactamase (ESBL) and carbapenemase producing Klebsiella pneumoniae are rod-shaped Gram-negative bacteria that cause high morbidity and mortality among hospitalized patients in intensive care and neonatal intensive care. These organisms are capable of producing carbapenemase against carbapenems, as well as a rapidly evolving class of β-lactamase enzymes known as extended-spectrum β-lactamases, which have the ability to hydrolyze the β-lactam ring of a range of third/fourth-generation cephalosporin antibiotics and render them ineffective. ESBL and carbapenemase mediated resistance to numerous antibiotics makes it challenging to treat infections caused by these organisms. Klebsiella spp. are ubiquitous organisms belonging to the family Enterobacteriaceae. They exist in the natural environment and are part of the human flora. K. pneumoniae are opportunistic pathogens with the ability to colonize the respiratory tract, gastrointestinal tract, genitourinary tract and eyes of vulnerable individuals [70].
Appropriate first-line treatment for ESBL producing K. pneumoniae includes antibiotics like amoxicillin/clavulanic acid, ceftriaxone, ciprofloxacin or cotrimoxazole [71]. In the case of carbapenemase producing K. pneumoniae, treatment options include high-dose or combined antibiotic therapy with meropenem, tigecycline and/or colistin, gentamicin or fosfomycin, depending on susceptibility [72]. Müller-Schulte et al. stated that 94% of infections reported from the University Teaching Hospital in Bouaké, West Africa from 2016-2017 were caused by ESBL producing K. pneumoniae [73]. A survey conducted from 2014-2015 in long-term acute care hospitals based in the United States indicated that nearly 25% of infections in hospitalized patients were due to carbapenemase producing K. pneumoniae [74]. Genes responsible for encoding ESBL mediated antibiotic resistance in K. pneumoniae include blaSHV, CTX-M and TEM [75]. Navon-Venezia et al. indicated that the plasmid genes blaVIM-1, blaOXA-48, blaVIM and blaNDM-1 are responsible for carbapenemase mediated drug resistance in K. pneumoniae [76]. Additionally, K. pneumoniae possess virulence factors that facilitate pathogenesis, such as the capsular (K) antigen for evading phagocytosis, the O antigen for invasion of host cells, and siderophores like enterobactin and aerobactin for iron acquisition [77].

Extended-spectrum beta-lactamase and carbapenemase producing Escherichia coli

ESBL and carbapenemase producing Escherichia coli are Gram-negative rod-shaped bacteria. These organisms are responsible for causing serious community- and hospital-acquired infections worldwide, especially in places where inadequate hygienic practices and poor sanitation are common. E. coli is well known for causing gastroenteritis and infections of the urinary tract [78,79]. ESBL producing E. coli has been responsible for bacteraemia in more than 5000 hospitalized patients in the United Kingdom [80]. A prevalence survey conducted in a Spanish university hospital indicated that 7.69% and 1.83% of 10,643 admitted patients suffered from infections associated with ESBL producing E. coli and carbapenemase producing E. coli, respectively [81]. Antibiotics like colistin and carbapenems are usually the first-line treatment for ESBL producing E. coli [82]. Fritzenwanker et al. suggest that ertapenem infusion combined with meropenem or doripenem antibiotic therapy be given for infections caused by carbapenemase producing E. coli [83]. A study conducted by Overdevest et al. showed that ESBL producing E. coli harbored plasmid genes like blaCTX-M-1 and blaTEM-52 [84]. Shin et al. detected the blaNDM-5 gene in high-level carbapenem-resistant E. coli [85]. Aside from antibiotic resistance, E. coli has a number of virulence factors for mediating pathogenesis, like heat-labile toxin, heat-stable toxin, enterohaemolysin, shiga-like toxin, enteroaggregative heat-stable enterotoxin, haemolysin, cytotoxic necrotizing factor, uropathogenic specific protein and invasin for host cell invasion, and the K1-capsule and intimin for immune evasion and cellular attachment [86,87].

Multidrug-resistant Pseudomonas aeruginosa

Pseudomonas aeruginosa are Gram-negative bacteria arranged as rods or bacilli. These organisms can be found in the environment (e.g. soil and water) and are known for causing blood-borne infections and pneumonia in humans under opportunistic conditions [88]. P. aeruginosa are also associated with hospital-acquired infections; MDR P.
aeruginosa alone was responsible for causing 32,600 infections among hospitalized patients and 2700 fatalities in the United States in 2017 [89]. Monotherapy and combined therapy with antibiotic agents like ceftolozane-tazobactam, ceftazidime-avibactam, cefiderocol and imipenem-relebactam/cilastatin are used for the treatment of infections caused by MDR strains of P. aeruginosa [90]. The most common mechanism of antibiotic resistance in P. aeruginosa is the overproduction of drug efflux pump systems like MexAB-OprM, MexEF-OprN, MexXY-OprM and MexCD-OprJ, induced by mex gene mutations. These multidrug efflux pumps function as antibiotic molecule extruders. Apart from efflux pumps, these organisms also carry genes like ampC, which encodes β-lactamase production, while loss or alteration of the OprD porin and alterations in type II topoisomerases (DNA gyrase) mediate resistance against carbapenem and fluoroquinolone antibiotics, respectively [91]. P. aeruginosa also expresses a number of virulence factors, such as protease A, exotoxins, phospholipase C and cytotoxins for host cell invasion, and pyoverdine and QS system regulatory proteins essential for the formation of biofilms, which play a vital role in host immune evasion and antibiotic resistance [92].

Multidrug-resistant Vibrio cholerae

Vibrio cholerae are comma-shaped Gram-negative bacteria well known for causing the severe acute water-borne diarrheal illness cholera. These organisms have become a major threat to public health, particularly in the developing world [93]. According to recent reports, the V. cholerae O1 El Tor variant is responsible for causing cholera outbreaks worldwide [94]. The transmission of V. cholerae generally occurs via the faecal-oral route, by ingesting contaminated water and food [95]. Typically, doxycycline is used as the first-line treatment for cholera caused by V. cholerae [96]. According to the CDC, V. cholerae colonizes the small intestine to cause cholera, and an estimated 2.9 million cases and 95,000 fatalities occur annually worldwide [97]. The WHO estimates that 1.3 to 4 million cases of infection and 21,000-143,000 fatalities reported across the globe are due to cholera [98]. It has been reported that V. cholerae has caused seven cholera pandemics across different countries [99]. MDR V. cholerae has a number of antibiotic resistance mechanisms, such as active antibiotic efflux, reduced cell wall permeability to antibiotics, alteration of antibiotic binding target sites via post-transcriptional or translational modifications (e.g. mutations in topoisomerase and DNA gyrase), and hydrolysis or chemical modification of antibiotic agents. These resistance mechanisms are expressed by genes like blaNDM-1, blaDHA-1, carR, ant3', tet(M), tetD, foIP, qacEΔ1, mph2, mel, armA, rmtB, rmtC, rmtF, aphA1, arr2, bcr, mphRK, mrx, blaP, vigA, blaCTX-M, sh ble, floR, cat, aacA, aphD, tetG, aac-Ib, qnrVC3, ereA2, bla, strA, strB, sul2, mdtH, rpsl, dfrA, dhfrII, aad3' and mph in MDR V. cholerae [100]. Major virulence factors necessary for mediating pathogenesis and host cell invasion in V. cholerae include the cholera toxin, the toxin-coregulated pilus and the O antigen [101].

Multidrug-resistant Clostridioides difficile

Clostridioides difficile are Gram-positive spore-forming bacteria arranged as rods or bacilli. These organisms are responsible for causing colitis and nosocomial diarrhea. C.
difficile are opportunistic pathogens that colonize the intestinal tract, primarily the colon, when the gut microbiota is disrupted as a result of antibiotic misuse [102]. Metronidazole is the first-line treatment for mild to moderate infection, whereas advanced forms of infection are treated with vancomycin and fidaxomicin as monotherapy or combined therapy. C. difficile infection is more prevalent among the elderly who have been prescribed antibiotics for other conditions [103,104]. The CDC reports that 223,900 hospital admissions and 12,800 fatalities in the United States in 2017 were associated with C. difficile infections [105]. Data obtained from French national hospital surveillance in 2016 indicated an incidence of 3.6 C. difficile infections per 10,000 acute care patient days [106]. MDR strains of C. difficile carry genes like gyrA, mediating resistance to moxifloxacin, and rpoB, mediating resistance to rifampicin. Moreover, C. difficile carries tetracycline resistance genes like tetM and genes that encode aminoglycoside-modifying enzymes like aac(6′)-aph(2″) and aadE. C. difficile also expresses genes like mef(A)-msr(D), ermG and vat, encoding resistance to lincosamide, streptogramin and macrolide antibiotics. A Cys721Ser mutation in a PBP of C. difficile has been speculated to contribute to imipenem resistance [107]. The bacterial exotoxins TcdA, TcdB and the binary toxin are the common virulence factors of C. difficile for host invasion and promoting pathogenesis [108].

Drug-resistant Mycobacterium tuberculosis

Mycobacterium tuberculosis are acid-fast bacilli that have been categorized as serious threats by the CDC. These organisms are well known for causing the highly infectious lung-associated disease tuberculosis (TB) [109]. The transmission of M. tuberculosis occurs via droplet nuclei from an infected person [110]. TB caused by MDR, XDR or pandrug-resistant (PDR) strains of M. tuberculosis poses a serious threat to public health worldwide; the disease claimed 1.3 million lives, and about 8.6 million cases of the infection were reported, in 2012 [111]. The WHO estimated that in 2016 there were 600,000 cases of drug-resistant TB and 240,000 fatalities attributed to the disease [112]. FDA approved first-line anti-TB agents include isoniazid, rifampin, ethambutol and pyrazinamide [113]. In cases of MDR or XDR TB, anti-TB drugs like pretomanid in combination with linezolid and bedaquiline are given [114]. Bacillus Calmette-Guérin immunotherapy is generally used to prevent TB; it employs a live-attenuated vaccine derived from Mycobacterium bovis to immunize against TB infection [115]. M. tuberculosis possesses several genes capable of mediating antibiotic resistance via drug target modulation. These include katG, inhA, ndh and ahpC, targeted against isoniazid; rpoB against rifampicin; pncA and rspA countering pyrazinamide; embCAB and embR, which modulate the binding sites of ethambutol; rpsL, rrs and gidB against streptomycin; rrs and eis, which modulate the binding sites of amikacin/kanamycin; ethA, inhA, ethR, ndh and mshA countering ethionamide; and gyrA and gyrB, which modify DNA gyrase against fluoroquinolones [116]. The major virulence factors promoting pathogenesis in M. tuberculosis include phthiocerol dimycocerosate for host cell invasion and phenolic glycolipids involved in the evasion of host immune responses and the induction of macrophage toxicity [117].
Multidrug-resistant Salmonella spp.

Salmonella spp. are Gram-negative, zoonotic-disease causing enteric bacteria. Over 2600 serotypes of Salmonella have been identified, and they are responsible for causing gastrointestinal diseases such as food poisoning. Depending on the nature of the symptoms, Salmonella infections can be classified as non-typhoidal, paratyphoidal or typhoidal, of which both paratyphoidal and typhoidal Salmonella cause high fever (typhoid fever). Most cases of foodborne infection have been found to be associated with Salmonella enterica serovar Enteritidis [118][119][120]. The transmission of Salmonella occurs via the fecal-oral route by ingesting contaminated food [121]. Generally, the first-line treatment for Salmonella infections includes antibiotics like fluoroquinolones for adults and azithromycin for pediatric patients. Alternatively, ceftriaxone can also be used as a first-line antibiotic therapy for Salmonella [122]. The FDA has approved Salmonella vaccines such as the Vi bacterial polysaccharide (Vi antigen) vaccine under the brand name Typhim Vi, and Vivotif, which uses a live-attenuated Ty21a strain administered orally, for immunization against typhoid fever [123]. Studies have revealed that the aac(6′)-I gene is frequently associated with aminoglycoside resistance in these organisms. Moreover, the β-lactamase producing blaCMY-2 gene and the tetR gene, which encodes tetracycline resistance, were abundantly present in Salmonella [127]. There are a number of virulence factors associated with Salmonella, such as the Vi capsular antigen, somatic O antigen, H antigen (flagella), fimbriae and type III secretion systems, including Salmonella pathogenicity island 1 (SPI-1) and SPI-2, which promote host cell invasion and pathogenesis [128,129]. The cytolethal distending toxin of Salmonella has been found to cause typhoid fever among the infected. Other salmonellosis mediating toxins, like pertussis-like toxin A and pertussis-like toxin B, have also been found to be expressed by these organisms [130].

Acinetobacter baumannii

Acinetobacter baumannii is responsible for causing numerous community- and hospital-acquired infections. Over time, these organisms can evolve into XDR or PDR superbugs as a result of continuous selective pressure, rendering the majority of, or all, existing antibiotics ineffective. The CDC has listed A. baumannii as an organism that needs to be considered an urgent threat [56,131,132]. A. baumannii is a coccobacillary Gram-negative bacterium known for colonizing the gastrointestinal tract, respiratory tract and oral cavity of humans. It is also recognized as a formidable opportunistic pathogen that causes many forms of severe recalcitrant infection. A. baumannii infections frequently result from wound contamination [133]. It is a clinically dominant bacterial species with a pronounced tendency to cause healthcare-associated nosocomial infections [134,135]. A. baumannii is listed among the ESKAPE pathogens and is the most aggravating member of the Acb-complex (A. calcoaceticus, A. baumannii and Acinetobacter genomic species 13TU), which has been found to show high resistance to antibiotic agents, increasing the risk of mortality among hospitalized patients under intensive care [133,134,[136][137][138][139][140]. Although infections caused by this bacterium could be kept under control in the early 1970s, A.
baumannii has lately re-emerged in the form of MDR and XDR strains with marked resistance to most antibiotics, like gentamicin, nalidixic acid, minocycline, carbenicillin and ampicillin. The bacterium exhibited resistance to a majority of antibiotics during the early 1990s, and by the late 1990s the only treatment of choice was carbapenems in combination with rifampicin [141,142]. Presently, infections caused by MDR and XDR strains of A. baumannii are treated with antibiotics like polymyxin B, colistin and tigecycline. However, new strains of A. baumannii exhibiting resistance to the aforementioned antibiotics have been reported with increasing frequency [143,144]. The extensive resistance to antibiotics in A. baumannii is primarily due to the prevalence of adaptive multidrug efflux pump genes like adeA, adeB, adeC, adeDE, adeABC, adeFGH, adeXYZ and adeIJK in the bacterium [145]. A. baumannii harbors a multitude of virulence factors for facilitating host cell invasion and pathogenesis, such as the biofilm-promoting outer membrane protein A, surface antigen 1, lipid A, phospholipase, secretion systems (types I, II, IV, V and VI), siderophores for iron acquisition, binding domains for the acquisition of zinc and magnesium, and drug resistance promoting QS systems [146]. These rapid adaptations make A. baumannii one of the most challenging biological threats to human health and public health-care systems. The recent emergence of pandrug-resistant superbugs like A. baumannii indicates an urgent necessity for the discovery of novel antibacterial agents and chemotherapeutic strategies [137,139].

Plant-derived compounds as antibacterial agents

Plants are natural factories capable of producing a series of different phytochemical compounds. These compounds are produced in response to adverse biotic and abiotic environmental conditions, and such phytoconstituents have a major impact on the other plants, animals and microorganisms in the immediate environment surrounding the plant [147]. Plant-derived constituents are biologically active organic compounds and are generally defined as secondary metabolites. These secondary metabolites are structurally diverse compounds that are classified into three primary groups: phenolic compounds (phenolic acids, simple phenols, flavonoids, quinones, coumarins and tannins), alkaloids and terpenes (Fig. 1). These compounds can be isolated from crude extracts and essential oils of plants. Crude extracts represent complex mixtures of phytochemicals, containing primary and secondary metabolites of different chemical and biosynthetic classes. Such compounds share some common characteristics, such as volatility and/or polarity. Since antiquity, extracts obtained from medicinal plants have been known to have broad-spectrum antimicrobial activities and have been frequently studied and reviewed. Their profound antibacterial activity, their general recognition as safe substances and the minimal risk of bacteria developing resistance against them have qualified them as suitable sources for the development of new antibacterial agents [148,149].

Mechanism of action of plant-derived antibacterial compounds

The efficiency of antibacterial compounds derived from plants depends on several factors, such as the features of the test microorganism (type, species and strain), the botanical source and composition of the bioactive phytochemical compounds, the stage of development and time of harvesting of the plant material and, most importantly, the method of plant extraction.
Due to the complex nature of the compounds present in crude extracts of plants, such extracts can exhibit multiple mechanisms of action on bacteria. These include the suppression of bacterial growth, function or viability, the targeting of bacterial virulence factors, and the potentiation of the effectiveness of antibiotics as agents that modify bacterial resistance. Similar to antibiotic agents, these phytochemicals can inhibit the growth and replication of bacteria by disrupting the structure and function of the bacterial cell membrane [150], interrupting the synthesis of nucleic acids such as DNA or RNA [151], disrupting intermediary metabolism [152], and inducing coagulation of bacterial cytoplasmic constituents [153]. Several studies have been conducted to understand and illustrate the antibacterial action of phenolic compounds such as flavonoids, coumarins and tannins. Flavonoids are a diverse group of polyphenolic compounds that can inhibit the activities of DNA gyrase and DNA topoisomerase, energy metabolism mediated by NADH-cytochrome C reductase, ATP synthase, and components involved in the synthesis of the cell wall and cell membrane [154]. Possible targets of quinones include the peptidoglycan of the bacterial cell wall and enzymes associated with the cell membrane [151]. Tannins are known to cause destabilization of the cell membrane, alterations in metabolic pathways and inactivation of membrane-bound proteins [155]. Phytochemicals like coumarins retard bacterial cell respiration. Terpenes disrupt the bacterial cell membrane owing to their lipophilic nature. Alkaloids are among the most widely studied plant-derived compounds; they can intercalate with bacterial DNA and inhibit enzymes associated with nucleic acids, such as esterases and DNA or RNA polymerases [156]. Examples of the antibacterial mechanisms of action of plant secondary metabolites against CDC classified bacterial superbugs and the anthrax biological agent B. anthracis are elucidated in Table 1.

Growth inhibitory indices

The agar diffusion assay based synergistic activity of antibiotic-plant extract combinations can be evaluated using growth inhibitory indices (GIIs). In its commonly used form, the GII is calculated as the ratio of the inhibition zone of the combination to the sum of the inhibition zones of the antibiotic and the extract tested separately:

GII = inhibition zone of the combination / (inhibition zone of the antibiotic alone + inhibition zone of the extract alone)

A GII value > 1 is considered synergistic, = 1 additive, and < 1 antagonistic [157].

Fractional inhibitory concentration index

The fractional inhibitory concentration index (FICI) is used for the evaluation of synergism between two antimicrobial compounds in micro/macrobroth dilution assays. The FICI of an antibiotic-plant extract combination can be estimated using the standard formula:

FICI = (MIC of the antibiotic in combination / MIC of the antibiotic alone) + (MIC of the extract in combination / MIC of the extract alone)

A FIC index ≤ 0.5 is considered synergistic, > 0.5 but < 1 partially synergistic, additive when = 1, indifferent when > 1 but < 4, and ≥ 4 antagonistic [158].
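To make the bookkeeping of these indices concrete, the following minimal Python sketch computes the FICI from checkerboard MIC values and classifies the result using the breakpoints quoted above. The function names and the example MIC values are illustrative only, not taken from any of the cited studies, and the GII helper uses the ratio form assumed above for [157].

import math  # not strictly needed; kept for optional rounding utilities

def fici(mic_a_comb: float, mic_a_alone: float,
         mic_b_comb: float, mic_b_alone: float) -> float:
    """Fractional inhibitory concentration index of a two-agent combination."""
    return mic_a_comb / mic_a_alone + mic_b_comb / mic_b_alone

def classify_fici(value: float) -> str:
    """Interpret a FICI value with the breakpoints used in this review [158]."""
    if value <= 0.5:
        return "synergistic"
    if value < 1:
        return "partially synergistic"
    if value == 1:
        return "additive"
    if value < 4:
        return "indifferent"
    return "antagonistic"

def gii(zone_combination: float, zone_antibiotic: float, zone_extract: float) -> float:
    """Growth inhibitory index in the assumed ratio form discussed above:
    combination inhibition zone over the sum of the individual zones
    (> 1 synergistic, = 1 additive, < 1 antagonistic [157])."""
    return zone_combination / (zone_antibiotic + zone_extract)

# Hypothetical checkerboard readout: the antibiotic MIC falls from 8 to 1 ug/mL
# and the extract MIC falls from 256 to 64 ug/mL when the two are combined.
index = fici(mic_a_comb=1, mic_a_alone=8, mic_b_comb=64, mic_b_alone=256)
print(round(index, 3), classify_fici(index))  # prints: 0.375 synergistic

With these breakpoints, a combination halving both MICs (0.5 + 0.5 = 1) is merely additive; true synergy requires at least a fourfold MIC reduction for both agents, which is why the fold-reduction figures quoted throughout this review matter.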
Synergistic interaction between phytoextracts and antibiotics

Synergism between plant-derived compounds and existing antibiotic agents is an effective and efficient way to manage the development of bacterial multidrug resistance [159,160]. Several studies have shown the significance of this type of synergistic interaction in the discovery of novel antibacterial agents. Phytochemicals are capable of interacting with synthetic antibiotic agents, and this phytochemical-antibiotic interaction can be classified as antagonistic, additive or synergistic. The term antagonistic is used when a plant-derived compound reduces the effectiveness of an antibiotic agent against a certain type of bacteria, whereas the terms additive and synergistic are assigned to compounds that can enhance the antibacterial activity of the antibiotic [161]. An additive effect is usually considered the baseline effect for detecting synergy in antimicrobial assays; such an effect can be theoretically expected from a combination of multiple antimicrobial agents when a synergistic effect is absent. A synergistic effect can be defined as a combined effect that is significantly greater than the additive effect. A plant extract or compound combined with an antibiotic agent can be considered a synergistic product when their combined action is superior to their individual antibacterial activities [162]. The distinctive benefit of phytochemical-antibiotic synergism is the ability to overcome antimicrobial resistance. Besides reducing antibiotic resistance, another advantage of this type of synergism is that it can reduce the minimum inhibitory concentration of the antibiotic agent, which also lowers the dose needed for its effect to take place and mitigates possible adverse effects [9,163]. A recent study indicated that epigallocatechin gallate isolated from Camellia sinensis, a variant of green tea, was able to potentiate the antibacterial action of sulfamylon (mafenide acetate) against a clinical isolate of A. baumannii [164]. Another study showed that phytoextracts obtained from plants like Alstonia scholaris, Adenium obesum, Cerbera odollam, Cerbera manghas, Nerium oleander, Holarrhena antidysenterica, Plumeria obtusa, Wrightia pubescens, Thevetia peruviana, Punica granatum, Terminalia bellirica, Quisqualis indica, Terminalia sp. and Terminalia chebula were able to synergistically potentiate the activity of seven antibiotic agents, including cephazolin [175]. A further study indicated that the isolated phytochemical compound berberine potentiated ciprofloxacin and imipenem against A. baumannii [176]. Allicin (Fig. 2), isolated from Allium sativum (garlic), gave FICIs of 0.5 and 0.38 when used in combination with cefazolin and oxacillin, respectively, against S. aureus, and FICIs of 0.25 for both antibiotics when used in combination against S. epidermidis [158]. Ekambaram et al. showed that rosmarinic acid (Fig. 2) was able to synergistically potentiate the antibacterial activity of vancomycin, amoxicillin and ofloxacin when used in combination against S. aureus and MRSA, with FICIs of 0.5 for each combination [177]. One study indicated that oleanolic acid (Fig. 2) was able to enhance the antibacterial action of kanamycin and gentamicin against A. baumannii, with a FICI of 0.313 for gentamicin and 0.375 for kanamycin when combined with oleanolic acid [178].

Mechanism of synergistic activity of phytoextract and antibiotic combined agents

Plant-derived compounds and agents combined with antibiotics have broad antibacterial activity against many types of bacteria. Several studies have indicated various antibacterial mechanisms of these combined compounds that highlight their ability to reverse antibiotic resistance.
These mechanisms include the modification of active sites in the bacterial cell wall and plasma membrane to increase permeability to the antibiotic molecule, the inhibition of extracellular enzymes that catalyze the modification or degradation of antibiotics, the inactivation of efflux pumps to facilitate the intracellular accumulation of antibiotic molecules, and the disruption of quorum sensing signal molecules and their corresponding receptors (Fig. 3) [163]. Examples of phytoextract-antibiotic combinations and their synergistic effects and mechanisms against Gram-positive bacteria and A. baumannii are elucidated in Table 2.

Plant-derived synergists as inhibitors of antibiotic binding site modification

Bacteria are capable of modifying the antibiotic binding target sites known as receptors (e.g. penicillin-binding proteins) to mediate antibiotic resistance. Such alterations no longer permit the binding of the antibiotic molecule to its specific receptor and its permeation into the bacterial cell, rendering the antibiotic ineffective [187,188]. Examples of this type of plant-derived synergistic compound include corilagin, tellimagrandin I, pinoresinol, tiliroside, coronarin D, totarol, baicalin, momorcharaside B and magnatriol B (Fig. 2) [1,[189][190][191][192]. Corilagin is a tannin isolated from Arctostaphylos uva-ursi, with a MIC of 128 μg/mL against MRSA; however, the MIC dropped 2000-fold when corilagin was used in combination with oxacillin and other β-lactam antibiotics. Corilagin showed strong synergism, with an FICI of 0.5, and bactericidal action against MRSA [193]. Tellimagrandin I is another tannin, with a FICI of 0.39 against MRSA when used in combination with β-lactam antibiotics; the combination of antibiotics and tellimagrandin I gave a MIC reduction of 128-512-fold compared to the isolated phytochemical compound [194,195]. Another study revealed that proanthocyanidin (Fig. 2) isolated from Vaccinium macrocarpon Aiton was able to interact synergistically with levofloxacin against H. pylori; morphological investigations in that study revealed a reduction of PBP2a synthesis in H. pylori by proanthocyanidin [196]. A novel study revealed that garlic extracts obtained from the plant species Allium sativum L., predominantly composed of phthalic acid (Fig. 2) and conceivably allicin, showed synergistic antibiotic-potentiating activity when used in combination with tetracycline, penicillin and rifampicin against the potential anthrax-causing bio-agent B. anthracis strain Sterne 34F2. The indicative FICIs ranged from 0.5 to 0.8 for the selected plant-antibiotic combinations, and microscopic analysis in the study detected garlic extract induced morphological disruptions of the cell wall of B. anthracis [49]. Isolated compounds like cinnamic acid, ferulic acid and p-coumaric acid are capable of inhibiting the synthesis of the S. aureus cell membrane when combined with amikacin [197].

Plant-derived synergists as inhibitors of antibiotic degrading/modifying enzymes

Certain bacteria can produce extracellular enzymes, like β-lactamases and transacetylases, that can chemically alter or even degrade antibiotic molecules. These enzymes can effectively retard the action of the antibiotic and render the antibiotic agent ineffective against the bacterium [33,[198][199][200]. However, studies have identified naturally occurring plant-based compounds that can synergistically interact with antibiotics to overcome these bacterial defenses.
Examples of this type of phytochemical synergist include baicalin, rugosin B, 5-O-methylglovanon and epigallocatechin gallate (Fig. 2) [1]. Baicalin, extracted from Scutellaria amoena, is one of the most studied examples of plant-derived compounds contributing to this type of synergism; it was able to inhibit the activity of β-lactamases in MRSA and thereby facilitated the antibacterial action of β-lactam antibiotics [1,201]. Epigallocatechin gallate is a polyphenolic compound belonging to the catechin class. An investigation revealed that epigallocatechin gallate (Fig. 2) isolated from tea extracts was able to reduce the MIC of ampicillin/sulbactam to 4 mg/L when used in combination with the antibiotic. The compound gave a good FICI of between 0.19 and 0.56 for MRSA, and another study found a FICI between 0.126 and 0.625 for 28 strains of MRSA. The study showed that epigallocatechin gallate can reduce the activity of penicillinase and β-lactamases in MRSA [202]. 5-O-methylglovanon, isolated from Glycosmis plants, is an isoprenylated flavonoid compound with broad-spectrum antibacterial activity. The compound can lower the production of β-lactamases to facilitate the action of ampicillin in S. epidermidis and S. aureus [203].

Plant-derived synergists as inhibitors of active bacterial efflux pumps

Efflux pumps are among the most common bacterial defenses that lead to antibiotic resistance. These bacterial structures have the ability to extrude antibiotic molecules at a faster rate than the antibiotic can diffuse into the bacterial cell [204]. Efflux pumps are structurally present in both Gram-positive and Gram-negative bacteria [205,206]. Several classes of genes are involved in the expression of these efflux pumps in Gram-positive and Gram-negative bacteria; examples include Tet, Acr, Ydh, Mex, Bla, Mdtef and Nor [207]. These efflux pumps can be classified into five groups, depending on their capacity and drug extrusion mechanisms: MFS, SMR, MATE, RND and ABC (Fig. 3) [208]. Several studies have identified a number of plant-derived compounds that can counter the effects of these efflux pumps. Examples of such phytochemical synergists that modulated antibiotic resistance in Gram-positive bacteria include carnosic acid, carnosol, baicalin, erybraedin A, sophoraflavanone G, 2,6-dimethyl-4-phenyl-pyridine-3,5-dicarboxylic acid diethyl ester, myricetin, tiliroside, piperine, indoline, indirubin, capsaicin, kaempferol-3-O-α-L-(2,4-bis-E-p-coumaroyl)rhamnoside, reserpine, epicatechin gallate, 5'-methoxyhydnocarpin-D, pheophorbide A, isoflavonoids and tannic acid (Fig. 2) [1,[209][210][211]. Plant-derived synergists capable of modulating efflux pump facilitated drug resistance in A. baumannii include conessine and epigallocatechin gallate (Fig. 2). The indoline compound indirubin, isolated from Wrightia tinctoria, gave a FICI of 0.45 for S. aureus SA1199B when used in combination with ciprofloxacin; the compound was able to inhibit the NorA gene expressed efflux pump of the bacterium [212]. Capsaicin, extracted from chili peppers (Capsicum), showed similar synergistic action against S. aureus SA1199 and S. aureus SA1199B, targeting their efflux pumps when used in combination with ciprofloxacin, and reduced the MIC of ciprofloxacin by two- to fourfold [213]. Carnosic acid and carnosol are terpenes isolated from Rosmarinus officinalis, with MICs of 64 μg/mL and 16 μg/mL, respectively, against MDR S.
aureus. However, the MIC decreased 32-fold for carnosol and 16-fold for carnosic acid when each was used in combination with erythromycin at a lower concentration of 10 μg/mL. Their synergistic action was found to target the S. aureus NorA efflux pumps [165]. Baicalin, isolated from Thymus vulgaris L. and Scutellaria baicalensis, showed synergistic action targeting the NorA and TetK efflux pumps expressed in MRSA when used in combination with β-lactam antibiotics and tetracycline [201]. The flavonoid compound kaempferol rhamnoside demonstrated the ability to inhibit the activity of NorA efflux pumps in S. aureus when used in combination with ciprofloxacin, synergistically reducing the MIC eightfold compared to the antibiotic alone against the bacterium [214]. Epicatechin gallate (Fig. 2), a catechin isolated from green tea extracts, was able to interact synergistically with tetracycline to inactivate the TetK and TetB efflux pumps of Staphylococcus spp. [215]. Another study showed a fourfold reduction in MIC and efflux pump inhibitory activity in norfloxacin-resistant S. aureus when epicatechin gallate was used in combination with norfloxacin against the bacterium [216]. Epigallocatechin gallate isolated from Camellia sinensis potentiated the antibacterial activity of tetracycline against S. aureus by inhibiting the activity of TetK efflux pumps [215,217]. The alkaloid compound reserpine was able to reduce the MIC of moxifloxacin, ciprofloxacin and sparfloxacin fourfold against S. aureus. Bacteriological studies indicated that reserpine was able to synergistically inhibit the activity of the multidrug efflux pumps expressed by the NorA gene in S. aureus [218]. Moreover, another study showed that reserpine was able to inactivate efflux pumps present in S. aureus, S. pneumoniae and B. subtilis when used in combination with norfloxacin, tetracycline and ciprofloxacin [219,220]. Recent investigations revealed that 5'-methoxyhydnocarpin-D and pheophorbide A (Fig. 2) isolated from Berberis fremontii, tiliroside isolated from Herissantia tiubae, and piperine purified from Piper nigrum extracts were able to potentiate the antibacterial action of amikacin, ampicillin, tetracycline, lomefloxacin, norfloxacin and ofloxacin by inhibiting the activity of NorA multidrug efflux pumps in S. aureus [1,35,221]. Phytoextracts obtained from Punica granatum (pomegranate) were able to potentiate the action of gentamicin, chloramphenicol, tetracycline, ampicillin and oxacillin by inhibiting the activity of NorA efflux pumps in MRSA [31]. Another study indicated that conessine isolated from Holarrhena antidysenterica was able to synergistically potentiate the antibacterial action of novobiocin and rifampicin by inhibiting the activity of the multidrug efflux pumps expressed by the AdeIJK genes in XDR A. baumannii [207]. A recent study revealed that curcumin (Fig. 2), purified and isolated from Curcuma longa Linn., was able to potentiate the antibacterial action of polymyxins when used in combination against MDR strains of A. baumannii. Curcumin-polymyxin B combinations showed remarkably low FICIs of 0.156, 0.375 and 0.068 for the AB12, AB14, AB16 and NCTC 19606 strains of A. baumannii. Bacteriological studies in that investigation elucidated that curcumin was able to reverse polymyxin resistance in MDR A. baumannii by modulating the activity of EtBr and EmrAB efflux pumps [222][223][224].
Furthermore, purified plant compounds like tannic acid and ellagic acid enhanced the antibacterial action of novobiocin, coumermycin, chlorobiocin, rifampicin and fusidic acid, reducing the MIC of each antibiotic two- to fourfold. Bacteriological investigations in the study indicated that tannic acid and ellagic acid disrupted the activity of multidrug efflux pumps present in A. baumannii [225]. One study indicated that phytoextracts obtained from plants like Erythrina variegata, Jatropha elliptica, Cytisus striatus and Persea lingue also synergistically potentiated the action of antibiotics by inactivating efflux pumps present in drug-resistant Gram-positive bacteria like MRSA and VRE [1,[226][227][228]. Artemisinin (Fig. 2), a disruptor of the AcrAB-TolC-encoded bacterial efflux pump isolated from Artemisia annua, was able to potentiate the antibacterial action of penicillin G, cefazolin, ampicillin, cefoperazone and cefuroxime when used in combination against E. coli, with FICIs of < 0.5 for each antibiotic [229]. Isolated compounds like cathinone and theobromine (Fig. 2) worked synergistically with ciprofloxacin and tetracycline when used in combination against S. typhimurium and K. pneumoniae; both phytochemicals lowered the MICs of ciprofloxacin and tetracycline two- to fourfold via the inhibition of efflux pumps encoded by the AcrAB-TolC genes [230]. Phenolic compounds like p-coumaric acid, caffeic acid, vanillic acid, sinapic acid, gallic acid and taxifolin (Fig. 2) reduced the MICs of ciprofloxacin 32-fold and erythromycin 16-fold when used in combination against C. jejuni; these phenolic compounds disrupted the activity of the efflux pumps of the bacterium [233]. The catechin (Fig. 2) compounds epigallocatechin and epicatechin gallates were the first herbal drugs to receive FDA approval, in 2006. The leaf extract of Camellia sinensis consists of about 85% to 95% catechins, and the essence of the plant was used in the topical management and treatment of genital warts [234]. Curcumin isolated from Curcuma longa Linn. is another FDA-approved plant-based natural product with proven benefits in clinical trials, capable of potentiating antibiotics such as polymyxin B when used in combination against MRSA and A. baumannii [222][223][224]. Cranberry juice extract of Vaccinium macrocarpon Aiton, which is rich in proanthocyanidins, also received FDA approval for treating uropathogenic E. coli [235,236]. The derivative phytochemical compound salicylate (Fig. 2) mediated synergistic action against the pulmonary pathogen Burkholderia cenocepacia when used in combination with trimethoprim, ciprofloxacin and chloramphenicol. The study indicated that salicylate reduced the MICs of the selected antibiotics tenfold and that the combinations inhibited the activity of efflux pumps in B. cenocepacia [237]. Plant-derived synergists as biofilm formation/quorum sensing antagonists Plant-derived synergists are also capable of negating the process of quorum sensing in bacteria. A novel study conducted by Christensen et al. indicated that horseradish juice and curcumin, supplemented with furanone C-30, were able to induce significant synergistic quorum quenching activity when combined with tobramycin against P. aeruginosa PAO1 in female BALB/c mice [238]. Bacteriological studies indicated that curcumin and phytochemicals from horseradish juice reduced the secretion of autoinducer molecules, such as C4- and C12-homoserine lactones, from the bacterium.
A similar study detected in vitro synergistic anti-quorum sensing activity of curcumin in combination with gentamicin and azithromycin against P. aeruginosa. The study also concluded that the plant-antibiotic combinations were able to reduce the activity of N-acyl-homoserine lactone autoinducer signaling molecules and downregulate virulence genes like rhlA, lasB and rhl associated with quorum sensing in P. aeruginosa [239,240]. Furthermore, synergistic anti-biofilm and anti-quorum sensing activities were detected when the phytochemical compounds baicalin, hamamelitannin and cinnamaldehyde (Fig. 2) were combined with vancomycin, clindamycin and tobramycin against clinical isolates of B. cenocepacia, S. aureus and E. coli, both in in vitro assays and in the greater wax moth (Galleria mellonella), the nematode Caenorhabditis elegans and female BALB/c mouse models used for in vivo assessments [241]. Moreover, naturally synthesized furanones, particularly the halogenated variant known as (5Z)-4-bromo-5-(bromomethylene)-3-butyl-2(5H)-furanone (Fig. 2), were able to attenuate the QS system of B. anthracis [319,320]. Concluding remarks and future perspectives The problem of antibiotic resistance is growing rapidly, and the future prospects for the application of antibiotic agents are uncertain. Despite the mass production of antibiotics by the pharmaceutical industry in recent decades, bacteria have shown ever greater resistance to these antibiotics. Plants are remarkable sources of new bioactive compounds with broad-spectrum antibacterial properties; these compounds can act directly on bacteria or interact synergistically with antibiotics. This review has summarized the findings of recent investigations of phytoextracts in combination with existing antibiotics, in the context of their drug resistance modulating potential against the anthrax-causing organism Bacillus anthracis and MDR and XDR strains of emerging bacterial superbugs. Phytochemical-antibiotic combinations have shown promising results as agents with different mechanisms for modifying and reversing antibiotic resistance. For instance, phytochemicals such as epigallocatechin gallate can interact synergistically with different classes of antibiotics; depending on the bacterium, this compound can mediate synergism and increase the potency of antibiotics by deactivating β-lactamases and multidrug efflux pumps. Pre-clinical studies have shown that these synergistic compounds can significantly reduce the MICs of antibiotics against bacteria when used in combination. Research on antimicrobial synergy is driving the discovery and production of new antimicrobial agents. However, the underlying mechanisms of action of synergistic compounds have not yet been fully explored. A complete understanding of the pharmacokinetics and pharmacodynamics of the combined agents is required for a combination to qualify as a standardized and effective antimicrobial drug. Furthermore, in vivo and nano-medicine drug delivery studies based on combined plant compound-antibiotic synergists can be deployed to better understand the toxicological responses and bioavailability of the combined agents, and to determine their true relevance and safety in the treatment of bacterial infections in humans. Advanced techniques such as isobologram analysis and phytochemical paradigms can be used to identify and exploit regions of synergistic interaction between mixtures of antibacterial drugs.
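For reference, the isobologram criterion mentioned here is the graphical counterpart of the FICI: for a fixed endpoint, the combination concentrations (C_A, C_B) that achieve it are plotted against the Loewe additivity line,

$$\frac{C_A}{\mathrm{MIC}_A} + \frac{C_B}{\mathrm{MIC}_B} = 1,$$

and combinations falling below this line (bowed toward the origin) indicate synergy, while points above it indicate antagonism.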
At present, the availability of experimental data on the antibiotic-potentiating mechanisms of plant synergists against Bacillus anthracis and on the antibiotic resistance modulating effects of plant-based QS antagonists is limited; broadening these studies is therefore imperative. Furthermore, it is necessary to exploit the drug resistance modulating potential of novel combination products based on plant-derived antibodies and antibiotics against bacterial superbugs and B. anthracis.
2021-12-03T14:50:03.479Z
2021-12-01T00:00:00.000
{ "year": 2021, "sha1": "46ae36968a4fe9e47b76f927f712a7d8c1bbdd22", "oa_license": "CCBY", "oa_url": "https://ann-clinmicrob.biomedcentral.com/track/pdf/10.1186/s12941-021-00485-0", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c795a41b51161a86a86db77000cf1810fea47905", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Medicine" ] }
251791669
pes2o/s2orc
v3-fos-license
A Photochemical Overview of Molecular Solar Thermal Energy Storage: The design of molecular solar fuels is challenging because of the long list of requirements these molecules have to fulfil: storage density, solar harvesting capacity, robustness, and heat release ability. All of these features cause a paradoxical design due to the conflicting effects found when trying to improve any of these properties. In this contribution, we review the different types of compounds previously suggested for this application. Each of them presents several advantages and disadvantages, and the scientific community is still struggling to find the ideal candidate for practical applications. The most promising results have been found using norbornadiene-based systems, although the use of other alternatives like azobenzene or dihydroazulene cannot be discarded. In this review, we primarily focus on highlighting the optical and photochemical aspects of these three families, discussing the recently proposed systems and recent advances in the field. Introduction Energy generation and storage have become major challenges in our society and are especially relevant for industry [1,2]. The current energy demand is continuously rising by 1.3% each year [3], and this progression is expected to last at least until 2040 [4], even considering that many industries worldwide have been affected by COVID-19. According to the International Energy Agency, buildings are responsible for almost 30% of energy consumption and account for 28% of CO2 emissions [4,5]. To avoid the environmental impact of conventional energy sources, the use of renewable electricity needs to increase considerably. However, we are not yet able to avoid our dependence on fossil fuels. Consequently, significant efforts to find better alternatives to generate and store energy are under exploration. This is especially relevant for solar energy use and storage [6], which has been envisioned as an abundant, clean, and promising energy source. Using natural photosynthesis as a working model for solar energy use, scientists are designing and preparing chemical systems capable of capturing and storing solar energy. Nowadays, different alternatives to make use of sunlight are under research, including the direct use of photonic solar power and the heating capacity of solar radiation. The variability in solar income is a very significant drawback to solar energy, as the power of both types of energy (photonic and heating) is not constant across the four seasons of the year [7] and strongly depends on weather and geographical factors. This non-constant power supply unequivocally demands a storage solution, which should allow wider usability under conditions such as night or winter. Consequently, different methodologies have been developed to exploit solar power, such as underground solar energy storage (USES) and molecular solar thermal (MOST) systems. The USES mechanism consists of the storage of solar energy underground during the summer months using piles [8,9]. There are four basic types of USES systems: hot-water thermal storage, borehole thermal storage, aquifer thermal storage, and water-gravel pit storage [10]. This mechanism requires a plant of quite large dimensions, making it quite difficult to apply this technique once a building has already been built [11]. Similarly, in these approaches, the thermal insulation requirements usually pose a challenge for long-term storage.
On the other hand, MOST technology has become a promising candidate to capture and store solar energy in a sustainable and efficient manner. These systems have expanded significantly in the last decades [7], even though the first idea dates back a while [12]. The MOST approach is based on the storage of solar energy as chemical energy using a photoactive molecule, which, after being exposed to sunlight, isomerizes into a metastable high-energy photoisomer [13]. The release of the chemical energy as heat can be performed during the back conversion step using an external stimulus (Figure 1a), either through heat, through a catalyst, or electrochemically [14]. Over the years, a large number of systems have been proposed as MOST candidates. There are at least six major requirements for a practical system, which makes the design of successful candidates a challenging task; this is described further in the next section. Until now, none of the previously proposed compounds completely fulfils this list of requirements. Thus, the design and preparation of new alternatives for MOST technology is still a hot topic. In the following sections, the most relevant efforts to prepare suitable compounds according to these requirements will be discussed. For the scope of this review, we will describe the three types of systems that have mainly been used within the current framework of MOST technology: norbornadiene, azobenzene, and dihydroazulene derivatives [15][16][17] (Figure 1b). In addition, we will focus on the central role of the optical properties in the storage of solar energy. Requisites for MOST Systems: Optical Properties The development of new MOST candidates is a challenging task, which has gained more attention and visibility in the last decades owing to the expectations raised by this technology. As mentioned above, the ideal compound for a MOST device is still unknown, so many different strategies combining experimental and computational tools are being used to assist in the molecular design. In order to maximize the efficiency of MOST systems, their optimization has received a great deal of attention; in light of this, the design of a suitable photoswitch must meet a large number of objectives. Thus, the design criteria of a MOST system are subject to several parameters involving both engineering and chemical challenges [18]. The first key step in a molecular solar thermal energy storage system is the absorption of light by the parent molecule, which undergoes a reversible photoisomerization reaction to its corresponding metastable isomer. This photoisomer should be stable enough to store the chemical potential for varying periods of time, depending on the envisioned application. Then, this stored energy should be released when and where required, in the form of heat. For the initial photoisomerization part of the MOST cycle to be successful, the photoswitch pair needs to fulfil a long list of features, in some cases even contradictory ones [3]: • A large difference in the free energies of the parent molecule and its photoisomer, with a minimal increase in the molecular weight, to maximize energy density [19,20]. • A moderately large kinetic barrier for back conversion [21,22]. • An absorption spectrum of the photoactive molecule matching the solar spectrum [22,23]. • A high quantum yield (Φ = 1) for the photogeneration of the metastable photoisomer. • A photochemically inactive (or non-absorbing) photoisomer.
• Negligible degradation of both the photoactive molecule and its photoisomer after multiple cycles, especially moving towards higher temperatures. These are the major requirements that should be optimized to improve the performance of any potential candidate. Furthermore, when considering a MOST molecule in an integrated device, the use of environmentally friendly compounds and solvents is desirable to minimize the risks in case of leaching or losses to the surroundings. In this review, we focus on the optical properties. Thus, a more detailed explanation of some of the requirements related to the photochemistry of these compounds will be presented: the absorption spectrum (solar match), photoreaction efficiency (quantum yield), and energy storage capacity. Solar Match The photochemical reaction from a low-energy (parent, photoactive molecule) to a high-energy configuration (photoisomer) is the central focus of the photochemical part of the MOST cycle. For this transformation to take place, the main requirement is the absorption of the energy supplied by the sun. Hence, ideal MOST systems should absorb the maximum number of photons in the UV and visible range of the solar spectrum, 300-700 nm, which corresponds to the maximum intensity of sunlight. Ideally, the absorption spectrum of the lower-energy isomer should overlap with the most intense region of the solar energy window (solar match), preferably in the 340-540 nm range, where the solar radiation is relatively high [22]. Moreover, it is desirable that there be no absorption overlap between the initial isomer and the photoisomer, to avoid an undesirable photon absorption competition between the two states. Most of the basic cores of the photoswitches reported to date do not absorb at wavelengths far beyond 350 nm (for instance, parent norbornadiene has a maximum at 236 nm and some ruthenium derivatives absorb at 350 nm) [24]. This is a significant drawback, as the solar photon flux at wavelengths below 330 nm is quite low. In this regard, both experimental [25,26] and computational [27] progress has been made in providing functionalized photoswitches that absorb at longer wavelengths. The most used and most successful chemical strategy to shift the absorption of MOST compounds toward longer wavelengths is to create a 'push-pull' effect through the introduction of donor-acceptor substituents and an increase in the molecule's conjugated π-system. Preferably, low molecular weight electron donor and acceptor groups are prominent targets for generating relevant photoswitches, as they have a lower impact on the energy density (which is affected by both the difference in energy between isomers and the molecular weight). However, beyond the stored energy, the chemical modification of photoswitches may also negatively affect other relevant properties, making it clear that the optimization of a set of molecules for MOST applications is extremely challenging. Quantum Yield Once the parent molecule absorbs a photon from sunlight, excitation from the ground state (S0) to an excited state (Sn) takes place. Subsequently, a certain number of molecules will undergo photo-conversion, but a fraction may relax, returning to the initial state. To quantify the fraction of molecules effectively performing the photoconversion from the photoactive molecule to the photoisomer, the quantum yield is measured.
This dimensionless number gives the probability that a parent molecule furnishes the metastable high-energy photoisomer per absorbed photon. From an efficiency perspective, the photo-conversion leading to the high-energy isomer needs to be as high as possible, ideally close to unity. This should allow for an efficient conversion of solar energy into chemical energy, avoiding other competitive processes such as radiative or non-radiative decay and quenching. Storage Energy Density While it is not strictly a photochemical property, another crucial concern in MOST systems is the energy storage. MOST technology is designed to generate the greatest possible increase in temperature after releasing the chemical energy stored in the photoisomer as heat. In this respect, the key property is the enthalpy difference (∆H) between photoisomers. This means that the bigger the energy difference between the (not charged) metastable photoisomer and its parent state, the larger the energy storage density accumulated in the system. As a rule of thumb, MOST systems should provide at least 0.3 MJ/kg to be of practical use, prior to the subsequent heat release. Then, the heat can be released using an external stimulus like a thermal increment or via a catalyst. Thus, the photoisomer should not undergo back-conversion quickly at room temperature, in order to store the energy for hours, days, or months (storage time) depending on the target application. Even though this review is focused on the photochemical aspects of MOST technology, it is also relevant to mention that alternative cooling and heating methods are available. For instance, the use of phase-change materials and water adsorption in zeolites has already been commercialized [28,29]. Photoswitches Used in MOST Technology As a brief introduction to the state-of-the-art and the historic development of MOST candidates, very different sets of families have been considered at some point. However, most of them were discarded at a relatively early stage because of practical reasons or flaws. Considering the most explored molecular systems, most of them are photoswitches, as explained above. This is partly because of the considerable overlap between the requirements for photoswitches in general and the compounds used in the MOST concept (absorption, high quantum yields, photostability, and robustness). Historically, the compounds designed to be used in MOST systems can be grouped into two main types [2,16], depending on the photochemical transformation that takes place. In this sense, the mechanism of the photochemical process transforming sunlight into chemical energy can be an isomerization or a cycloaddition. Other types of photochemically induced intramolecular rearrangements have also been studied; as an example of more complex rearrangements, organometallic diruthenium fulvalenes have been considered [30]. According to the photochemical transformation involved, the systems based on an isomerization are typical examples of molecular photoswitches, like stilbenes [31], azobenzenes, retinal-based photoswitches [32], or other less-known families like hydantoins [33]. The main problem with using traditional cis-trans photoswitches is the typically small energy gap between the two isomers, which produces a small amount of energy storage. This problem has been overcome by two different strategies. Firstly, stabilization of the E-isomer, which usually occurs when increasing the electronic delocalization.
Secondly, destabilization of the Z-isomer through the steric interactions incurred by vicinal groups. Combining those strategies, some stilbene derivatives could be designed with an energy storage 100 kJ/mol higher than that of the original unsubstituted stilbene molecule, reaching 105 kJ/mol [34]. Comparably, following the same strategies, retinal-like systems were also postulated for this application, with more modest energy storage capacities [35]. Systems based on a photochemical cycloaddition typically have better properties in terms of energy storage, but their optical properties (absorption spectra) are usually less tuneable, as the absorption usually lies in the high-energy region of the UV spectrum. The main exponent of this approach is the norbornadiene (NBD)-quadricyclane (QC) couple [36], which has been studied since the 1980s and is nowadays the focus of most of the efforts of the community. One of the first proposals using cycloaddition reactions was the use of anthracene derivatives, thanks to their well-known intermolecular [4 + 4] cycloaddition. These compounds also present some problems, as the absorption usually occurs below 300 nm, meaning low efficiency when exposed to solar radiation. This was partially solved by adding (a) bridge group(s) to link two anthracene moieties, but in this case, the efficiency decreased drastically [37]. Another cycloaddition system used is the pair based on dihydroazulene (DHA) and vinylheptafulvene (VHF) [38,39]. Unfortunately, the parent compounds in this couple present a small energy difference between isomers, and the tunability of the optical properties has already been exhaustively explored [40]. Other systems such as ruthenium fulvalene complexes have also been proposed and studied but have been discarded for practical applications because of their low efficiency and high preparation costs [30,41]. In summary, many molecular systems have been studied over the years as potential MOST candidates. In the following, we focus our attention on the three most promising families of MOST molecules to date, namely, norbornadienes, azobenzenes, and dihydroazulenes. These three families combine relatively good (or tuneable) properties and are synthetically attainable. Norbornadiene/Quadricyclane Couple Among the previously mentioned MOST systems under investigation, the most profoundly explored is without a doubt the NBD to QC isomerization. Even if the foundations of the MOST concept did not begin with the NBD/QC photoswitch, it is nowadays the main area of research in the field. These molecules have reached energy density values close to the maximum energy density limit of a solar thermal battery, at 1 MJ/kg [42]. In contrast, the absorption of unsubstituted NBD is within the UVC range (less than 267 nm) and does not overlap with the solar spectrum, which begins at 340 nm [26]. The ideal absorption scenario for molecular solar thermal energy storage systems is to use the solar radiation that reaches the Earth's surface at high intensities [43]. Thus, targeting a photoisomerization induced in the 350-450 nm range is highly desirable. In designing new NBD/QC molecules, the difference in the absorption maxima between the NBD and QC molecules needs to be large enough to minimize spectral overlap [25,44], which could diminish the incident radiation flux reaching the photoactive isomer. A significant advantage of NBDs over other photoswitches (e.g., azobenzenes) is that, upon photoisomerization, a blue-shift usually occurs.
Thus, the QC molecule tends to absorb outside of the visible spectrum, which purposely mismatches the solar spectrum; consequently, the presence of photostationary states is circumvented. In turn, this avoids the need for engineering adjustments such as band-pass filters. The long list of requirements for an optimal molecule for MOST applications has led to the design of new molecules with increased complexity. For instance, two principal chemical strategies have been applied to shift the absorption of the parent molecule to longer wavelengths [2]. One of them involves adding electron-donating or electron-withdrawing substituent groups to create a 'push-pull' effect, while the other involves increasing the extent of the system's π-conjugation [45]. Some representative examples are shown in Table 1 and Figure 2. Table 1. Molar mass, λmax, enthalpy of isomerization (∆H_isomerization), and energy density of the NBD derivatives shown in Figure 2 [2]. The first method is typically implemented by adding substituents with electron-rich phenyl rings to red-shift the absorption to longer wavelengths. A potential drawback of this modification is an increased spectral overlap between the NBD and QC isomers [46]. Concordantly, solely changing the acceptor group from a cyano to a trifluoroacetyl group red-shifts the absorbance by 100 nm, although it considerably shortens the lifetime of QC [47]. In the second strategy, the use of NBD dimers in MOST systems is being explored, although the origin of this molecular cooperativity remains to be fully understood. Extending the π-conjugation by linking two NBD units through an electron-rich aromatic unit minimizes the impact of the molecular weight increase, as two units are considered and two photons may be absorbed. Unfortunately, shorter wavelengths of absorption are required for the second photoisomerization process from QC/NBD to QC/QC, ultimately decreasing the quantum yield. Moreover, both NBD moieties should be photoisomerized, as NBD/QC is far more labile than QC/QC [24]. Another method to try to improve the performance of the NBD/QC couple is the use of alternative deactivation channels. In this sense, thermally activated delayed fluorescence (TADF) molecules undergo an excitation to the lowest-lying singlet state, relax to the triplet state, and finally can be thermally converted back to the singlet state (otherwise known as reverse intersystem crossing) and relax from this singlet excited state. TADF molecules have their lowest-lying excitation band ideally situated in the range of the solar spectrum required for MOST systems, at lower wavelengths than typical phosphorescent molecules (480-550 nm) and higher wavelengths than fluorescent molecules (330-380 nm). TADF molecules were originally built around a benzophenone core structure, with the addition of carbazole and diphenylamine moieties being common, wherein keto groups are in a co-planar position relative to the aromatic moieties [48]. This co-planarity effect enhances the overlap between the n and π* states, creating a very small singlet-triplet energy gap. A promising new approach was attempted by linking thermally activated delayed fluorescence molecules like phenoxazine-triphenyltriazine (PXZ-TRZ) to NBD moieties, where red-shifted absorption peaks were found in the 400-430 nm range [49]. Moreover, NBD molecules were substituted onto these structures with increased conjugation as a possible solution to maximize the low energy storage density of a single NBD unit.
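The trade-off between red-shifting substituents and gravimetric storage density can be made concrete with a back-of-the-envelope sketch. The ΔH of roughly 89 kJ/mol assumed below for the parent NBD/QC couple is a commonly cited literature value, not a figure quoted in this text, and the 131 g/mol mass penalty anticipates the prime-substituted example discussed next:

```python
def gravimetric_density(delta_h_kj_per_mol, molar_mass_g_per_mol):
    # kJ/mol divided by g/mol gives kJ/g, which is numerically equal to MJ/kg.
    return delta_h_kj_per_mol / molar_mass_g_per_mol

# Parent NBD (C7H8, 92.1 g/mol), assuming delta_H ~ 89 kJ/mol:
print(f"parent NBD:  {gravimetric_density(89.0, 92.1):.2f} MJ/kg")          # ~0.97

# Same delta_H with an extra 131 g/mol of photochemically inert substituent:
print(f"substituted: {gravimetric_density(89.0, 92.1 + 131.0):.2f} MJ/kg")  # ~0.40
```

Even if the substituent left ΔH untouched, the added mass alone would cut the gravimetric density by more than half, which is why low molecular weight donor/acceptor groups are emphasized throughout this review.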
Despite the copious publications on the NBD/QC photoswitch, the optimal system has not been devised yet. The prime-substituted NBD has an absorption red-shifted by 59 nm, yet its molar mass is increased by 131 g/mol, inevitably reducing the energy density by 13.3 kJ/mol. The energy density of unsubstituted NBD to the present date still outcompetes the prime-substituted NBD by a margin of 13% [50]. Implementing an NBD/QC MOST photoswitch that absorbs in the 350-450 nm region in practical applications will entail a compromise of 20 kJ/mol of storage energy for a better fit with the solar spectrum, especially considering that, when moving to highly absorbing materials like NBD, a major fraction of the light will nonetheless be converted to heat. Controlling thermal fluxes at the surface is not only required to keep the photoisomer stable; it is also a safety mechanism essential in energy storage devices like batteries to prevent overheating under working conditions. Utilizing waste heat by coupling a heat exchanger to the final device will prevent local temperature extremes and maintain a truly closed MOST system [44]. As mentioned previously, it is also crucial to control the efficiency of the photochemical process, which starts with the absorption of a photon by NBD. The incoming photon can induce an electronic transition from the minimum of the ground state to the S1 state, causing a [2 + 2] intramolecular cycloaddition [51]. Under natural solar irradiation conditions, this photoconversion does not take place in the parent NBD, as just a few photons below 300 nm arrive at sea level. Therefore, the unsubstituted NBD/QC couple is photochemically inactive to sunlight. Furthermore, even if NBD absorbs suitable solar radiation, just a few molecules will undergo photoisomerization, because the quantum yield of this system is quite low (φ = 0.05). In consideration of these requirements, the NBD/QC system has been chemically modified in order to increase the quantum yield [18] as well as to red-shift the wavelength of absorption. As shown in Figure 3, the introduction of donor-acceptor groups increases the quantum yield and makes the system capable of absorbing natural solar irradiation above 400 nm. Thus, when the incoming photon is absorbed by NBD, the photoisomerization starts through an S0-S1 transition. The molecules then relax on the excited state potential energy surface to reach the minimum energy conical intersection (MECI) S0/S1, leading to the photoisomer QC [24]. In the heat release step, the thermodynamic driving force of the process (∆H_isom = 372 kJ/mol) pushes QC to form the less-strained geometry (NBD), based on the cleavage of two single bonds of the four-membered ring and the transformation of the remaining bonds into double bonds, as seen in Figure 4. In this stage, the reaction releases a large amount of energy because of the high standard enthalpy of the reversion of QC to NBD. This energy also depends on the gravimetric energy density (MJ/kg); hence, systems with smaller molecular weights will hold comparatively greater energy densities. Thus, cyano-substituted NBD derivatives have been computationally and experimentally studied, as they show attractive energy densities (0.4-0.6 MJ/kg) [25]. Control of the back conversion reaction is crucial for providing energy at the right time.
It is well known in these systems that optimizing the solar match and the storage lifetime at the same time is challenging, because one property is improved at the expense of the other. However, a recent novel strategy defies this, wherein NBD molecules have an improved solar match, yet the storage time remains untouched. Chiefly, via the introduction of substituents in the ortho position, a substantial increase in the back conversion ∆S* occurs, which comes from the steric interference of the side group of the parent molecule [55]. Azobenzene Photoswitches Azobenzene is a chemical compound based on diazene (HN=NH), where both hydrogens are substituted by a phenyl ring [56,57]. Azobenzene can be found in either a cis or a trans conformation [58]. The trans to cis isomerization can be triggered by different stimuli such as irradiation with ultraviolet (UV) light [59], mechanical force [60], or an electrostatic stimulus [61,62]. Contrastingly, the cis to trans isomerization can be observed in dark conditions when applying heat, owing to the thermodynamic stability of the trans isomer, although it can also be driven by visible light (see Figure 5a). The trans conformation assumes a planar structure with C2h symmetry [63,64], with a maximum distance of 9.0 Å between the most distant positions of the aromatic rings. On the other hand, the cis one adopts a non-planar conformation with C2 symmetry [65,66], and the end-to-end distance is reduced to 5.5 Å. The cis-isomer is not planar as a result of steric hindrance, and this causes the π-electron clouds of the aromatic rings to face each other, leading to a small dipole moment in the molecule (µ ≈ 3 D) [67]. This ring disposition is also reflected in the proton nuclear magnetic resonance (1H-NMR) spectra: the shielding effect produced by the anisotropy of the π-electron cloud in the cis-isomer affects the signals, making them appear at a higher shift compared with the trans-isomer. The absorbance spectrum of the trans-azobenzene molecule typically presents two absorption bands (see Figure 5b), corresponding to the electronic transitions n-π* and π-π*. The π-π* electronic transition produces a strong band in the UV region and is also present in analogous carbon systems such as stilbene [68]. The n-π* electronic transition gives a different, far less intense band in the visible region. This latter transition is produced by the nitrogen lone pairs of electrons [69], which generate a completely different photoisomerization process compared with the analogous carbon system, stilbene. The trans-cis isomerization process is usually accompanied by a color change towards more intense tonalities. The absorbance spectra of the two isomers are differentiated by the following aspects: (i) trans-isomer: the absorption band related to the π-π* electronic transition is very intense, with a molar extinction coefficient (ε) around 2-3 × 10^4 M^-1 cm^-1. On the other hand, the band corresponding to the n-π* electronic transition is much weaker (ε ≈ 400 M^-1 cm^-1), as this transition is symmetry-forbidden in this isomer. (ii) cis-isomer: the absorption band related to the π-π* electronic transition is hypsochromically shifted and its intensity diminishes greatly (ε ≈ 7-10 × 10^3 M^-1 cm^-1) with respect to the trans-isomer. On the other hand, the band corresponding to the n-π* electronic transition (380-520 nm) is allowed in this isomer, thus significantly increasing its intensity (ε ≈ 1500 M^-1 cm^-1).
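The practical consequence of these extinction coefficients can be illustrated with the Beer-Lambert law; the 50 µM concentration and 1 cm path length below are arbitrary illustrative assumptions, not values from any cited study:

$$A = \varepsilon c \ell, \qquad f_{\mathrm{abs}} = 1 - 10^{-A}$$
$$\text{trans } \pi\text{-}\pi^{*}: \quad A = (2\times 10^{4})(5\times 10^{-5})(1) = 1.0 \;\Rightarrow\; f_{\mathrm{abs}} = 90\%$$
$$\text{trans } n\text{-}\pi^{*}: \quad A = (400)(5\times 10^{-5})(1) = 0.02 \;\Rightarrow\; f_{\mathrm{abs}} \approx 4.5\%$$

Under identical conditions, the UV π-π* band captures essentially all of the incident light at its maximum, while the visible n-π* band lets more than 95% pass through, which is the central obstacle to visible-light charging of unsubstituted azobenzene.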
These optical properties can be greatly modified upon substitution, and this fact has been exploited to match the requirements of MOST technology. Although azobenzene can adopt trans and cis conformations, the former is more stable in the electronic ground state by roughly 58.6 kJ/mol (0.6 eV) [71]; this is explained by the lack of delocalization and the distorted configuration of the cis-form in comparison with the trans-isomer. The experimentally measured energy barrier between the trans and cis conformations is about 96.2 kJ/mol (1.0 eV) [72]; thus, in dark conditions and at room temperature, the predominant species is the trans one (Figure 5c). As explained above, the energy difference between isomers has a strong effect on the stored energy density. This value can also be affected by substitution but, in general, smaller values of energy density are obtained using azobenzenes compared with other MOST molecules such as NBD/QC. The differences in the spectroscopic properties of the cis and trans isomers and the distribution of excited state electronic levels allow them to undergo isomerization upon irradiation. This photochemical interconversion usually ends up providing a varying mixture of cis and trans isomers at the photostationary state (PSS). The ratio of isomers in the mixture depends on the substitution of the azobenzene core and the wavelength of irradiation. In turn, the excitation wavelength at which this process takes place depends on the nature of the substituents on the aryl groups of the azobenzene; usually, the trans to cis isomerization is promoted by wavelengths around 320-380 nm, while exposure to 400-450 nm wavelengths typically favors the cis to trans isomerization. This reversion can also be induced thermally. Although both photochemical conversions take place on the order of picoseconds, the cis to trans thermal relaxation is on the order of seconds or even hours and can be tuned by substitution. Although several mechanistic studies have focused on the isomerization mechanisms of azobenzene, the effect of the substituents on the phenyl rings, as well as the influence of other parameters, yields a complex mechanistic picture [73][74][75]. In fact, several mechanisms for the isomerization of azobenzene and its derivatives can take place depending on the nature of the substituents and the reaction conditions (see Figure 6): (i) The inversion of one of the N-C bonds, corresponding to an in-plane inversion of the NNC angle between the azo bond and the first carbon of the benzene ring (angle Φ), reaching 180°, while the CNNC angle remains fixed at 0°. (ii) Rotation about the N=N double bond. This mechanism is similar to the one in the stilbene isomerization [73]. The rotational pathway starts with the breakage of the N=N π-bond, which thus becomes an N-N bond. After that, there is an out-of-plane rotation of the CNNC dihedral (ϕ) angle, while the CNN angle remains at 120°. (iii) The concerted inversion [76], where there is a simultaneous increase in the NNC angles up to 180°. (iv) Lastly, the inversion-assisted rotation mechanism, which implies changes in both the CNNC and NNC angles at the same time. Nevertheless, recent computational studies [75] using ab initio methodology including dynamic electron correlation have precisely depicted the topography of the potential energy surfaces of the S1 (n-π*) and S2 (π-π*) energy levels.
The inclusion of the dynamic correlation makes this study more reliable, as the topography of the PES can be highly altered when it is not included, thus providing contradictory data because of changes in the location of the minima and the shape of the reaction paths. A high-level computational study [75] at the CASSCF/CASPT2 (Complete Active Space Self-Consistent Field/CAS second-order perturbation theory) level, reoptimizing some of the critical points of the excited states at the MS-CASPT2 level, was performed. The mechanisms described fit into the different reaction pathways depending on the isomer excited and the excited state reached (see Figure 7). The inversion mechanism is the most likely pathway for the cis to trans thermal back-isomerization, but for the trans to cis case, depending on the excitation, different paths are available. When an azobenzene is promoted to the S1 state and rotation is not hindered by substituents, a CNNC rotation will take place, reaching the S1 potential energy surface minimum. An available conical intersection between this excited state and the ground state is located close by in terms of geometrical distortion and energy, usually just 4-8 kJ/mol above. Therefore, the population is funneled to the ground-state surface, yielding photoisomerization. For the excitation to S2, an energy-degenerate region is usually present, thus leading to the same rotational pathway discussed for S1. However, alternative pathways can lead back to the starting material. All of these processes produce differences in the observed quantum yield [76] depending on the excited state reached and the geometries allowed. Consequently, the PSS composition and the photoisomerization quantum yield become largely dependent on the substitution of the phenyl rings of the azobenzene and the specific irradiation wavelength used. As shown in the previous paragraphs, substitution has a major impact on the properties related to the MOST concept. Azobenzene derivatives can be classified into three different groups depending on the energetic order of their electronic states (π-π* and n-π*). This order is mainly dependent on the electronic nature of the azobenzene aromatic rings and their substitution pattern. The three classes are usually labelled as follows (see Figure 8a): (i) Azobenzene-type molecules, in which the electronic nature of the phenyl rings is analogous to that existing in the unsubstituted azobenzene (Ph-N=N-Ph). They present a strong π-π* band in the UV region and a weak n-π* band in the yellow visible region. (ii) Aminoazobenzenes, which can be ortho- or para-substituted with an electron-donating group (EDG) [(para- or ortho-(EDG)-C6H4-N=N-Ar)]. The π-π* and n-π* bands are very close or even overlap in the UV/Vis region; they are typically orange. (iii) Pseudo-stilbenes, which have an electron-donating and an electron-withdrawing group (EWG) at the para positions of the phenyl rings, thus being a push-pull system [(p-(EDG)-C6H4-N=N-C6H4-p-(EWG)]. They are typically red, and the absorption band corresponding to the π-π* electronic transition undergoes a bathochromic shift, overlapping with or even changing the order of appearance relative to the corresponding band of the n-π* electronic transition. Pseudo-stilbenes have a highly asymmetric electron distribution along the molecule, which results in a large molecular dipole and anisotropic optical properties.
In addition, this class also presents the best photo-response, which is mainly caused by the overlapping absorption spectra of the E and Z forms, thus reaching a mixed photo-stationary state where the trans and cis isomers are continuously interconverting. Therefore, for pseudo-stilbenes, a single visible light wavelength can induce both forward and reverse isomerization. Thermal relaxation from cis to trans spans from milliseconds to seconds. On the other hand, in the other two classes of compounds, the absorption spectra usually do not overlap, meaning that two different wavelengths are needed to switch between the cis and trans forms, which is ideal for photoswitchable materials. In particular, azobenzene-type molecules are proven to isomerize back from the cis to the trans form very slowly through thermal relaxation when bulky substituents are introduced into the structure, providing cis to trans thermal relaxation times of hours for the azobenzene-type molecules and minutes for the amino-azobenzene-type molecules. Recently, it has even been proven that some stable cis-isomers can be isolated using the proper solvent, which leads to a stable two-state photoswitchable system. Even if the optical properties of azobenzenes discussed in the previous paragraphs are quite interesting for MOST applications, the applicability of these compounds is somewhat limited by their energy density. The energy difference of a typical azobenzene is usually around 40 kJ/mol, which does not outcompete other systems. In addition, the spectral overlap and the increased molecular weight needed to obtain excellent optical properties have led to a relative decrease in the interest in these compounds [16]. However, the broad tunability and the synthetic availability of different compounds achieved in the last two decades have reactivated their use as MOST candidates. Additionally, the wider scope offered by azoheteroarenes, in which one or both phenyl rings are replaced by a heterocyclic ring, can greatly increase the possibilities by tuning some of the relevant properties. For instance, we have already discussed that the photoisomerization quantum yield should be as high as possible [78,79]. Thus, this parameter has been subject to extensive research. The effect of various solvents, temperatures, and rigid environments has been explored. Zimmerman et al. reported that the n-π* quantum yield is higher than the π-π* one in both directions of conversion (cis-trans/trans-cis) and, in accordance with Birnbaum and Style, the cis-trans conversion is more efficient (cis-trans: 0.39, versus trans-cis: 0.33) [80][81][82]. This should be carefully considered when designing MOST applications based on azobenzenes. On the other hand, the ability of azobenzenes to absorb light in the visible region to induce the photoisomerization has been considered a clear advantage of these compounds [83]. The poor performance of azobenzenes with respect to energy storage has been addressed in different ways. In 2014, Grossman and Kolpak performed a series of density functional theory (DFT) computational studies of azobenzene coupled with carbon nanotubes (CNTs) (Figure 9a). The addition of nanotubes to azobenzene increases the rigidity and conformational hindrance of the structures, so the energy storage per azobenzene increased by up to 30% and the storage lifetimes were greatly extended, in addition to an increased fatigue resistance (a half-life (t1/2) of 33 h was observed for the cis isomer) [84].
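Setting the ~40 kJ/mol figure quoted above against the 0.3 MJ/kg rule of thumb from the Storage Energy Density section makes the limitation explicit; the molar mass of unsubstituted azobenzene (182.2 g/mol for C12H10N2) is an added literature value, not quoted in this text:

$$\rho_E = \frac{\Delta H}{M} \approx \frac{40\ \mathrm{kJ/mol}}{182.2\ \mathrm{g/mol}} \approx 0.22\ \mathrm{MJ/kg}$$

Even taking the 58.6 kJ/mol trans-cis gap quoted earlier, the parent compound only reaches about 0.32 MJ/kg, which is why the templating and rigidification strategies described next aim to raise the stored energy per azobenzene unit rather than simply add mass.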
Later, the same authors extended their DFT study, incorporating azobenzenes functionalized with carbon-based templates such as graphene, fullerene, and β-carotene, identifying various potential molecules with improved properties. Upon modification, the cis-isomer is stabilized by intramolecular hydrogen bonds (Figure 9b), leading to long-term storage lifetimes (t1/2 = 5408 h) [85,86]. In view of these results, different experimental groups carried out the synthesis of such photoswitches, obtaining good results [87,88]. Specifically, they obtained molecules with 11-12 carbon atoms per azobenzene, and the hydrogen bond stabilization of the azobenzenes was confirmed by NMR, FT-IR, and DFT studies. In a different approach, the addition of azobenzene to macrocycles [89] has also been suggested to improve the energy storage capabilities [90]. The use of rings formed by azobenzenes connected through a suitable linker increases the barrier of the back reaction (Figure 9c). In this approach, the formation of the macrocycle contributes to increasing the rigidity of the system. In addition, it is possible to add functional groups to the macrocycle, as done with graphene, allowing hydrogen bonds to be established with the aim of obtaining longer lifetimes for the energy storage. Even if azobenzene-functionalized CNTs have increased the energy density, these systems cannot be deposited as uniform films. To avoid this problem, an azopolymer was prepared to act as a solid-state solar-thermal fuel (STF) [91]. This novel design allows uniform films to be prepared, and the device allowed the temperature of the heat released in the back conversion to be increased by up to 180 °C [86]. Following this breakthrough, different studies have been conducted on azopolymers [92,93] because of the ease of applying these photoswitches over large areas. Additionally, different studies attempted to combine the rigidity obtained by coupling to a polymer with the addition of functional groups that generate non-covalent interactions with the azobenzene, with the aim of stabilizing the azopolymers. Non-covalent interactions such as π-π stacking and hydrogen bonds have also been demonstrated to increase the lifetime of the stored energy. Taking advantage of these interactions, a new mechanism to synthesize azopolymers, controlling the thickness and film morphology by electrodeposition, was described. Unfortunately, when the substitution of the polymers is modified, the energy storage is greatly affected [94]. It is known that preparing azobenzenes with very bulky substituents increases the rigidity and, therefore, the lifetime of the energy storage. However, different and simpler systems continued to be sought in parallel. In 2017, Grossman and co-workers demonstrated that it was not necessary to use templates or even polymers to achieve STF materials with high energy density and thermal stability. In this study, they synthesized various azobenzenes substituted with bulky aromatic rings, obtaining an enthalpy difference between the cis and trans isomers comparable to that of unsubstituted azobenzene. A critical factor in improving the azobenzene properties was to distort the molecule to avoid planarity, demonstrating that small molecules make it possible to generate thin films with excellent charging and cycling properties [95]. Lately, a novel approach involving the use of azoheteroarene photoswitches has been explored.
In various experimental and computational studies, it was discovered that changing the type of heteroaromatic ring or the position of the heteroatom with respect to the azo group had a crucial effect on the photoswitching properties [35,96]. Using this approach, it was possible to prepare molecules with half-lives of days or even years, providing an excellent basis for using these compounds in a MOST system. The increase in the half-life of these photoswitches is due to the formation of an intramolecular hydrogen bond affecting the energy difference between isomers, and thus the energy storage (Figure 9d). However, not all of the studied azoheteroarenes absorb in the visible region, making their use for energy storage problematic. Again, the long list of features that a system should fulfil to be of practical use in MOST technology makes it extremely difficult to select the ideal candidate. However, the ever-growing list of available azoheteroarenes turns these compounds into excellent candidates to absorb and store sunlight in MOST devices. Dihydroazulene-Vinylheptafulvene (DHA-VHF) Another system widely studied in the context of MOST applications is the dihydroazulene/vinylheptafulvene (DHA/VHF) couple. While these compounds have some practical issues that hamper their applied use, they also feature some interesting properties. The optical properties of this system comprise a DHA absorption at ca. 350 nm and a band at 480 nm for VHF, with an isosbestic point at 400 nm and a very low overlap between the two species. The photoisomerization quantum yield is around 0.6, which can be considered a good value for practical applications: clearly lower than the better examples of NBD/QC, but far better in comparison with parent NBD (0.05) or azobenzene (0.33). However, the main drawback of this couple is the short half-life of parent VHF, being only 10 h at 25 °C. This should rule out its use for long-term storage applications, but applications involving daily cycles (charging during the day and releasing at night) can be considered as an alternative. On the other hand, it has shown some promising features in certain application tests. For instance, it has been used in solar and fluidic devices to provide an efficiency of up to 0.13% in non-laboratory experiments, considering the total solar income harvested. In addition, DHA/VHF also has a very high robustness against degradation, featuring less than 0.01% degradation over 70 irradiation/back conversion cycles in toluene solutions [2,17]. Another drawback of this system is the low energy storage capacity of DHA/VHF, which presents an energy difference of ca. 28 kJ/mol, quite modest in comparison with NBD derivatives. The general energy landscape of this couple can be seen in Figure 10. Upon light absorption, DHA isomerizes to VHF through a photoinduced carbon-carbon ring-opening reaction. In the excited state, DHA planarizes, establishing the stereochemical conditions needed to open the ring; this is favored by an increased steric hindrance due to the electronic reorganization. After the photochemical step, two possible isomers can be obtained. The initially formed cis-VHF can further evolve to the thermodynamically stable trans-VHF [38], with both species being in equilibrium across a small energy barrier [97]. This behaviour can facilitate an early back reversion to DHA, decreasing the lifetime of the photoisomer and affecting the energy density at the same time.
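Assuming simple first-order kinetics for the thermal back conversion, the 10 h half-life quoted above translates directly into an overnight self-discharge estimate (the 12 h interval is an illustrative assumption):

$$k = \frac{\ln 2}{t_{1/2}} \approx \frac{0.693}{10\ \mathrm{h}} \approx 0.069\ \mathrm{h^{-1}}, \qquad \frac{[\mathrm{VHF}]_{12\,\mathrm{h}}}{[\mathrm{VHF}]_0} = 2^{-12/10} \approx 0.44$$

In other words, more than half of the charged isomer would revert before a morning release, which is why day/night cycling, rather than seasonal storage, is the natural niche for this couple.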
In addition, the enthalpy of the photoreaction was measured to be ca. 35.2 kJ/mol in a vacuum [2]. Another relevant point is the large solvent effect that has been observed for the DHA/VHF couple. The photoreaction from DHA to VHF is faster in polar solvents than in apolar ones [98]. In contrast, the robustness under different irradiation conditions decreases notably in polar solvents (0.18-0.28% degradation) compared with apolar ones like toluene (0.01%). From these studies, differences in the kinetic parameters have been invoked to explain the stabilization of the charge-separated species that acts as an intermediate during the photochemical transformation [39,98]. On the other hand, DHA is 10 times more soluble in toluene (apolar and aprotic) than in ethanol (polar and protic), and the photoisomerization quantum yield is larger in toluene (0.6) than in ethanol (0.5), being intermediate in acetonitrile (0.55). To sum up these effects, the more polar the solvent, the better the kinetic parameters; in contrast, an additional enhancement of system robustness is found with apolar solvents. Over the years, many different studies have been performed to better understand and improve the DHA capabilities, aiming at a better performance. The effect of vicinal chemical modifications on the photoisomerization from DHA to VHF was explored in the 1990s, usually by adding donor or acceptor electronic groups at the para position of the phenyl ring. This yields a moderate modification of the photoreaction quantum yield, ranging from 0.6 down to 0.3 when increasing the electron-withdrawing strength of the substituent in the para position [98]. More recently, extensive computational analyses have been performed to screen for interesting new candidates. In this work [40], an impressively wide range of chemical modifications was performed on DHA systems to tune some of their properties. The most relevant and effective ones are summarized in Figure 11, whereby the main interest was the substitutions at positions 1-3 and 7, according to the numbering shown for DHA systems. One of the first modifications was to check the importance of the CN groups at position 1: as can be expected according to the mechanism, the elimination of this strong electron-withdrawing group causes a loss of photoswitching capacity, proving that the strong electrophilic behavior of C1 is crucial for the charge transfer intermediate in the photochemical transformation. As elimination abolishes the photoisomerization, some interesting attempts to replace one of the CN groups with other EWGs were carried out [99]. Indeed, the substitution of one CN by an ethoxycarbonyl or carbamoyl group can maintain the photoswitching ability and enhance the energy storage capacity, increasing it by 0.05-0.1 MJ/kg on average, a relevant increment, especially considering the higher molecular weight. As a counterpoint, the predicted ∆G is lower than that found in the parent DHA, facilitating the back conversion. Moreover, with these systems, the solvent dependence seems to be slightly lower, though still appreciable. The stability of VHF and the speed of back reversion can be controlled by modification of positions 2, 3, and 7; here, the effect of donor and electron-withdrawing groups can drastically change some properties. The inclusion of a donor group at C2 and an acceptor group at C3 or C7 has some impact, increasing the lifetime of these derivatives [100,101].
The addition of electron-withdrawing groups (EWGs), like cyano, at position 7 can increase the half-life of VHF by up to six times, in some cases reaching an exceptionally long-lived VHF in acetonitrile. This behavior can be explained by the stabilization of the VHF form, which increases the energy needed to initiate the back-reaction but, in contrast, has a low impact on the energy storage capacity. The inclusion of an amino group at position 3 yields a hydrogen bond between the amino and the electron-withdrawing groups (CN). This seems to stabilize the DHA system, increasing the storage energy, while also blocking VHF in the s-cis-VHF conformer. This means that the s-cis-VHF becomes the most stable isomer of VHF; geometrically, it is thus more similar to the intermediate and, as expected, the back-reaction barrier becomes even lower. This behavior reduces its usability because of the fast reversion, despite the higher energy stored. In contrast, the addition of a donor group (amino) at position 3 and an acceptor group (NO2) at C2 produces an increase in the ∆G of the back-reaction, also lengthening the lifetime of the VHF form [40]. Some other modifications, like the combined effect of adding a 9-anthryl group at positions 2 and 3, were found to increase the storage density to 0.38 MJ/kg, but this facilitates the VHF-to-DHA ring closure within a few seconds [46]. The addition of bulky groups at the ortho position of the phenyl group at C2 can stabilize the cis conformer of VHF, increasing the energetic barrier of the back-reaction [102]. The photoisomerization quantum yields of these ortho-substituted derivatives in acetonitrile are close to the DHA value (0.55), thus improving the lifetime without perturbing the photoisomerization rate. This effect is already present with an iodine atom and is maximized as the size of the added group increases, reaching a 60-times-longer half-life and quantum yields slightly higher than those of DHA, of ca. 0.6-0.7. The last modification we will comment on is the preparation of multi-switch systems: in these, the behavior of two DHA moieties connected through a bisacetylene bridge can differ greatly depending on ortho, meta, or para substitution, with lifetimes ranging from a few hours to a couple of days [102]. In addition, combinations with other MOST candidates have been attempted, offering some promising results, especially the increase in storage density due to the charging of two units; in the case of DHA-NBD, the combined absorption is notably red-shifted and the spectral overlap is decreased [103]. Moreover, the use of an azobenzene derivative together with phenylene-bridged bis-DHA moieties, all included in a macrocyclic structure, can modify the DHA isomerization because the azobenzene isomerization predominates; this can limit the possibilities of combining azobenzenes and DHA moieties [104]. Overall, DHA systems present a few advantages, such as efficient isomerization and good optical properties, as well as a series of drawbacks, including modest energy storage and short lifetimes. Conclusions In this review, we have covered the three most relevant families of compounds that are under investigation as MOST systems: norbornadiene, azobenzene, and dihydroazulene derivatives. Every set of compounds has its own drawbacks and strengths, which should be carefully considered for any specific application.
Conclusions

In this review, we have covered the three most relevant families of compounds under investigation as MOST systems: norbornadiene, azobenzene, and dihydroazulene derivatives. Each set of compounds has its own drawbacks and strengths, which should be carefully considered for any specific application. However, it is also important to recognize that we are still far from finding the ideal MOST candidate. The improvements in molecular design discussed above constitute the basis for the future development of these systems. In addition, molecular modelling and machine-learning strategies can provide fast and valuable input in this direction [105]. A general increase in the performance of the molecules used for solar energy storage is required before this technology can provide an alternative, efficient way of harvesting and storing solar energy and releasing it on demand. For the near future of MOST devices, the exploitation of hybrid strategies (multijunction devices) is the most promising route to improving overall performance.
Role of Ligand Distribution in the Cytoskeleton-Associated Endocytosis of Ellipsoidal Nanoparticles Nanoparticle (NP)–cell interaction mediated by receptor–ligand bonds is a crucial phenomenon in pathology, cellular immunity, and drug delivery systems, and depends strongly on the shape of the NP and the stiffness of the cell. Given this significance, a fundamental question arises: how does the ligand distribution affect the membrane wrapping of non-spherical NPs under the influence of cytoskeleton deformation? To address this issue, in this work we use a coupled elasticity–diffusion model to systematically investigate the role of ligand distribution in the cytoskeleton-associated endocytosis of ellipsoidal NPs for different NP shapes, sizes, cytoskeleton stiffnesses, and initial receptor densities. The model accounts for receptor diffusion, receptor–ligand binding, cytoskeleton and membrane deformations, and changes in the configurational entropy of receptors. By solving this model, we find that the uptake process can be significantly influenced by the ligand distribution. Additionally, there exists an optimal state of such a distribution, which corresponds to the fastest uptake efficiency and depends on the NP aspect ratio and cytoskeleton stiffness. We also find that the optimal distribution usually requires the local ligand density to be sufficiently high in regions of large curvature. Furthermore, the optimal state of NP entry into cells can tolerate slight changes to the corresponding optimal distribution of the ligands, and this tolerance is enhanced as the average receptor density and NP size increase. These results may provide guidelines to control NP–cell interactions and improve the efficiency of targeted drug delivery systems. Recently, Schubertová et al. [33] focused on the specific case of receptors being overexpressed in cancer cells and demonstrated, through molecular dynamics simulations in which receptors react directly with ligands on the NP surface without diffusing, that spherical NPs with uniform ligand distribution yield the fastest wrapping by the cell membrane. When receptors are present at relatively low density, however, NP endocytosis is generally limited by receptor diffusion [4,8,34]. For the receptor-diffusion-mediated endocytosis of NPs, we previously examined the effect of ligand distribution on the wrapping of cylindrical NPs by the cell membrane and identified an almost uniform ligand distribution as the optimum associated with the highest cellular uptake efficiency [35]. The above studies described the effect of ligand distribution on the internalization of NPs with spherical and cylindrical shapes, both of which have surfaces of constant Gaussian curvature. Cylindrical NPs have also been used to investigate how aspect ratio affects cell-NP interactions [7,36,37]. However, for NPs with complex shapes, it is still unclear how a non-constant surface curvature may couple with other factors, such as the distribution pattern of ligands and cytoskeleton remodeling, to influence cellular uptake. Endocytosis comprises a number of biophysical processes by which cells internalize NPs, and it has been recognized that the cytoskeleton plays an essential role in several of them [38].
The molecular basis for the association of the cytoskeleton with endocytosis is that many endocytic proteins are physically linked to the actin cytoskeleton, so that NP entry via endocytosis is essentially accompanied by cytoskeleton remodeling [38][39][40][41][42][43][44]. This point of view has been confirmed by observations of HIV viral particle entry into host cells, where a large number of protein-protein interactions indicate links between the endocytic machinery and the actin cytoskeleton [45,46]. Most recently, taking advantage of today's nanotechnology, an experiment measured the dynamic internalization of a single human enterovirus 71, of size 26 nm, via endocytosis using a force-tracing technique on atomic force microscopy (AFM) [47]. It was found that if cytochalasin B, a cell-permeable mycotoxin that strongly inhibits network formation by actin filaments, is injected into the AFM liquid chamber during the force-tracing measurement, most of the force signal disappears, indicating that wrapping of the NP cannot occur without the participation of the actin filament network [47]. From a mechanics point of view, the involvement of cytoskeleton remodeling in the endocytic process manifests as mechanical resistance through elastic deformation [21][22][23]. At the length scale of about ten nanometers, this deformation of the cytoskeleton during the NP-cell interaction usually follows the linear theory of continuum mechanics, as confirmed by experiments on nanoneedle (radius from 20 to 150 nm) indentation of living cells [48][49][50]. Although tremendous progress in understanding cell-NP interactions has been achieved, the role of ligand distribution in the cytoskeleton-associated endocytosis of non-spherical NPs remains unclear. In this work, we propose a coupled elasticity-diffusion statistical dynamic model to investigate the influence of ligand distribution on the cellular uptake of ellipsoidal NPs, in which the uptake process is driven by the binding energy between diffusive receptors on the cell membrane and ligands on the NP surface, working against resistances from membrane deformation, cytoskeleton deformation, and changes in the configurational entropy of receptors. Figure 1 shows the schematic for the interaction between an ellipsoidal NP and a cell via cytoskeleton-associated endocytosis. The cell is modeled as an elastic half-space covered by a cell membrane embedded with diffusive mobile receptors. The ellipsoidal NP is considered rigid and coated with immobile ligands. For simplicity, we take the NP to be an ellipsoid of revolution with equatorial and polar semi-axes R_a and R_b, as shown in Figure 1a. We assume that the mobile receptors are distributed evenly with constant density ξ_0 prior to engulfment. The ligand density ξ_L(h), expressed as a function of position h along the central axis, can be non-uniform, and the depth of NP engulfment is denoted h_d. Once receptors diffuse to binding sites and bind with the ligands on the NP surface, the receptor density within the contact area becomes identical to the ligand density. The distribution of receptors on the membrane surface then becomes non-uniform, denoted ξ(s, t) at time t.
Materials and Methods

During the cytoskeleton-associated endocytosis of NPs, engulfment is driven by the formation of ligand-receptor complexes, working against the resistance from membrane and cytoskeleton deformations and against changes in the configurational entropy of receptors. The energies that determine the internalization process are: (1) F_1, the favorable binding energy of ligand-receptor bonds; (2) F_2, the unfavorable energy of membrane deformation; (3) F_3, the unfavorable energy attributed to changes in the configurational entropy of receptors; and (4) F_4, the unfavorable energy of cytoskeleton deformation. The total energy for cellular uptake of NPs is the sum of these four contributions (Equation (1)). The binding energy during the wrapping process is obtained by integrating over the contact area A_w between the NP and the cell, where e_RL is the chemical energy released by each receptor-ligand binding event; the axial symmetry of the system allows the infinitesimal area element to be written in terms of the engulfment depth. Following the fluid membrane theory of Canham-Helfrich [51], the energy of membrane deformation during the NP-cell interaction involves the bending modulus κ and the surface tension γ of the membrane. Once the membrane is in close contact with the NP, its mean curvature H becomes equal to that of the corresponding position on the NP surface (see Appendix A for details). To estimate the entropic contribution to the energy associated with the receptor distribution, we adopt a model based on the theory of the ideal gas [4,52]; the unfavorable energy of changes in the configurational entropy of the bonds and free receptors involves the absolute temperature T, of the order of 300 K.
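The display equations for these energy terms were lost in extraction. A plausible reconstruction from the definitions above, following the standard receptor-diffusion wrapping model and Hertz contact theory, is sketched below; these assumed forms may differ from the paper's exact expressions in prefactors and sign conventions:

```latex
% Hedged reconstruction of the energy terms (assumed forms, not verbatim):
F_{\mathrm{tot}} = F_1 + F_2 + F_3 + F_4, \qquad
F_1 = -\,e_{RL}\,k_B T \int_{A_w} \xi_L \,\mathrm{d}A
      \quad \text{(binding, $e_{RL}$ in units of $k_B T$)},
F_2 = \int_{A_w} \left( 2\kappa H^2 + \gamma \right) \mathrm{d}A
      \quad \text{(Canham--Helfrich bending plus tension)},
F_3 = k_B T \left[ \int_{A_w} \xi_L \ln\frac{\xi_L}{\xi_0}\,\mathrm{d}A
      + \int_{\text{free}} \xi \ln\frac{\xi}{\xi_0}\,\mathrm{d}A \right]
      \quad \text{(ideal-gas configurational entropy)},
F_4 = \tfrac{8}{15}\, E^{*} \sqrt{R_a^2/R_b}\; h_d^{5/2}
      \quad \text{(Hertz indentation, effective tip radius $R_a^2/R_b$)}.
```

The Hertzian form of F_4 uses the curvature radius R_a²/R_b at the initial contact point and the combined modulus E* introduced next.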
In cytoskeleton-associated endocytosis, the entry of NPs into cells inevitably deforms the cytoskeleton. Obataya et al. [48] used an atomic force microscope with a long nanoneedle (radius 100 nm to 150 nm) to perform indentations on living cells, and demonstrated that the loading force curve for indentation depths of up to 2 µm remains consistent with the classic Hertz model of contact mechanics. Similarly, Beard et al. [49] used a nanoneedle probe with a radius of only 20 nm and recorded the loading force curve during the indentation of a corneocyte, a cell that is a common target of numerous viruses, including Herpes simplex virus type 1 [53]. That study [49] confirmed that the Hertz model fits the loading force curve well. As the Hertz model is derived for the contact problem of two elastic bodies, these results clearly indicate that living cells behave during indentation as a deformable elastic solid, implying that cytoskeleton deformation plays an important role in resisting NP intrusion. Based on the Hertz contact theory for solids of revolution [54], the deformation energy of the cytoskeleton can be derived in terms of the curvature radius R_a²/R_b at the initial contact point and the combined elastic modulus, defined as 1/E* = (1 − µ_c²)/E_c + (1 − µ_n²)/E_n, where µ_c and E_c are the Poisson ratio and Young modulus of the cell and µ_n and E_n are the corresponding values for the NP. In the limit where NPs are much stiffer than cells, the combined elastic modulus simplifies to E* = E_c/(1 − µ_c²). Differentiating the total free energy in Equation (1) with respect to time yields Equation (8), where µ = 1 + ln(ξ/ξ_0) is the local chemical potential of a receptor, and ξ_L⁺ ≡ ξ_L(h_d) and ξ⁺ ≡ ξ(a⁺, t) denote the receptor-ligand bond density and the receptor density just outside the contact edge, respectively. The contact area is expressed as an integral over the engulfment depth of a shape function f(h) associated with the NP geometry (see Appendix B for details). Receptors on the membrane are driven toward the adhesion region by the density gradient, and the corresponding diffusion flux is proportional to this driving force, with diffusion coefficient D. Owing to membrane fluidity, the free receptors outside the contact region are mobile within the membrane, and the receptor density is governed by the diffusion equation (Equation (12)) [4,52], together with a conservation condition for the receptors on the membrane [4,52]. Substituting the continuity equation into Equation (13) under the boundary conditions ξ(s, t) → ξ_0 and j(s, t) → 0 as s → ∞ yields Equation (14) [4,52], where j⁺ ≡ j(a⁺, t) denotes the flux just in front of the contact edge. Incorporating the conservation relation and integrating by parts, the integral term in Equation (8) takes the form of Equation (15) [4,52]. The first term on its right-hand side represents the energy transported across the front at s = a(t) by receptor diffusion; the second term is precisely the rate of energy dissipation due to receptor diffusion along the cell membrane. In analogy with the Griffith crack-growth criterion, for a power-balanced process the rate of decrease of the free energy should equal the energy dissipated by receptor diffusion [52]; this energy balance at the contact edge gives Equation (16). Given the ligand distribution on the NP surface, the diffusion equation (12) can be solved numerically by the finite difference method, subject to the boundary conditions ξ(∞, t) → ξ_0 and ξ⁺ ≡ ξ(a⁺, t). When the contact area reaches the total spheroid surface area, A_t = πa² = 4π(R_a² + 2R_aR_b)/3, the NP is fully internalized, and the wrapping time is determined.
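A minimal numerical sketch of this last step, in a deliberately simplified setting: one-dimensional diffusion with a frozen contact edge held at a hypothetical depleted density, rather than the full moving-front problem coupled to Equation (16). The diffusivity and far-field density follow the typical values quoted in the Results section; the edge density is an assumption.

```python
import numpy as np

# Minimal sketch: explicit finite differences for receptor diffusion
# along the membrane arc length s, d(xi)/dt = D * d2(xi)/ds2, with the
# density pinned at a hypothetical depleted value xi_edge at a frozen
# contact edge and xi -> xi0 far away.  The full model instead moves
# the edge according to the power balance (Equation (16)); this
# frozen-front version only illustrates the depletion zone and flux.
D = 0.01          # receptor diffusivity, um^2/s (= 1e4 nm^2/s)
xi0 = 50.0        # far-field receptor density, 1/um^2 (0.01 * xi_L0)
xi_edge = 10.0    # hypothetical depleted density at the contact edge
L, N = 5.0, 500                   # domain length (um) and grid cells
ds = L / N
dt = 0.4 * ds**2 / D              # explicit-scheme stability limit
xi = np.full(N + 1, xi0)
xi[0] = xi_edge
for _ in range(20000):            # ~80 s of simulated diffusion
    xi[1:-1] += dt * D * (xi[2:] - 2 * xi[1:-1] + xi[:-2]) / ds**2
    xi[0], xi[-1] = xi_edge, xi0
j_edge = -D * (xi[1] - xi[0]) / ds   # Fickian flux; negative = toward edge
print(f"flux into edge: {abs(j_edge):.3f} receptors/(um*s)")
```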
Results

To simulate the complex situation of non-uniform ligand distribution, we describe the ligand density by an approximately harmonic distribution pattern, in which a dimensionless constant c represents the degree of non-uniformity: the case c = 0 corresponds to a uniform ligand distribution with density ξ_L0. With this construction, the total number of ligands, ξ_L0 A_t, is independent of the constant c. Figure 2 illustrates how the degree of non-uniformity c determines the ligand distribution on NPs of different shapes. For the calculations we adopt typical values of the binding energy of a single bond, e_RL = 15 k_BT [55], bending modulus of the cell membrane κ = 20 k_BT [56], surface tension of the cell membrane γ = 0.005 k_BT/nm² [22], diffusion coefficient of the receptors on the membrane D = 10⁴ nm²/s [4,55], and ξ_L0 = 5000/µm² [4].
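The exact functional form of the harmonic pattern was lost in extraction. The sketch below assumes one plausible choice, a cosine modulation along the symmetry axis, renormalized so that the total ligand number equals ξ_L0·A_t for every c, with a sign convention matching the results reported later (c > 0 concentrates ligands at the poles, c < 0 at the equator). Both the functional form and the sign convention are our assumptions.

```python
import numpy as np

# Hypothetical reconstruction of the harmonic ligand-density pattern.
# Assumed shape: xi_L(h) ~ 1 - c*cos(pi*h/Rb), renormalized so that the
# total ligand number xi_L0 * A_t is the same for every c, as required.
# For c > 0 ligands concentrate at the poles (h = +/-Rb), for c < 0 at
# the equator (h = 0); c = 0 recovers the uniform density xi_L0.

def area_weight(h, Ra, Rb):
    """dA/dh for an ellipsoid of revolution about the polar (h) axis."""
    r = Ra * np.sqrt(1.0 - (h / Rb) ** 2)              # local radius
    drdh = -Ra * h / (Rb**2 * np.sqrt(1.0 - (h / Rb) ** 2))
    return 2.0 * np.pi * r * np.sqrt(1.0 + drdh**2)

def ligand_density(h, c, Ra, Rb, xiL0=5000.0):
    raw = 1.0 - c * np.cos(np.pi * h / Rb)             # assumed harmonic form
    w = area_weight(h, Ra, Rb)
    raw *= np.trapz(w, h) / np.trapz(raw * w, h)       # conserve ligand number
    return xiL0 * raw                                  # 1/um^2

h = np.linspace(-0.999, 0.999, 401) * 90.0             # polar axis grid, nm
xi_L = ligand_density(h, c=0.4, Ra=50.0, Rb=90.0)      # elongated NP example
```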
Influence of Ligand Distribution on the Cellular Uptake of NPs with Different Shapes

We exclude the effects of cytoskeletal deformation and set the initial receptor density to 0.01 ξ_L0 in order to systematically investigate the effect of NP shape on uptake. Figure 3 shows the wrapping time as a function of the degree of non-uniformity of ligand distribution, c, for three NPs of different shapes. The wrapping time initially decreases and then increases as c increases; this trend indicates the existence of an optimal ligand distribution corresponding to the minimum wrapping time. This optimal distribution is influenced by the shape of the NP. For spherical NPs, the optimal ligand distribution approaches a uniform distribution, consistent with previous predictions by Li et al. [35] and Schubertová et al. [33]. However, when the NP shape changes from spherical to oblate, the optimal ligand distribution transitions from uniform to one polarized at the edge of the oblate NP (that is, at h = R_b). For the cellular uptake of elongated NPs, the optimal ligand distribution becomes densely concentrated at both ends of the NP. In each case, for oblate and elongated NPs, a large amount of binding energy from receptor-ligand binding events is required to pay for the high bending energy of the locally strongly deformed cell membrane. Figure 3 also demonstrates that wrapping times differ only slightly over a large range of the constant c; however, when the degree of non-uniformity c is higher or lower than a threshold value, the wrapping process becomes difficult to complete.

Effect of Ligand Distribution on the Cytoskeleton-Associated Endocytosis

We consider the internalization of spherical NPs under the influence of cytoskeleton deformation and determine how the cytoskeleton stiffness affects the optimal ligand distribution associated with the smallest wrapping time. Figure 4 displays the relation between the wrapping time and the degree of non-uniformity of ligand distribution, c, for spherical NPs with a radius of 50 nm entering cells. It can be seen that cytoskeleton deformation significantly affects the uptake. In purely membrane-mediated endocytosis, the optimal distribution of the ligands coated on the spherical NP surface tends to be uniform. With increasing Young's modulus of the cytoskeleton, the optimal ligand distribution becomes dense at the NP ends with large curvature, where the large deformation energy of the cytoskeleton plays a key role in resisting cellular uptake. When NPs are taken up by stiff cells, long wrapping times are needed, since a large amount of elastic deformation energy must be overcome. Similar results for the oblate and elongated NPs can be seen in Figure 5. In all cases, Figures 4 and 5 show that the range of the ligand distribution constant, c, allowing effective NP uptake decreases as the Young modulus of the cytoskeleton increases. These findings imply that the internalization of spherical NPs into a cell with a soft cytoskeleton can not only shorten the wrapping time but also broaden the range of ligand distributions for effective uptake.
Effect of Ligand Distribution on the Cellular Uptake of NPs under Different Initial Receptor Densities

The relation between the wrapping time of NPs and the degree of non-uniformity of ligand distribution, c, for different initial receptor densities is displayed in Figure 6, which further identifies how the ligand distribution influences the wrapping time under different initial receptor densities.
The semi-axis and engulfing pattern are set to R_a = 50 nm and membrane-mediated endocytosis, respectively. It can be seen that the wrapping time decreases as the initial receptor density increases; the role of receptor diffusion therefore becomes negligible at large initial receptor densities. Additionally, there exists a broad range of ligand distribution patterns, determined by the coefficient c, corresponding to an almost identical wrapping time. The range of the coefficient c is almost independent of the initial receptor density, but the corresponding wrapping time decreases as the initial receptor density increases.

Effect of Ligand Distribution on the Cellular Uptake of NPs with Different Sizes

Size-dependent NP endocytosis has been demonstrated in previous studies [4][5][6]. However, how the ligand distribution and NP size couple to influence the cellular uptake of NPs has remained unclear. We consider membrane-mediated endocytosis to address this issue. Figures 7 and 8 plot the wrapping time of NPs with different aspect ratios, namely λ = 1, λ = 5/3, and λ = 5/9, as a function of the degree of non-uniformity of ligand distribution, c, for NPs of different sizes. The optimal ligand distributions for the different NP shapes, namely c = 0 (uniform) for a spherical NP, c < 0 for an oblate NP, and c > 0 for an elongated NP, are thereby determined. The results also show that the shortest wrapping time, associated with the optimal ligand distribution, increases with NP size: for large NPs, a long wrapping time is required to recruit the many mobile receptors needed to bind the ligands on the NP surface.
For the wrapping of large NPs, there exists a very large range of c corresponding to a wrapping time almost identical to the optimal one. In this situation, the resistance to cellular uptake from membrane deformation becomes negligible compared with the energy associated with the receptor distribution, and the long wrapping time results mainly from the large NP surface area that must be wrapped.

Discussion

It has been shown that the optimal ligand distributions of NPs depend on the NP shape and on the cytoskeleton stiffness in cytoskeleton-associated endocytosis. Such optimal ligand distributions become non-uniform when cytoskeleton deformation is taken into account, in contrast to the membrane-mediated endocytosis of spherical or cylindrical NPs [35]. The non-uniform optimal ligand distribution results from the competition between the thermodynamic driving force and the kinetics of receptor diffusion. For ligand densities lower than their optimal values at the local wrapping edge, insufficient binding energy prolongs the wrapping time; for ligand densities higher than their optimal values, receptor recruitment can also increase the wrapping time, owing to the high energy dissipation caused by the large change in the configurational entropy of the receptors. In the cellular uptake of spherical or cylindrical NPs via membrane-mediated endocytosis, the uniform ligand distribution is the optimal one, corresponding to the highest uptake efficiency.
By contrast, during the cellular uptake of ellipsoidal NPs via cytoskeleton-associated endocytosis, the mean curvature of the membrane at the wrapping edge is no longer constant but is determined by the NP shape. In the power balance of Equation (18), the fourth term on the left-hand side is the energy contribution of cytoskeleton deformation, which depends on the engulfment depth. The optimal ligand distribution therefore becomes non-uniform and strongly dependent on NP shape and cytoskeleton stiffness. Bio-inspired methods derived from viruses are suitable for designing drug delivery systems, so a biophysical understanding of NP-cell interactions is urgently needed. For spherical NPs, fast wrapping occurs over a large range of ligand distribution patterns around the uniform one; this finding provides a physical insight into robust virus infection, as opposed to the gene-expression point of view [57,58]. The optimal size (tens of nanometers) [4,6] and shape (sphere) [8] have previously been identified from this physical-optimization standpoint. In this study, we confirm that ligand distribution is another significant factor determining receptor-diffusion-mediated NP uptake into cells. The almost uniform ligand distribution of spherical viruses is possibly controlled by physical evolution and guarantees viral infectivity via receptor-mediated endocytosis. We also examined the critical state of wrapping. Solving Equation (16) gives the receptor density at the contact edge in terms of the Lambert-W function (ProductLog). In the gradient-driven diffusion of mobile receptors along the cell membrane, effective wrapping requires ξ⁺ < ξ_0: when the receptor density in front of the contact edge, ξ⁺, becomes equal to the initial receptor density ξ_0, the wrapping process terminates. Accordingly, the solution of Equation (18) in this critical case, ξ⁺ = ξ_0, yields the critical ligand density in front of the contact edge. If the ligand density at the wrapping edge exceeds this critical value, the wrapping process cannot proceed, owing to the large unfavorable entropic contribution from the change of the receptor configuration. Our study has certain limitations. Although we provide a detailed description of the dependence of the optimal ligand distribution on NP shape and cytoskeleton stiffness, we have treated the NPs as rigid bodies and have not considered whether NP deformation significantly influences the optimal ligand distribution. We also disregard membrane tension [18], the kinetic reaction between receptor and ligand molecules [32,59-61], and the NP concentration [62]. These assumptions allow us to focus on the effect of ligand distribution on the cellular uptake of ellipsoidal NPs via cytoskeleton-associated endocytosis.

Conclusions

We have shown the systematically distinct effects of ligand distribution on the dynamics of cytoskeleton-associated endocytosis of ellipsoidal NPs of different sizes under different initial receptor densities. NPs with the same number of ligands but different distributions can display a wide range of cellular-uptake dynamics. We find that there exist optimal ligand distributions corresponding to minimum wrapping times for NPs of different shapes.
Unlike the findings of previous studies on the cell entry of spherical and cylindrical NPs with different ligand distributions via membrane-mediated endocytosis, these optimal ligand distributions are non-uniform and depend on the NP shape and cytoskeleton stiffness. For example, the optimal distribution favors ligands densely distributed in regions of large local curvature or large cytoskeleton deformation. These findings offer physical insight into NP-cell interactions and may provide guidelines for targeted drug delivery.

Conflicts of Interest: The authors declare no conflict of interest.

Appendix A

To describe the mean curvature at an arbitrary point p corresponding to engulfment depth h, a Cartesian coordinate system is established at the center of the ellipsoid. As shown in Figure A1, the mean curvature at the point p(x_p, y_p) of the ellipsoid is the average of the two principal curvatures c_1 and c_2, H = (c_1 + c_2)/2. Here c_1 is the principal curvature of the blue intersection line at point p in Figure A1, and c_2 is the principal curvature of the red intersection line orthogonal to it. The principal curvature along the symmetry plane is given by the standard plane-curve formula

$$c_1 = \frac{x_p'(\theta)\,y_p''(\theta) - x_p''(\theta)\,y_p'(\theta)}{\left[x_p'(\theta)^2 + y_p'(\theta)^2\right]^{3/2}},$$

where the curve in the symmetry plane is parameterized by the phase angle θ (Equation (A3)). Substituting Equation (A3) into Equation (A2) yields c_1 in terms of the aspect ratio λ = R_a/R_b. The other principal curvature c_2 at point p is the reciprocal of the principal curvature radius l_pn (Figure A1), where the line pn depends on an angle α fixed by the position of the point p through

$$\left.\frac{dy}{dx}\right|_{x = x_p,\, y = y_p} = \tan\!\left(\frac{\pi}{2} + \alpha\right) = -\cot\alpha.$$

Combining Equations (A6) and (A7) with sin²α + cos²α = 1 gives an expression for sin²α; hence the principal curvature c_2 follows from Equations (A5) and (A8). Substituting Equations (A9) and (A4) into Equation (A1) then yields the mean curvature as a function of the engulfment depth h. Existing results for the curvature of an ellipsoid, based on a different analysis, can be found in [63].

Appendix B

As shown in Figure A1, the infinitesimal area of the ellipsoidal NP wrapped by the cell membrane can be expressed in terms of the infinitesimal arc length √(dx² + dy²). Differentiating Equation (A3) with respect to the engulfment depth h, rewriting the infinitesimal wrapping area accordingly, and inserting Equations (A12) and (A3) into Equation (A13), the subsequent integration yields the shape function f(h) used in the main text.
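A minimal numerical sketch of the Appendix A construction, assuming the parameterization x = R_a sin θ, y = R_b cos θ for the profile curve (our assumed convention); for R_a = R_b = R both principal curvatures reduce to 1/R, which serves as a built-in sanity check.

```python
import numpy as np

# Principal and mean curvatures of an ellipsoid of revolution with
# equatorial semi-axis Ra and polar semi-axis Rb, rotated about the
# polar axis.  c1 is the in-plane (meridional) curvature from the
# standard plane-curve formula; c2 is the azimuthal curvature 1/l_pn
# from the surface-of-revolution geometry.
def curvatures(theta, Ra, Rb):
    g = Ra**2 * np.cos(theta)**2 + Rb**2 * np.sin(theta)**2
    c1 = Ra * Rb / g**1.5                  # meridional principal curvature
    c2 = Rb / (Ra * np.sqrt(g))            # azimuthal principal curvature
    return c1, c2

theta = np.linspace(1e-3, np.pi - 1e-3, 200)
c1, c2 = curvatures(theta, Ra=50.0, Rb=90.0)   # elongated NP, nm
H = 0.5 * (c1 + c2)                            # mean curvature, 1/nm
# sanity check: a sphere of radius 50 nm gives H = 1/50 everywhere
assert np.allclose(sum(curvatures(0.3, 50.0, 50.0)) / 2, 1 / 50.0)
```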
The Apparent Host Galaxy of PKS 1413+135: HST, ASCA and VLBA Observations PKS 1413+135 (z = 0.24671) is one of very few radio-loud AGN with an apparent spiral host galaxy. Previous authors have attributed its nearly exponential infrared cutoff to heavy absorption but have been unable to place tight limits on the absorber or its location in the optical galaxy. In addition, doubts remain about the relationship of the AGN to the optical galaxy, given the observed lack of re-emitted radiation. We present new HST, ASCA and VLBA observations which throw significant new light on these issues. The HST observations reveal an extremely red color (V − H = 6.9 mag) for the active nucleus of PKS 1413+135, requiring both a spectral turnover at a few microns due to synchrotron aging and a GMC-sized absorber. We derive an intrinsic column N_H = 4.6^{+2.1}_{-1.6} × 10^{22} cm^{-2} and covering fraction f = 0.12^{+0.07}_{-0.05}. As the GMC is likely in the disk of the optical galaxy, our sightline is rather improbable (P ~ 2 × 10^{-4}). The properties of the GMC are typical of GMCs in our own galaxy. The HI absorber appears centered 25 milliarcseconds away from the nucleus, while the X-ray and nearly all of the molecular absorbers must cover the nucleus, implying a complicated geometry and cloud structure, with a molecular core along our line of sight to the nucleus. Interestingly, the HST/NICMOS data require the AGN to be decentered relative to the optical galaxy by 13 ± 4 milliarcseconds. This could be interpreted as suggesting an AGN located far in the background of the optical galaxy, but it can also be explained by obscuration and/or nuclear structure, which is more consistent with the observed lack of multiple images.

Introduction

Despite two decades of observations in the radio, infrared, optical and X-rays, the unusual flat-spectrum, radio-loud AGN PKS 1413+135 remains a puzzle. The source was first classified as a "red quasar" by Rieke et al. (1979), and Bregman et al. (1981) and Beichman et al. (1981) presented the first analyses of its broadband spectrum, showing an extreme, essentially exponential cutoff just blueward of the thermal infrared. Subsequent observations have shown that its appearance changes radically as one observes in progressively bluer bands through the near-infrared and optical. In the K band, the source is highly polarized (16 ± 3%; Stocke et al. 1992) and shows a featureless spectrum, both properties typical of BL Lacertae (BL Lac) objects, a classification first applied to PKS 1413+135 by Bregman et al. (1981) and Beichman et al. (1981). Similarly, ground-based imaging in the H band reveals a dominant nuclear point source surrounded by faint nebulosity (Lamer et al. 1999). In the optical, however, the spectrum of the object is red, with only stellar features and a very weak [OII] line. Optical images obtained with both ground-based telescopes and HST reveal a clear spiral galaxy but show no evidence of an active nucleus (McHardy et al. 1991, 1994). A variety of evidence points to heavy absorption as the cause of this spectral cutoff. Einstein observations require an absorbing column of N_H > 2 × 10^{22} cm^{-2}. Radio observations reveal a strong, redshifted 21 cm HI absorption line (Carilli et al. 1992) and a rich variety of molecular species, including OH, 12CO, 13CO, HCN, HCO+, HNC, and CN (Wiklind & Combes 1995, 1997; Kanekar & Chengalur 2002).
An optical HST image also reveals a prominent dust lane across the disk midplane (McHardy et al. 1994). With such a large absorbing column within a powerful AGN's host galaxy, one might expect to see evidence for a bright, reradiated thermal continuum or near-IR emission lines (e.g., NGC 1068; Thompson, Lebofsky & Rieke 1978; Rotaciuc et al. 1991). Yet the object's broadband spectrum shows no evidence of a thermal "hump" (Bregman et al. 1981; Beichman et al. 1981), and near-infrared spectroscopy has found no evidence for emission lines. The resulting picture of PKS 1413+135 is both puzzling and incomplete. It was in the light of these mysteries that Stocke et al. (1992) proposed that the AGN might be background to the optical galaxy, and perhaps also amplified by gravitational lensing. But confirmatory evidence for this hypothesis has been difficult to come by. Ground-based images reveal that the nuclear point source appears centered within the optical galaxy to within 0.05-0.1″ (Lamer et al. 1999), and radio VLBI imaging (Perlman et al. 1994) shows no evidence of double images, instead revealing an arcuate morphology reminiscent of wide-angle-tail radio sources, but with a total linear size of only 200 pc. However, as pointed out by Perlman et al. (1996), these arguments are not entirely decisive, particularly given the large scale height of the optical galaxy. Here we report new HST, ASCA and VLBA observations of PKS 1413+135, which shed considerable light on the nature of the host galaxy and absorbing medium. In Section 2, we discuss the details of the observations themselves and our data reduction procedures. In Section 3, we present the new HST/NICMOS image and analyze it along with an archival HST/WFPC observation. In Section 4, we discuss new X-ray observations of PKS 1413+135. In Section 5, we discuss new VLBA observations, in the redshifted 21 cm line as well as in the continuum at 2 cm and 0.7 cm (15 GHz and 43 GHz). Section 6 discusses the overall impact of these results, particularly as they affect our picture of the optical galaxy and its relationship to the AGN. Throughout this paper, we assume a redshift of z = 0.24671 for the optical galaxy associated with PKS 1413+135, as derived from the redshifted HI absorption (Carilli et al. 1992). We assume H_0 = 60 km s^{-1} Mpc^{-1} and Ω_tot = 1 throughout, which gives a map scale of 1 arcsecond = 4.05 kiloparsecs.
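As a quick check of the adopted scale, a minimal sketch in Python using astropy, assuming an Einstein-de Sitter cosmology (matter only, no dark energy) as the realization of Ω_tot = 1; that interpretation of Ω_tot is our assumption.

```python
# Verify the quoted map scale at z = 0.24671 for H0 = 60 km/s/Mpc,
# assuming an Einstein-de Sitter universe (Om0 = 1, Ode0 = 0).
from astropy.cosmology import LambdaCDM
import astropy.units as u

cosmo = LambdaCDM(H0=60, Om0=1.0, Ode0=0.0)
z = 0.24671
scale = cosmo.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec)
print(scale)   # ~4.06 kpc / arcsec, consistent with the quoted 4.05
```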
HST Observations

We observed PKS 1413+135 with HST/NICMOS on 25 August 1998, using the F160W filter (corresponding roughly to the H band) and the NIC1 camera. The resulting image has a scale of approximately 0.0432 arcseconds per pixel, and the total integration time was 5120 s. To maximize resolution and minimize the effect of known instrumental problems such as grot (Sosey & Bergeron 1999), bad columns and warm pixels, the NICMOS observations were dithered in a square, four-position pattern with offsets of 1.5″ between pointings. We obtained from the archive WF/PC 1 observations of PKS 1413+135, taken by I. McHardy and collaborators on 26 December 1992 using the Planetary Camera mode, which has a scale of 0.046 arcseconds per pixel. The observation was taken with the F555W filter, corresponding roughly to the V band. The total integration time was 2523 s, split between three exposures offset from one another by ∼0.5″. The WF/PC observations were analyzed extensively by McHardy et al. (1994), and we do not attempt to repeat their analysis, except to compare the V and H band images and create an optical/near-IR color image. We reduced both HST datasets in IRAF using the best recommended flat fields, darks, biases and illumination correction images. Unequal pedestal effects in the NICMOS data were eliminated with UNPEDESTAL. The dithered NICMOS images were combined using DRIZZLE; in the process, we corrected the image for the slightly rectangular NICMOS pixels and for geometric distortion using the best available correction files. The dithered WF/PC images were first registered, then combined, using CRREJ. We then smoothed the resulting image using GAUSS, assuming a Gaussian of σ = 3 pixels. Flux-calibrated images were obtained from the reduced images using SYNPHOT. The images were rotated so that North is along the y axis using header information and IMLINTRAN in IRAF. The NICMOS image is shown in Figure 1, with the panels showing two stretches meant to emphasize, respectively, the unresolved AGN and the galactic structure. Figure 2 shows a contour map of the nuclear regions of the NICMOS image. The WF/PC image is shown in Figure 3.

X-ray Observations

The ASCA observation of PKS 1413+135 was carried out on 24-25 July 1998. The SIS detectors were operated in 1-CCD mode, and all data were converted to BRIGHT mode. The GIS detectors were operated in the standard PH-nominal mode. We used standard ASCA data selection, which includes rejection of data taken during SAA passages and when the geomagnetic rigidity was lower than 6 GeV c^{-1}. The only exception to the standard screening criteria was that we accepted only data taken when the source was at least 20° from the Earth limb. This is because we expected 1413+135 to be a faint X-ray emitter and wanted to minimize the effect of any potential contamination from the Earth's atmosphere. We extracted photon data from the resulting X-ray images within circles of 3.5′ and 3′ radius for the GIS and SIS detectors, respectively, selecting the SIS data corresponding to grades 0, 2, 3, and 4. For background, we used regions away from the target, avoiding any other obvious point sources. For all instruments, the net exposure time was about 36000 seconds after the above selection criteria were applied. The net count rates were 0.0080 ± 0.0012 ct s^{-1} for SIS0, 0.0060 ± 0.0012 ct s^{-1} for SIS1, 0.0062 ± 0.0011 ct s^{-1} for GIS2, and 0.0118 ± 0.0012 ct s^{-1} for GIS3. The count rates for SIS0, SIS1 and GIS2 match well; however, the count rate for GIS3 is somewhat higher. The most likely reason for this discrepancy is that the location of the source on GIS3 was closer to the optical axis than on GIS2, SIS1 or SIS0, so that the observing efficiency was higher in GIS3. For the subsequent spectral fitting, we grouped the data to have at least 20 counts in each PH bin. As expected, the source was quite faint: the 2-10 keV flux inferred from the data using the best-fit absorbed power-law model described below was ∼9 × 10^{-13} erg cm^{-2} s^{-1}. Needless to say, with such low count rates there was no indication of variability during the ASCA observations. For the X-ray spectral fitting, we prepared the response matrices using the standard ASCA tools, including the sisrmg tool for generation of the redistribution matrices for the SIS detectors and the ASCAarf tool for preparation of the effective area files. The results we report supersede those of Sugiho et al. (1999), which included an earlier analysis of the same data.
Redshifted HI Line Observations

VLBA observations of PKS 1413+135 in the redshifted HI line were carried out on 8 July 1998. All ten antennas of the VLBA were used, for 12 hours of observing time. The observing frequency was 1.137319 GHz, corresponding to the frequency of the HI 21 cm line redshifted to z = 0.24671 (Carilli et al. 1992). Spectral-line mode was used, with 256 channels of frequency width 15.625 kHz, corresponding to a velocity width of 4.1 km/s. The beam of the VLBA was 21.53 × 14.55 milliarcseconds in PA −7.57°. The data were correlated at the VLBA correlator in Socorro, NM. Fringe-fitting, calibration and mapping were done in AIPS, using a point-source model for initial fringe fitting. A priori amplitude calibration was done using the gain and system temperature curves for each station, yielding correlated flux densities. Maps were made in each channel; a channel 0 continuum map was made and then subtracted from each channel to produce the final, continuum-subtracted cube. Hybrid-mapping procedures were started using clean components from the 18 cm image of Perlman et al. (1996) as a starting point for initial phase calibration. In subsequent iterations of self-calibration, we allowed first the phase and then both amplitude and phase to vary. The cumulative line profile from these VLBA observations was discussed in a different context by Carilli et al. (2001), and we refer the reader to that paper for a discussion of the profile. That paper did not, however, present absorption maps, which are the subject of our discussion in §5.

High-frequency Continuum Observations

VLBA observations of PKS 1413+135 at 15 and 43 GHz were carried out on 16 July 1995. All ten antennas of the VLBA were used. The data were obtained in continuum mode, with 128 MHz bandwidth; both left and right circularly polarized data were obtained. These observations were part of a campaign of twice-yearly monitoring designed to measure proper motions and high-frequency spectral structure in PKS 1413+135; this time block also included 8 and 22 GHz observations which will be discussed, along with a detailed analysis of the source structure, spectral morphology and proper motions, in Langston et al. (2002). The beam of the VLBA at 15 GHz was 0.96 × 0.49 milliarcsec in PA −4.43°, while at 43 GHz it was 0.34 × 0.19 milliarcsec in PA 0.87°. The data were correlated at the VLBA correlator in Socorro, NM. Fringe-fitting, calibration and mapping were done in AIPS, using a point-source model for initial fringe fitting. A priori amplitude calibration was done using the gain and system temperature curves for each station, yielding correlated flux densities. Hybrid-mapping procedures were started using a point-source model for initial phase calibration; in subsequent iterations of self-calibration, we allowed first the phase and then both amplitude and phase to vary. Uniform weighting (Robust = −1 in IMAGR) was used to make the final images.
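As a quick consistency check of the spectral-line setup described above, the channel width in velocity units follows from Δv = c Δν/ν:

```python
# Channel width of the redshifted HI observation in velocity units.
c_kms = 299792.458            # speed of light, km/s
nu_obs = 1.137319e9           # observing frequency, Hz
d_nu = 15.625e3               # channel width, Hz
print(c_kms * d_nu / nu_obs)  # ~4.12 km/s, matching the quoted 4.1 km/s
```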
Results from the HST Observations

As can be seen from Figures 1 and 2, the dominant feature in the NICMOS image is the AGN point source, which is so bright that its first Airy ring is roughly twice as bright as the galaxy at the same radius. This is quite different from the situation in the WF/PC image (Figure 3; McHardy et al. 1994), where no evidence of an unresolved source is seen. Also marked on Figure 1 is a faint companion galaxy about 6″ away, noted by Lamer et al. (1999).

Isophotal Analysis of the NICMOS Image

We extracted isophotes for the host galaxy of PKS 1413+135 using the IRAF tasks ELLIPSE, BMODEL and IMCALC. This extraction and analysis was done on the unrotated image, in order to minimize any effects of rotating the image to the North-up configuration. The results of this procedure are shown in Figures 4a-e. We exclude from our analysis isophotes with semi-major axis a < 0.3″ because of the extreme brightness of the unresolved AGN source (Figure 1, left panel). Indeed, the innermost of the isophotes beyond 0.3″ still shows a significant effect from the first Airy ring, and there is also evidence for a second, and even a third, Airy ring in the isophotes within 0.7″. Figure 4a shows the isophotal profile of the optical galaxy. We fit this profile to an exponential model of the form log10(ADU) = b + m·a, with best-fit values b = 2.385 and m = −1.072. As can be seen, the fit to this model is quite good at 0.75″ < a < 2″. At smaller radii, the galaxy's profile visibly flattens, such that the model overshoots. This is a typical characteristic of spiral galaxies with a significant bulge; the presence of a significant bulge at a < 0.75″ (i.e., 3 kpc) was in fact noted by McHardy et al. (1994) in their analysis of the WF/PC 1 data. We have marked the division between the bulge and disk at a = 0.75″ in Figure 4a with a dashed vertical line. The large size of the bulge suggests that the optical galaxy is a fairly early-type spiral, perhaps an Sa, as suggested by McHardy et al. (1994). The disk itself is rather extensive: at a > 2″ (8 kpc) the galaxy's profile flattens yet further, such that the model again overshoots. This might be a remnant of past interactions with companions such as the one seen in Figure 1. The bulge can also be seen in Figure 4b, where we plot the isophote ellipticity: ε varies from near zero at a < 0.4″ to ∼0.75 at a > 3″, increasing nearly monotonically with a. As Figure 4c shows, the dependence of isophotal PA on semi-major axis appears weak. This is not at all atypical of bright spiral galaxies seen at large inclination angles (Courteau 1996).
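A minimal sketch of the profile fit just described. The surface-brightness values below are synthetic placeholders built around the quoted best fit, not real isophote measurements; the actual fit used the ELLIPSE output over 0.75″ < a < 2″.

```python
import numpy as np

# Straight-line fit in log10(ADU) vs semi-major axis a, i.e. an
# exponential disk.  Data points are synthetic, generated around the
# quoted best fit (b = 2.385, m = -1.072) with small noise.
rng = np.random.default_rng(0)
a = np.linspace(0.75, 2.0, 12)                        # arcsec
log_adu = 2.385 - 1.072 * a + rng.normal(0, 0.02, a.size)
m, b = np.polyfit(a, log_adu, 1)                      # slope, intercept
h_disk = 1.0 / (abs(m) * np.log(10.0))                # scale length, arcsec
print(f"b = {b:.3f}, m = {m:.3f}, h = {h_disk:.2f} arcsec")
# at 4.05 kpc/arcsec this scale length corresponds to ~1.6 kpc
```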
Figures 4d and 4e plot the location of the centroid of each isophote; in those plots, the location of the unresolved point source is denoted by a dashed line. It is particularly interesting to examine the points within the inner 0.75″, i.e., the bulge. One would expect the location of the point source to coincide with the centroid of the bulge isophotes (the disk isophotes might be affected by patchiness in the disk and/or dust obscuration). However, as can be seen, there appears to be a significant offset. We take the most conservative estimate of this offset, generated by the isophotes at 0.3-0.5″, which yields a decentering of 0.013″ ± 0.004″, or 53 ± 16 pc at z = 0.24671, relative to the isophote centroids (as can be seen in Figure 4e, we find even larger decentering at larger values of a). The X isophotal centers show much less significant (if any) evidence of an offset. The offset can just be seen in Figure 2, which shows that the nucleus appears just northeast of the center of the galaxy isophotes (marked by a cross). This is quite surprising, as all previous images of this object have shown the AGN to be well centered within the optical galaxy (see Stocke et al. 1992; Lamer et al. 1999); those images were, however, ground-based and could not have detected a decentering of 0.013″. We tried several different permutations of initial parameters for the isophote fitting, but this effect remains present. An off-center AGN could indicate either that the AGN is background to the optical galaxy, or alternatively, if the AGN is hosted by the optical galaxy, that there is significant nuclear structure aligned with the AGN or radio lobes, perhaps augmented by the effects of dust, which is not completely absent at H band (A_H/A_V = 0.176 for R_V = 3.1; Mathis 1991). We discuss both possibilities in §6.

WF/PC Image and Optical/Near-IR Colors

As shown by Figure 3, a rather different picture of this system is seen in the optical than in the infrared. The archival WF/PC image shows no evidence for a central point source; instead, the most prominent feature in the image is a dust lane more than 4 arcseconds long, extending roughly along the disk plane of the galaxy and traversing the nuclear region. The dust lane appears clearly resolved, with a width of ∼0.25″ (1 kpc). The nuclear regions appear distinctly peanut-shaped, with a central bar extending for over 0.5″ perpendicular to the dust lane. Interestingly, the orientation of the nuclear bar corresponds fairly closely to that of the milliarcsecond radio structure. By combining the WF/PC and NICMOS images, we constructed a V − H color image. To do this, we resampled the NICMOS image to 0.046″/pix and registered the two images by assuming that the unresolved point source seen in the NICMOS image corresponds to the center of the nuclear bar seen in the WF/PC image (which also corresponds to the middle of the dust lane). Even though this assumption is not supported by an image in an intermediate band, it is consistent with the large absorbing column required by the Einstein data as well as with the HI and other radio absorption-line observations (Carilli et al. 1992; Wiklind & Combes 1997; see also §5). The resulting V − H image is shown in Figure 5. This image has two dominant features. First, the disk midplane is considerably redder than other regions of the galaxy, with typical values of V − H ∼ 2-2.8, compared to V − H ∼ 1-1.5 elsewhere. This is consistent with the presence of dust, as noted above and in McHardy et al. (1994). Interestingly, the dust disk does not appear in the V − H image as a sharp feature, but as a more gradual one, perhaps indicating a patchy distribution of dust in the optical galaxy's disk. There is also significant variation in V − H color along the disk midplane, with generally redder values closer to the nucleus; this property is shared with the small companion galaxy 6″ away, which also appears to have a 'disky' morphology and somewhat redder colors in its midplane. Second, the nucleus is extremely red (V − H = 6.9), four magnitudes redder than any other feature in the map. Given the lack of a nuclear point source in the optical, it is useful to note that since the central 'bar' is the brightest feature in the WF/PC image, any alternate choice for registering the nucleus would yield an even more extreme color for the AGN. This speaks to either extreme absorption, an infrared spectral cutoff, or both. In fact, the spectral cutoff implied by the V − H color is so extreme (α > 2.3 for S_ν ∝ ν^{−α}) that no synchrotron aging mechanism can account for the observed spectral cutoff by itself, without resorting to an unphysical exponential cutoff in the particle distribution. Thus we are forced to conclude that a significant, and probably dominant, factor in the shape of the infrared spectrum of PKS 1413+135 is extinction. This conclusion is supported by the large column implied by the X-ray spectrum (Stocke et al. 1992; §4). However, it is also difficult to account for the observed nuclear spectrum by reddening alone, given a normal extinction law. This difficulty was first pointed out by Beichman et al. (1981), without the benefit of the X-ray data or a high-resolution near-IR image; given the extreme color we find for the nucleus, it is even more acute now. The most consistent explanation is a rollover due to synchrotron aging combined with extinction, as first advocated by Stocke et al. (1992). If, for example, the intrinsic nuclear spectral index were to steepen by ∆α = 0.5 in the neighborhood of 3-5 microns (∆α = 0.5 is predicted for synchrotron aging with continuous reinjection of electrons; Meisenheimer & Heavens 1989), then our data require an excess of 3 mag of extinction at the position of the nucleus. If one then assumes a normal extinction law (Mathis 1991) and R_V = 3.1, the required column is N_H = 5.7 × 10^{21} cm^{-2}, assuming a covering factor f = 1. A more likely value for the covering factor, however, is ∼0.1-0.2, given the value of N_H we derive in §4 for the X-ray absorption (see also Stocke et al. 1992); this implies 10-30 mag of extinction along our sightline to the AGN. These conclusions would be helped significantly by an HST image in an intermediate band, to confirm the spectral slope and compare in detail with extinction laws.
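The extinction-to-column conversion above can be reproduced with the common Galactic gas-to-dust ratio; the exact coefficient the authors adopted is our assumption.

```python
# N_H from visual extinction, assuming the standard Galactic ratio
# N_H ~ 1.9e21 cm^-2 per magnitude of A_V (our assumed coefficient).
A_V = 3.0                 # excess extinction at the nucleus, mag
N_H = 1.9e21 * A_V        # cm^-2
print(f"N_H = {N_H:.1e} cm^-2")   # ~5.7e21, matching the quoted column
```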
Given the extreme V − H color derived above, we are forced to conclude that a significant, and probably dominant, factor in the shape of the infrared spectrum of PKS 1413+135 is extinction. This conclusion is supported by the large column implied by the X-ray spectrum (Stocke et al. 1992; §4). However, it is also difficult to account for the observed nuclear spectrum by reddening alone, given a normal extinction law. This difficulty was first pointed out by Beichman et al. (1981), without the X-ray data or a high-resolution near-IR image; given the extreme color we find for the nucleus, it is even more acute now. The most consistent explanation is a rollover due to synchrotron aging combined with extinction, as first advocated by Stocke et al. (1992). If, for example, the intrinsic nuclear spectral index were to steepen by ∆α = 0.5 in the neighborhood of 3-5 microns (∆α = 0.5 is predicted for synchrotron aging with continuous reinjection of electrons; Meisenheimer & Heavens 1989), then our data require an excess 3 mag of extinction at the position of the nucleus. If one then assumes a normal extinction law (Mathis 1991) and R_V = 3.1, the required column is N_H = 5.7 × 10^21 cm^−2, assuming a covering factor f = 1. A more likely value for the covering factor, however, is ∼0.1-0.2, given the value of N_H we derive in §4 for the X-ray absorption (see also Stocke et al. 1992). This implies 10-30 mag of extinction along our sightline to the AGN. These conclusions would be helped significantly by an HST image in an intermediate band to confirm the spectral slope and compare in detail to extinction laws.

ASCA Observations

PKS 1413+135 has been the target of several X-ray observations. The source was quite faint for both Einstein and ROSAT (McHardy et al. 1994), where only photons at energies > 1 keV were observed (only about 15 above background in each case). However, neither Einstein nor ROSAT had significant sensitivity above ∼3 keV, where most of the observable X-ray emission for such a heavily absorbed source would be. It was for this reason that we observed PKS 1413+135 with ASCA. For the purposes of this paper, ASCA is essentially a spectrometer only, as its angular resolution is ∼1 arcminute. We show in Figure 6a the X-ray spectrum of PKS 1413+135 extracted from both the SIS and GIS data. In Figure 6b we show contour ellipses in the (N_H, α_x) plane at 68%, 95%, and 99% confidence. The model fitted in Figure 6 is a power law with two components of Morrison & McCammon (1983) absorption. We fix the Galactic absorption at the value indicated by the survey data of Stark et al. (1992), N_H(Galactic) = 2.3 × 10^20 cm^−2, and assume that the dominant absorption comes from the optical galaxy at z = 0.24671. The best-fit parameters for this model are α_x = 0.66 ± 0.40 and intrinsic N_H = 4.6 (+2.1/−1.6) × 10^22 cm^−2, and the goodness of fit is χ²_ν = 1.068 (errors are quoted at 90% confidence). These parameters agree well with the parameters fitted to the Einstein data by Stocke et al. (1992), as well as those quoted by Sugiho et al. (1999) for an earlier fit to these data. The X-ray spectral index is typical of those seen in both OVV quasars and low-frequency peaked BL Lacs (Urry et al. 1996; Sambruna et al. 1999). The X-ray flux was F(2-10 keV) = 9 × 10^−13 erg s^−1 cm^−2, and the derived luminosity was L(2-10 keV) = 2.6 × 10^44 erg s^−1. Thus in this observation PKS 1413+135 was fainter by a factor ∼5 than seen in previous X-ray observations.
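The extinction-column bookkeeping above follows from the standard Galactic gas-to-dust ratio. A minimal sketch, assuming the conventional N_H/A_V ≈ 1.9 × 10^21 cm^−2 mag^−1 (an assumed conversion, but one consistent with the numbers quoted in the text):

```python
NH_PER_AV = 1.9e21   # assumed Galactic gas-to-dust ratio, cm^-2 per mag of A_V

# 3 mag of excess extinction at the nucleus -> required column (f = 1):
A_V = 3.0
print(f"N_H ~ {A_V * NH_PER_AV:.1e} cm^-2")        # ~5.7e21, as quoted

# Conversely, the intrinsic X-ray column (best fit and 90% bounds) implies,
# for the patches that do cover the nucleus:
for NH in (3.0e22, 4.6e22, 6.7e22):
    A_V_los = NH / NH_PER_AV
    A_H_los = 0.176 * A_V_los                      # A_H/A_V = 0.176 (Mathis 1991)
    print(f"N_H = {NH:.1e} -> A_V ~ {A_V_los:.0f} mag, A_H ~ {A_H_los:.0f} mag")
# i.e., roughly the quoted 10-30 mag at V, and several mag even at H band.
```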
It is worth noting that any 2-10 keV flux figure obtained from the ROSAT and Einstein observations is based almost completely on extrapolation, so the actual variability could be somewhat different in magnitude. However, as was noted by Stocke et al. (1992), Stevens et al. (1994), Perlman et al. (1996), and Lamer et al. (1999), this source is highly variable in both the radio and optical, so variations of a factor 5 in the hard X-rays should not be seen as surprising. The large column required by all the X-ray observations of PKS 1413+135 is quite consistent with the observed optical absorbing column (§3.2) if the absorbing material is patchy, with covering fraction f = 0.12 (+0.07/−0.05) (the errors are at 90% confidence and are dominated by the error in the column fit to the ASCA data). This suggests that the absorbing material covers only ∼1/2-1/4 of a WF/PC pixel, i.e., ∼50 parsecs at z = 0.24671. Such a size, patchiness, and absorbing column would not be atypical of giant molecular cloud complexes in our own galaxy (e.g., Orion; Green & Padman 1993). Given that the observed CO excitation temperature (∼10 K; Wiklind & Combes 1995, 1997) is more typical of outer-galaxy GMC complexes than a nuclear cloud (Maloney 1990), it is then useful to point out that our sight-line to PKS 1413+135 remains highly unusual. Indeed, if one assumes projected dimensions of ∼1 × 15 kpc for the dust lane, the probability of observing the AGN projected behind a ∼50 × 50 pc GMC complex is ∼2 × 10^−4. Since PKS 1413+135 has unique properties for a Parkes radio source, this low inferred probability is consistent with the very small percentage of similar sources found in bright radio surveys. If indeed a large amount of absorbing material were present in the nuclear regions, we would expect to see a bright, but narrow, Fe Kα line. As can be seen (Figure 6a), no line is observed, although due to the low signal to noise our upper limit on equivalent width is quite modest: 500 eV. Our nondetection of this line is consistent with the absorbing material being either well out in the disk of the optical galaxy, or far in the foreground compared to the AGN (the so-called background AGN hypothesis; cf. §6).

HI Absorption Observations

The existence of a redshifted 21 cm HI line in the spectrum of PKS 1413+135, discovered by Carilli et al. (1992), was a second important link implicating significant absorption as the cause of the IR rollover. In the light of evidence indicating a patchy absorbing column and a significantly resolved radio source with a size corresponding to ≲100 milliarcsec, i.e., ∼2 WF/PC pixels, it is quite useful to look at any spatial structure in the radio HI absorber. In Figure 7, we show contour maps of the absorbing material in four of the 256 channels in our VLBA observations, corresponding to a range of 16 km/s centered around the frequency of the HI line at z = 0.24671 (Carilli et al. 1992). All panels of Figure 7 have the channel 0 image shown in greyscale. The nucleus is shown at (0,0) in all four panels. No other significant features were found in the image for these or any other channels, although noise is a significant issue. For comparison, the single-dish observation of Carilli et al. (1992) found a FWHM of 18 km s^−1, consistent with the four channels found in these observations, but suggesting that some less-obscured regions may exist in outlying channels, below the noise level.
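As a brief aside before continuing with the absorption maps, the chance-superposition probability quoted above is just a ratio of projected areas, with dimensions as assumed in the text:

```python
# Chance of the AGN sightline landing behind a single ~50 x 50 pc GMC
# complex somewhere within the ~1 x 15 kpc projected dust lane:
lane_area = 1_000 * 15_000      # pc^2
gmc_area = 50 * 50              # pc^2
print(f"P ~ {gmc_area / lane_area:.1e}")   # ~1.7e-4, i.e. the quoted ~2e-4
```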
Because of the relatively low signal to noise of the absorption maps, we do not show optical depth maps; however, an examination of those maps indicates that the absorbed fraction within the absorber is typically ∼50-70%, i.e., optical depth τ ≈ 1. By comparison, Carilli et al. (1992) found a peak line depth of 0.34 ± 0.04. However, those single-dish measurements did not resolve the source. If, as indicated by Figure 7, a significant amount of the source flux is not obscured by the HI absorbing screen, then scaling the intrinsic absorption by the fraction of the flux that is behind the screen yields good agreement with the value quoted by Carilli et al. (1992). As can be seen, the radio HI screen obscures primarily the mini-lobe 15-40 milliarcsec northeast of the nucleus. There is a possible, slight extension of the absorber to the southwest (towards the nucleus). There is also some marginal velocity structure, with the easternmost part of the screen having a velocity width somewhat narrower than the region near the mini-lobe's flux maximum. We do not observe significant absorption at the nucleus position, to a 2σ optical depth limit of ∼0.5; however, as this is not much lower than the optical depth figures we observe towards the eastern mini-lobe, this is only a weak limit. We cannot use the total HI optical depth to improve significantly on this because of the relative faintness of the nucleus at this frequency. Interestingly, the HI absorber covers the regions with the steepest radio spectral index (α_r ranging from 1.3-2.5) in the maps of Perlman et al. (1996).

High-frequency Observations

The 15 and 43 GHz VLBA images of PKS 1413+135 are shown in Figure 8. As in Figure 7, the nucleus is shown at (0,0). As can be seen, by far the brightest feature in these images is the nucleus, a stark contrast to the structure seen in Figure 7 (redshifted 21 cm), where the lobe is a factor 5-6 brighter than the nucleus. This is not terribly surprising given the spectral index maps published by Perlman et al. (1996), which show a highly inverted spectrum (α = −1.7) for the core but very steep spectra for all the extended structure (ranging from α = 0.7 to α > 2). The 15 GHz image shows a jet extending west for about 3 milliarcsec before taking a bend. There is flux at greater distances which is just barely visible in the contours but is significant in the smoothed image; the position angle of the jet in this image matches that of the jet at 5 GHz and 8.4 GHz. The western jet appears to emerge south of the nucleus's centroid, indicating a likely second bend closer in to the nucleus which we cannot resolve. Also visible in the 15 GHz image is a possible counterjet, at a PA similar to that of the eastern mini-lobe, and about 180° from the location where the western jet emerges. At 43 GHz, all that is seen is the nucleus and a slight extension to the west (supporting the indications of a bend at submilliarcsecond scales seen in the 15 GHz map), which unfortunately becomes too faint to see about 0.3 milliarcsec from the nucleus. We see no evidence of structure outside this fairly simple configuration, and in particular there is no evidence of a double image of the nucleus down to resolutions of about 0.2 milliarcseconds (see §6.2). Note that the high-frequency radio emission from PKS 1413+135 is synchrotron in nature and thus physically unrelated to the high-frequency absorption features superposed along our line of sight.
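Returning briefly to the HI line depths discussed at the start of this section, the single-dish and VLBA measurements can be reconciled quantitatively under a simple partial-covering model, observed depth = f_cov × (1 − e^(−τ)); the numbers below are the ones quoted above, and the covering fraction is the quantity being solved for:

```python
import numpy as np

tau = 1.0                  # optical depth within the absorber (~50-70% absorbed)
depth_single_dish = 0.34   # peak line depth, Carilli et al. (1992)

# Fraction of the continuum flux that must lie behind the screen:
f_cov = depth_single_dish / (1.0 - np.exp(-tau))
print(f"f_cov ~ {f_cov:.2f}")   # ~0.5, consistent with the screen covering
                                # mainly the bright eastern mini-lobe
```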
Discussion

The data shown in §§3-5 allow us to place significant constraints on several of the outstanding mysteries (mentioned in §1) regarding the absorbing material in PKS 1413+135 and its relationship to the AGN. Based on the NICMOS image, we confirm that the optical galaxy is indeed a spiral, as found by previous workers. The spiral has a fairly large nuclear bulge, about 3 kpc in size, with a significantly flatter surface brightness profile than the outer regions of the galaxy. McHardy et al. (1994) found that the scale length of the optical galaxy's disk was 6.9 kpc, a figure which our data support. Two issues addressed in the foregoing sections particularly bear further elucidation.

Location and Properties of the Absorbing Material

Our data provide strong evidence that the AGN of PKS 1413+135 is heavily reddened by absorbing dust and gas along our line of sight. The optical galaxy itself has a prominent dust lane along the disk midplane which measures 15 kpc × 1 kpc, where the V − H color is about 1 mag redder than elsewhere in the galaxy. Such features are common among edge-on spirals. However, the AGN has a far more extreme color than anything else in the image (V − H = 6.9), implying a spectral cutoff so steep that it requires both an intrinsic break at a few microns plus 3-4 mag of extinction. The implied absorbing column is consistent with the VLBA HI and ASCA observations if the covering factor f = 0.1-0.2 and the HI spin temperature is a few hundred degrees. Combining this with a molecular line excitation temperature of ∼10 K (Wiklind & Combes 1995, 1997), we can state that it is most likely that the absorber is a giant molecular cloud (GMC) complex in the outer reaches of the optical galaxy's spiral disk (a location in the galaxy's nucleus would predict a somewhat higher spin temperature and a strong Fe Kα line, which is not observed). The superposition of an outer-galaxy GMC right along the line of sight is rather unlikely (P ≈ 2 × 10^−4, as discussed in §4), but it does appear to be the most consistent hypothesis. As discussed in §5.1, the radio HI absorber appears to be centered about 20-25 milliarcseconds east of the nucleus, along our line of sight to the eastern mini-lobe. This mini-lobe dominates the radio flux below about 2 GHz, with a surface brightness some 5-6 times that of the core at 1139 MHz. This offset of the HI absorber with respect to the nucleus (which must be covered by the X-ray absorber) is very interesting. It therefore behooves us to inquire as to the relationship of the various absorption components to one another. It would not at all be surprising, for example, to have a GMC complex or star formation region with both warm and cold components, with the warm component responsible for the X-ray absorption and the colder, dusty component responsible for the optical and radio absorption. Absorption lines from a wide variety of molecular species have been observed in the radio spectrum of PKS 1413+135. Only one of these lines is at low frequencies: the recently discovered OH line found in GMRT observations (Kanekar & Chengalur 2002). The remainder of the lines, discovered by Wiklind & Combes (1995, 1997), were all at much higher frequencies (ranging from about 70-200 GHz in the observed frame; see Table 1 of Wiklind & Combes 1997).
All the molecular lines are narrow, with the OH absorption width being 14 km s^−1 while the other lines are ≲10 km s^−1, and only very slightly offset from the HI line center (≤ 4 km s^−1, compared to an HI line width of 18 km s^−1). As its radio structure spans only 100 milliarcseconds, PKS 1413+135 was unresolved for all the molecular line observations. However, given the information presented here and in Perlman et al. (1996), we have adequate information to determine the likely location of the molecular absorbing material. At low frequencies, the eastern mini-lobe dominates the radio flux (Figure 7, §5.1). Therefore, we believe it is more likely that the material which produces the OH line covers the eastern mini-lobe, although it may also partially cover the core, a conclusion somewhat at variance with that of Kanekar & Chengalur (2002). The situation is different, however, for the higher-frequency lines. Above 20 GHz, the core accounts for nearly all the emission from PKS 1413+135 (Figure 8). Thus the molecular material accounting for the high-frequency lines observed by Wiklind & Combes must be projected within a small distance of the core (< 0.2 milliarcsec, given the compactness of the 43 GHz structure, Figure 8). To sum up, then, the center of the observed HI absorber appears to be along the line of sight to the eastern mini-lobe, 20-25 milliarcseconds from the nucleus. However, the X-ray absorbing material and most of the molecular absorbing material must cover the core, where our data give a weak 2σ upper limit of τ_HI ≈ 0.5, only a factor ∼2 less than that observed against the lobe. Thus the radio molecular and HI absorbers are not necessarily all physically co-located. Looking at the small velocity difference between the HI and molecular features, however, it is still most likely that all the absorbing material is part of the same GMC complex, as much larger velocity differences would be expected if the HI and molecular clouds were at different locations within the galaxy's spiral disk. The projected location of most of the molecular absorbing material (i.e., covering the VLBI core) is logical, given the observed optical/near-IR reddening. With a linear size of at least 40 × 25 milliarcseconds (160 × 100 pc) and an HI velocity width of 18 km s^−1, this GMC complex must be somewhat more massive than, for example, the Orion region in our own galaxy (which spans a ∼100 × 100 pc region and has an HI velocity width of a few km s^−1), and it likely has a rather patchy and/or filamentary structure, similar to GMC complexes in our own galaxy. A patchy structure would make it rather difficult to estimate a mass or other physical parameters for the absorber. For the sake of illustration only, we assume a cylindrical GMC region measuring 50 × 25 × 25 milliarcsec (200 × 100 × 100 parsecs), with the long dimension superposed near the direction of the eastern lobe but overlapping the position of the nucleus, and with one of the molecular cores located virtually along our line of sight to the nucleus (within 0.2 milliarcsec, or 0.8 parsec impact parameter; see §6.2). Such a configuration is consistent with the covering fraction constraints from combining the optical and ASCA data. To conform with both the HI and molecular line data, it would have to be centered somewhere along the line between the lobe and nucleus, at perhaps 15 milliarcsec projected distance from the nucleus. This works out to a volume of just under 2.1 × 10^6 cubic parsecs, or 6.1 × 10^61 cm^3.
For a path length of 100 parsecs (3.09 × 10^20 cm) and N_H = 4.6 × 10^22 cm^−2, the mean density would be n_H ∼ 150 cm^−3, thus yielding a mass of ∼7.6 × 10^6 M⊙. While this estimate is of course geometry dependent, it is roughly comparable to the largest GMC complexes in our own galaxy. Deeper observations are required to further constrain the configuration of the radio absorbing material.

Relationship of the AGN to the Optical Galaxy and Absorber

Perhaps the most surprising result of our isophotal analysis is that the HST data seem most consistent with an AGN position very slightly offset (0.013″, just at the edge of HST's astrometric capabilities in the near-IR) from the center of the galaxy's isophotes. This could be interpreted as the first evidence in favor of the background source hypothesis. If indeed the source were decentered, then given the very large scale length of this galaxy (6.9 kpc) and a smoothly varying projected mass distribution, we might not expect to see either double images or an arc (see Narayan & Schneider 1990, although n.b., this source has a decentering two orders of magnitude smaller than that seen for AO 0235+164 and 0537−441, the sources modelled in that paper). In light of the evidence presented here, however, we cannot assume that the projected mass distribution of the optical galaxy is smooth. To the contrary, a patchy, likely filamentary GMC complex with mass ∼7.6 × 10^6 M⊙ and size ≳160 × 100 pc must lie along our line of sight to the AGN. This geometry is considerably more interesting with respect to the issue of possible microlensing of a background source and the AGN/absorber relationship. The Einstein radius for a 7.6 × 10^6 M⊙ GMC is about 5.7 milliarcsec (for an AGN at z ∼ 1), compared to a likely projected location of the cloud center ∼25 milliarcsec from the AGN position on the sky. Thus the most naive examination of the geometry would conclude that milli-lensing by the GMC complex is quite unlikely. However, the molecular line and X-ray absorption data lead us to conclude that there is a molecular core within 0.2 milliarcsec of the line of sight to the nucleus. If the molecular cloud core is responsible for all the X-ray absorption, and if we assume a density in this region typical of molecular cores, ∼10^3 cm^−3, the path through the absorber must be ∼13 pc long. Assuming this molecular core to be spherical, then, we derive a mass of 32,000 M⊙ which must be located within at most 0.2 milliarcsec of our line of sight, well within its Einstein radius of 0.37 milliarcsec. One would expect such a lens to produce multiple images with separation ∼0.2 milliarcsec, which is not seen in Figure 8. Given the available data, the only way for a background AGN not to produce prominent evidence of microlensing would be to have the AGN at z < 0.3. At such a redshift (only slightly greater than that of the optical galaxy) the host galaxy of the AGN would easily be visible in the near-IR image; yet, as can be seen from Figure 1, we see no evidence of it. Thus the available data now make the background AGN hypothesis even more unlikely. There are other, less exotic explanations for such a small decentering. If indeed the absorption is patchy and N_H ≈ 4 × 10^22 cm^−2 as suggested by our data, some regions of the nucleus might well be absorbed, even in H band. This could significantly bias the isophote centroids, since for a standard extinction law one would still expect 2-5 mag of extinction at H band for 10-30 mag of extinction at V band.
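Before continuing with these more prosaic explanations, note that the two Einstein radii quoted above are easy to check for a point-mass lens, θ_E = [(4GM/c²) D_ls/(D_l D_s)]^(1/2). The sketch below assumes a flat ΛCDM cosmology (H0 = 70 km s^−1 Mpc^−1, Ωm = 0.3) and a source at z = 1; the paper's distance scale may differ slightly, so the outputs should only approximate the quoted 5.7 and 0.37 milliarcsec:

```python
import numpy as np
import astropy.units as u
from astropy.constants import G, c
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # assumed cosmology
z_lens, z_src = 0.24671, 1.0            # optical galaxy; assumed background AGN

D_l = cosmo.angular_diameter_distance(z_lens)
D_s = cosmo.angular_diameter_distance(z_src)
D_ls = cosmo.angular_diameter_distance_z1z2(z_lens, z_src)

def theta_E(M):
    """Point-mass Einstein radius, returned in milliarcseconds."""
    th = np.sqrt(4 * G * M / c**2 * D_ls / (D_l * D_s))
    return (th.decompose().value * u.rad).to(u.mas)

for M in (7.6e6 * u.Msun, 3.2e4 * u.Msun):   # GMC complex; molecular core
    print(f"M = {M:.1e}: theta_E ~ {theta_E(M):.2f}")
```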
A further such explanation: if the nuclear "bar" is the result of structure aligned with, and perhaps associated with, the radio source's "lobes" (i.e., a miniature alignment effect, as seen in, for example, 4C 31.04; Perlman et al. 2001), such aligned emission would not be expected to be symmetrical, and in fact would be expected to bias the measurement of isophote centroids. Either (or both) of these is a much more plausible explanation for the observed decentering given the available data. As was previously suggested by Perlman et al. (1996), one can reconcile an absorber within the optical galaxy with the observed lack of re-emitted AGN continuum by locating the absorber in the outer disk, as our data seem to suggest. One would also have to assume some beaming of the continuum (as in the models of Snellen et al. 1998) to explain the observed lack of IR emission lines. That does not appear to be inconsistent with the known properties of PKS 1413+135, including high and variable polarization, extreme variability, and extremely core-dominated high-frequency VLBI structure. It would, however, require some superluminal motion, which was not seen by Perlman et al. (1996) based on two-epoch data that did not adequately resolve the jet's inner regions. We will return to this latter issue in Langston et al. (2002, in preparation). The same model would, however, require a somewhat underluminous broad line region compared to most Seyfert galaxies, as observed in other compact symmetric object radio sources (Perlman et al. 1996 first drew the link between PKS 1413+135 and compact symmetric objects). Indeed, the reasonability of this hypothesis is underscored by the fact that IRAS data in NED (which were not included in Perlman et al. 1996) show fluxes at 60 µm and 100 µm of 220-290 mJy (two observations with some possible variability between them) and a roughly flat spectrum between 60-100 µm. This is within 50% of the prediction one would make by simply scaling up by a factor of 10 (appropriate to the observed N_H) the correlation of Knapp (1990) between galaxy magnitude and 60 µm flux.

Conclusions

The most consistent explanation for the properties we observe in PKS 1413+135 appears to be that the absorbing screen consists of a GMC complex within the outer disk of the optical galaxy. This geometry is fully consistent with our new HST, ASCA, and VLBA observations, and is also fully consistent with previous observations in other wavebands. The optical galaxy is most likely the true host of the AGN and radio source given the properties we observe. While it is still not possible to fully rule out the background AGN hypothesis, it now becomes significantly less likely given that the absorption data require a 3 × 10^4 M⊙ molecular core within 0.2 milliarcsec of our line of sight.

Fig. 1.- The scale shown at left, which emphasizes the nuclear point source, runs from 0 to 4 ADU/s, while the scale shown at right, which shows the galaxy better, runs from 0 to 1.3 ADU/s. The image is shown with a North-up, East-left orientation, and a scale bar is given. In addition to the dominant point source, the disk of the optical galaxy is easily seen, as is the companion galaxy located 6″ to the East, which was first noted by Lamer et al. (1999). See §3.1 for discussion.

Fig. 3.- (Available in gif form only due to size.) Compared with Figure 1, the reader can see the vastly different appearance of PKS 1413+135 in the optical as compared to the near-IR.
No nuclear point source is apparent in the optical; instead, the image is dominated by a dust lane which occupies the disk midplane, and a nuclear bar which extends perpendicular to the dust lane for about 0.5 arcsec. See §3.2 for discussion.

Fig. 4.- The results of our isophotal analysis. Five quantities are plotted versus semi-major axis: count rate (Fig. 4a), ellipticity (Fig. 4b), isophotal position angle (Fig. 4c), isophote X-centroid (Fig. 4d), and isophote Y-centroid (Fig. 4e). The dotted line in Fig. 4a denotes the best-fit model, while in Figs. 4d and 4e it denotes the position of the AGN. We only show isophotes at > 0.3″, due to the brightness of the AGN. This analysis was done on the unrotated image; thus PA = −130° here translates to PA = 0° in Figures 1, 2, and 3.

Fig. 5.- [The V − H color image.] To make this image, we assumed that the AGN (seen in the near-IR image, Figure 1) is located at the center of the nuclear bar seen in the optical image (Figure 3). Darker colors refer to redder regions, and the scale runs from V − H = 1 mag (white) to V − H = 4 mag (black). The nucleus, which appears as a saturated point source in this rendition, is far redder than the limit of the scale shown, at V − H = 6.9 mag. As can be seen, the disk midplane is about 1 magnitude redder than outlying regions of the galaxy, with redder colors seen closer to the nucleus. See §3.2 for discussion.

Fig. 6.- Results of the ASCA X-ray spectral reduction. At top, we show the X-ray spectrum in ct/s/keV plotted versus channel energy in keV, along with the residual deviations in ∆χ². Data from all four instruments are shown individually and fitted simultaneously. The best-fit model is described in §4. At bottom, we show contours of the best-fit parameters in the (Γ, N_H) plane. Contours are shown at the 68%, 95%, and 99% confidence levels. See §4 for discussion.

Fig. 7.- Results of the VLBA redshifted HI line observations. Four channels are shown, each of which is 4.1 km/s wide. These four channels correspond to the middle four channels observed in observations centered around the line frequency found by Carilli et al. (1992). The greyscale image is the channel 0 image in total flux, while the contours represent deficits due to absorption. As can be seen, the absorbing material appears to be centered around the eastern mini-lobe and does not appear to extend as far as the nucleus (at 0,0). The peak optical depth against the eastern mini-lobe is approximately 0.7, although the signal-to-noise is fairly low. Formally, we can only set a 2σ upper limit to the optical depth at the position of the nucleus of 0.5. Furthermore, since the flux of the nucleus is a factor six lower than the eastern mini-lobe at this frequency, we cannot derive significantly more stringent constraints by assuming a total optical depth identical to those given in Perlman et al. (1996) and Carilli et al. (2001). See §§5-6 for discussion.
Extended cold snare polypectomy for small colorectal polyps increases the R0 resection rate

Background and study aims: Despite widespread use of cold snare polypectomy (CSP), the R0 resection rate is not well documented. We perform extended CSP, resecting polyps with a > 1 mm circumferential margin. The aim of this study is to compare the R0 resection rate of extended CSP with conventional CSP and to assess safety.

Patients and methods: From April 2014 to September 2016, 712 non-pedunculated colorectal polyps < 10 mm in size, resected using CSP from 316 patients, were retrospectively analyzed.

Results: We divided lesions into conventional CSP (n = 263) and extended CSP (n = 449) groups. The baseline characteristics of these two groups were not significantly different in univariate or multivariate analyses. Sessile polyps comprised 94% (668/712), and the remainder were flat-elevated polyps. Mean size of polyps (± standard deviation) was 4.2 ± 1.5 mm. The most frequent pathology was low-grade adenoma (97%, 689/712). The R0 resection rate was significantly higher in the extended CSP group (439/449 [98%]) than in the conventional CSP group (222/263 [84%], P < 0.001). There was no delayed bleeding or perforation in either group (conventional CSP group, 0/263, 95% confidence interval: 0.0-1.4%; extended CSP group, 0/449, 95% confidence interval: 0.0-0.8%).

Conclusions: Extended CSP results in a higher R0 resection rate compared with conventional CSP, without a higher rate of delayed bleeding or perforation. Extended CSP is a safe and promising procedure for endoscopic resection of non-pedunculated colorectal polyps < 10 mm in size.

Patients and methods

Study population

A retrospective review of 1304 consecutive polyps in 542 patients who underwent CSP for the treatment of colorectal polyps at the Utsunomiya Memorial Hospital (Utsunomiya, Japan) from April 2014 to September 2016 was performed. CSP was not performed for polyps with suspected carcinoma based on endoscopic findings. Data were collected from medical records, photographs, and video recordings. Pedunculated polyps, polyps ≥ 10 mm in size, polyps removed by an undetermined resection method, and hyperplastic and inflammatory polyps were excluded from this analysis. This retrospective study was approved by the Institutional Review Board prior to commencement.

CSP procedure

We divided the polyps into a conventional CSP group and an extended CSP group. Extended CSP was defined as resection of a polyp with a more than 1 mm circumferential margin (Fig. 1); all other resections were defined as conventional CSP. An R0 resection was defined as an en bloc resection with pathologically negative resection margins [9].
Because of the low R0 resection rate using conventional CSP during the introductory period of CSP, we began performing extended CSP in May 2015 to increase the R0 resection rate. Since that time, extended CSP was always attempted; however, conventional CSP was occasionally performed because of technical problems due to location and peristalsis. Consequently, we mainly performed conventional CSP in the early phase of the study period, and performed extended CSP during the later period. The PCF-Q260AZI or CF-Q260AI colonoscope (Olympus, Tokyo, Japan) was used. The polypectomy snare for CSP was an 11 mm, Extra Small Oval, Flexible Captivator™ II (Boston Scientific, Natick, MA, USA) or a 10 mm, Oval, SnareMaster™ (Olympus, Tokyo, Japan). The resected polyp was suctioned and retrieved after CSP. Clip application was not performed. The retrieved polyp was preserved in formalin and assessed by routine histopathologic studies. Delayed bleeding was defined as overt bleeding within two weeks after CSP.

Statistical analysis

Categorical data were compared using the chi-squared test. Data with a normal distribution were compared with an independent t-test. To estimate the 95% confidence intervals (CI) around the estimated proportions of patients with adverse events, we used the Clopper-Pearson exact method. Differences were considered statistically significant when P < 0.05. The above statistical analyses were performed using BellCurve for Excel 2015 software (Social Survey Research Information Co., Ltd., Tokyo, Japan). To evaluate potential confounding factors before resection, we used logistic regression analysis for multivariate analysis, performed using Statflex ver. 6.0 software (Artech Co., Ltd., Osaka, Japan).

Study population

Based on the exclusion criteria, 592 polyps in 226 patients were excluded as follows: pedunculated polyp (n = 327), ≥ 10 mm (n = 38), undetermined resection method (n = 51), hyperplastic (n = 168), and inflammatory polyp (n = 8). Consequently, data for 712 non-pedunculated colorectal polyps in 316 patients were included in this final analysis. The characteristics of these 316 patients are shown in Table 1. Seventy-eight of the patients were male and the mean age was 61. We divided them into conventional CSP (n = 119) and extended CSP (n = 197) groups. Male gender was slightly more common in the conventional CSP group than in the extended CSP group (P = 0.113), and mean age was similar in both groups (P = 0.592).

Adverse events

The overall rate of adverse events was 0% per procedure.

Discussion

This study shows that extended CSP has a significantly higher rate of R0 resection compared to conventional CSP for non-pedunculated colorectal polyps < 10 mm. The safety of extended CSP is also demonstrated. CSP, which started in Western countries, is becoming more popular in Eastern countries as a simple and safe approach for resecting small colorectal polyps [5]. Since the malignant potential of small colorectal polyps is low, an almost zero rate of adverse events, including delayed bleeding and perforation, is important. CSP is suitable for the treatment of small colorectal polyps, because it has a shorter procedure time and a lower rate of delayed bleeding than hot snare polypectomy (HSP) or endoscopic mucosal resection [7,10]. The present study, which includes a large number of small non-pedunculated colorectal polyps, demonstrates that extended CSP is a promising method to increase the R0 resection rate. A low R0 resection rate (59%) of small polyps resected by CSP has been reported [8].
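As a sketch of the exact-interval arithmetic reported above (the authors used BellCurve for Excel; scipy is used here purely for illustration), the zero-event confidence intervals and the R0-rate comparison can be reproduced as follows:

```python
from scipy import stats

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) two-sided CI for a binomial proportion."""
    lo = stats.beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = stats.beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

for n in (263, 449):                 # adverse events: 0 of n in each group
    lo, hi = clopper_pearson(0, n)
    print(f"0/{n}: 95% CI {lo:.1%} - {hi:.1%}")   # 0.0-1.4% and 0.0-0.8%

# R0 resection rates: 222/263 (conventional) vs. 439/449 (extended)
table = [[222, 263 - 222], [439, 449 - 439]]
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi-squared p = {p:.2e}")                  # p < 0.001, as reported
```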
We suggest two explanations for a low R0 resection rate. First, in preparing the specimen for pathological examination, determination of the exact margin for specimens resected by CSP is difficult because of a macroscopically unclear resection margin, since, unlike HSP, there is no cauterized area after CSP. Second, CSP makes a pathological diagnosis difficult microscopically because the resection margin of small colorectal polyps is not cauterized. Taken together, a resected polyp with adequate surrounding normal colonic mucosa is helpful to recognize the resection margin and to improve the quality of the pathological diagnosis. Although intra-procedural bleeding is generally more common in CSP compared with HSP, the colonoscopist is rarely concerned with intra-procedural bleeding. Unlike delayed bleeding, persistent intra-procedural bleeding is easily controlled by applying a clip. CSP rarely leads to delayed ulceration after resection because of the lack of thermal injury. Anatomically, the submucosa has larger arteries and veins than the lamina propria. Delayed bleeding after HSP is usually caused by vascular damage in the submucosal layer due to use of the electrocautery. Generally, the rate of delayed bleeding after HSP is reportedly 1-2% [4,11], and the benefit of prophylactic clip application after HSP remains controversial [12,13]. When performing HSP, an extended mucosal resection without injection may involve the deep submucosa or muscularis, resulting in perforation. As reported by Horiuchi et al. [6], CSP is less associated with injuries to the submucosa than HSP. Tutticci et al. reported the characteristics of protrusions within the cold snare defect, comprising submucosa and muscularis mucosae without an adenomatous component or vascular structure [14]. We believe that extended CSP may not damage the vessels in the submucosal layer. Therefore, extended CSP does not increase the rate of delayed bleeding and perforation. The 95% CI for the rate of delayed bleeding in the extended CSP group in the present study was less than 1%. For the treatment of colorectal tumors, an en bloc R0 resection is important to determine the completeness of resection and to decrease the rate of local recurrence [5]. En bloc resection is defined as a tumor removed in one piece, and R0 resection is defined as an en bloc resection with negative pathologic margins. Although endoscopic resection of small colorectal polyps is usually straightforward, local recurrence after polypectomy necessitates additional resection performed by CSP, HSP, endoscopic mucosal resection, or endoscopic submucosal dissection. Unlike at the first endoscopic treatment, when treating a recurrent lesion one frequently encounters severe submucosal fibrosis due to the first resection, making the resection difficult. To decrease the need for additional resection, extended CSP is a viable option. Further, we recognize that CSP involving surrounding normal mucosa has become a standard strategy [15]. The technique referred to here as "extended CSP" may represent the standard technique advocated by experts in the clinical practice of CSP. However, to date, there is no published evidence demonstrating the superiority of this extended CSP strategy over conventional CSP. Although the pathological significance of extended CSP compared with conventional CSP should ideally be shown by a randomized controlled trial, the superiority of extended CSP is obvious and such a clinical trial may be difficult to conduct.
We acknowledge some limitations of this study. First, the study is limited by its retrospective design and several potential biases. The two techniques were compared indirectly in two different periods, and the impact of the learning curve of the endoscopists and pathologists cannot be excluded. The pathologists who assessed completeness of resection could not be blinded to the procedure performed. However, extended CSP is a reasonable way to improve the R0 resection rate, and no adverse events occurred in this study, which included a large number of CSP procedures. Second, there was no evaluation of local recurrence after CSP in this study; the long-term outcome of CSP remains unclear [7,15]. Third, this study did not include pedunculated polyps but only non-pedunculated polyps < 10 mm in size. Therefore, this result should not be extrapolated to CSP for pedunculated colorectal polyps. Generally, pedunculated polyps have a large vessel with greater potential for delayed bleeding [16].

Conclusion

In conclusion, this study shows that extended CSP results in a higher R0 resection rate for non-pedunculated colorectal polyps < 10 mm compared with conventional CSP. Extended CSP did not result in a higher rate of delayed bleeding or perforation. To our knowledge, this is the first report of the safety and efficacy of extended CSP compared with conventional CSP. Extended CSP is a safe and promising procedure for endoscopic resection of non-pedunculated colorectal polyps < 10 mm in size.
Is an Endorectal Balloon Beneficial for Rectal Sparing after Spacer Implantation in Prostate Cancer Patients Treated with Hypofractionated Intensity-Modulated Proton Beam Therapy? A Dosimetric and Radiobiological Comparison Study

Background: The aim of this study is to examine the dosimetric influence of endorectal balloons (ERB) on rectal sparing in prostate cancer patients with implanted hydrogel rectum spacers treated with dose-escalated or hypofractionated intensity-modulated proton beam therapy (IMPT). Methods: Ten patients with localized prostate cancer included in the ProRegPros study and treated at our center were investigated. All patients underwent placement of hydrogel rectum spacers before planning. Two planning CTs (with and without a 120 cm³ fluid-filled ERB) were acquired for each patient. Dose prescription was set according to the following strategy: 72 Gray (Gy)/2.4 Gy/5× weekly to the prostate + 1 cm of the seminal vesicles, and 60 Gy/2 Gy/5× weekly to the prostate + 2 cm of the seminal vesicles. Planning with two laterally opposed IMPT beams was performed on both CTs. Rectal dosimetry values, including dose-volume statistics and normal tissue complication probability (NTCP), were compared for both plans (non-ERB plans vs. ERB plans). Results: For ERB plans compared with non-ERB plans, the reductions were 8.51 ± 5.25 Gy (RBE) (p = 0.000) and 15.76 ± 11.11 Gy (p = 0.001) for the mean and the median rectal doses, respectively. No significant reductions in rectal volumes were found at high dose levels. The use of ERB resulted in significant reductions in the rectal volumes receiving 50 Gy (RBE), 40 Gy (RBE), 30 Gy (RBE), 20 Gy (RBE), and 10 Gy (RBE), with p values of 0.034, 0.008, 0.003, 0.001, and 0.001, respectively. No differences between ERB and non-ERB plans were observed for the anterior rectum. ERB reduced the posterior rectal volumes receiving 30 Gy (RBE), 20 Gy (RBE), or 10 Gy (RBE), with p values of 0.019, 0.003, and 0.001, respectively. According to the NTCP models, no significant reductions were observed in mean or median rectal toxicity (late rectal bleeding ≥ 2, necrosis or stenosis, and late rectal toxicity ≥ 3) when using the ERB. Conclusion: ERB reduced rectal volumes exposed to intermediate or low dose levels. However, no significant reduction in rectal volume was observed in patients receiving high or intermediate doses. There was no benefit, and also no disadvantage, associated with the use of ERB for late rectal toxicity, according to available NTCP models.

Introduction

Despite advances in radiotherapy (RT) techniques, rectal morbidity related to prostate radiation treatment cannot be entirely avoided and carries implications for quality of life (QOL). Escalation of radiation dosage for prostate cancer patients has evolved over the past decade with the development of modern three-dimensional conformal radiotherapy (3D-CRT) and the more advanced intensity-modulated radiotherapy (IMRT), together with image-guided radiotherapy (IGRT). Several randomized studies have demonstrated that dose escalation offers improved local control and biochemical control rates compared with conventional doses [1-6]. However, escalating the total dose delivered to the prostate by 8-10 Gray (Gy) has been shown to significantly increase the risk of rectal toxicity by about 10% [2,7-9].
The results of several trials relating rectal dose-volume characteristics to radiotherapy-induced rectal toxicity have been published [10-14]. Based on these reports, the efforts of radiation oncologists in the past decade have been directed not only towards utilizing modern radiotherapy techniques for patients with prostate cancer, but also towards incorporating mechanical tools to increase the separation between prostate and rectum, such as implantation of rectum-prostate spacers and/or the use of endorectal balloons (ERBs). The pencil beam scanning (PBS) technique is highly sensitive to organ motion [29]; therefore, ERBs have more frequently been used in our institution to stabilize the position and shape of the rectum and hence fix the position of the prostate during treatment. It is unclear whether the benefit of ERB is retained when decreasing the dose exposure in the rectum. Our goal was to explore the dosimetric impact of ERB on rectal dosage and normal tissue complication probability (NTCP) values in prostate cancer patients with implanted rectum spacers who were treated with dose-escalated or hypofractionated IMPT.

Materials and Methods

Since August 2015, a prospective single-center register evaluating proton therapy for patients with localized prostate cancer (ProRegPros) has been carried out at the West German Proton Therapy Centre Essen (WPE). Two computed tomography (CT) scans, respectively before and after the insertion of the ERB, were obtained for each of 10 consecutive patients undergoing prostate cancer treatment. All patients had been diagnosed with intermediate- to high-risk prostate cancer (T1-T4, N0, M0, PSA ≤ 50 ng/mL, Gleason score 7a-9) and had no indication for lymph node irradiation. All patients were treated with dose-escalated or moderately hypofractionated IMPT with 72 Gy (RBE) in 30 fractions. Patients were in good general health with no life-limiting conditions, and each had a life expectancy of more than five years. All patients selected for analysis underwent hydrogel rectal spacer insertion and fiducial marker implantation one week before the planning CTs were acquired. Written informed consent was obtained from all patients for their inclusion in the register. The register was approved by the ethical committee of the University of Duisburg-Essen.

CT-MRI Simulation

All patients drank 350 mL of water, after emptying the bladder, 30 min prior to simulation. Patients were immobilized in a supine position using a thermoplastic pelvic cast. The first planning CT was acquired in 1 mm slices for each patient. Then, the thermoplastic pelvic cast was removed and the ERB catheter was inserted with the patient in a knees-raised position, and the catheter was filled with 120 cm³ of fluid. The patient was positioned and immobilized again using the laser alignment and immobilization mask markings placed during the first CT, and the second CT was acquired in 1 mm slices. A T1/T2-weighted MRI scan was performed for each patient.

Target Volume and OAR Delineation

Within our in-house standard framework, taking into account national and international recommendations and guidelines, we determined target volumes and doses. Planning and contouring for each patient were performed using the same methods.
For each patient and each CT, the prostate, seminal vesicles, clinical target volumes (CTVs), and organs at risk (OARs) were contoured using a combination of CT and magnetic resonance imaging (MRI) for accurate prostate delineation. Two CTVs were defined: a low-risk CTV1 (prostate + 5 mm of peri-prostatic tissue + 2 cm of the seminal vesicles), and a high-risk CTV2 (prostate + 1 cm of the seminal vesicles). Margins of 5 mm in every direction were added to each CTV to create the corresponding planning target volumes (PTVs), except at the seminal vesicle region, where a 7 mm margin was applied [29]. Dose prescription was 60 Gy (RBE) in 2 Gy fractions to PTV1 and 72 Gy (RBE) in 2.4 Gy fractions to PTV2, in 30 fractions using a simultaneous integrated boost (SIB). The rectum was contoured as a solid organ extending from just above the anal verge to the sigmoid flexure. Extra contours were generated for the anterior and posterior rectum.

SIB-IMPT Planning Process

Dose calculation and optimization of IMPT plans were performed using a pencil beam algorithm with the RayStation treatment planning system version 6 (RaySearch Laboratories, Stockholm, Sweden). For all patients, fixed-geometry plans were generated on both CTs using two laterally opposed IMPT beams with the same optimization goals. A margin of 3.5% of the proton beam range + 2 mm was added to the PTV in the beam direction to account for field-specific range uncertainty. For greater consistency, all contours were generated by the same senior radiation oncologist, who also created all the treatment plans. For all dose concepts, a generic relative biological effectiveness (RBE) factor of 1.1 (relative to that of Co-60) was assumed.

DVH Analysis and Rectal NTCP Calculation

The dose-volume histogram (DVH) of the rectum was assessed, and the following parameters were calculated for the whole rectum: RV (rectal volume in cc), Dmax, Dmean, Dmedian, and RVxGy = percentage of rectal volume receiving a dose of x Gy (RV72Gy, RV70Gy, RV65Gy, RV60Gy, RV55Gy, RV50Gy, RV40Gy, RV30Gy, RV20Gy, and RV10Gy). NTCP models are able to predict the toxicity of radiation therapy to organs at risk, and can be used to estimate the risk of various complications. For the rectal NTCP calculation, the biological models available in RayStation were employed, covering the endpoints of late rectal bleeding ≥ grade 2, necrosis or stenosis, and late rectal toxicity ≥ grade 3. We compared the rectal DVH parameters and rectal NTCP values of the non-rectal-balloon plans (non-ERB group) with those of the rectal balloon plans (ERB group). The differences in DVH and NTCP indices were calculated (∆ = mean value of non-ERB plans − mean value of ERB plans). Statistical analysis was conducted using the IBM SPSS Statistics program V22. The Mann-Whitney U test was applied to compare means between the non-ERB and ERB plans.

DVH Analysis

The 120 cm³ fluid-filled ERBs significantly increased rectal volume in the ERB plans compared to the non-ERB plans. Analysis of the DVH of the whole rectum confirmed that the ERB plans attained lower values of Dmax, D1, Dmean, and Dmedian in comparison with non-ERB plans. However, the differences in Dmax and D1 were not statistically significant. There was a minimal, statistically insignificant reduction in RV72Gy in favor of non-ERB plans compared with ERB plans. Otherwise, the ERB plans were able to lower the rectal volumes exposed to different radiation doses compared with the non-ERB plans, with an insignificant reduction in RV70Gy, RV65Gy, RV60Gy, and RV55Gy and a significant reduction in RV50Gy, RV40Gy, RV30Gy, RV20Gy, and RV10Gy (Table 1, Figure 1).
a Rectal volume in cm³; b dose in Gy; c RVxGy = percentage of rectal volume receiving a dose of x Gy; d ∆ = difference between the non-ERB plans and the ERB plans (mean value of non-ERB plans − mean value of ERB plans).

In the analysis carried out for the anterior rectum, we found that ERB reduced the values of Dmax, D1, RV72Gy, RV70Gy, RV65Gy, RV60Gy, RV55Gy, RV50Gy, RV40Gy, RV30Gy, RV20Gy, and RV10Gy, but no statistically significant differences were attained (Table 2).

For the posterior rectum, Dmax and D1 were reduced in ERB plans in comparison with non-ERB plans, without statistical significance. There were no statistically significant differences between the two groups in terms of RV72Gy, RV70Gy, RV65Gy, RV60Gy, RV55Gy, or RV40Gy (Figure 2). Statistically significant differences were found between the two groups for the rectal volumes receiving 30 Gy, 20 Gy, and 10 Gy (Table 3).

NTCP Results

No statistically significant differences between the two study groups were determined for the risk of NTCP for late rectal toxicities. Comparisons of NTCP results for late rectal bleeding ≥ 2, necrosis or stenosis, and late rectal toxicity ≥ 3 are presented in Table 4 and Figure 3.
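For orientation, NTCP figures like those compared above are typically produced by Lyman-Kutcher-Burman (LKB) type models. The sketch below implements the generic LKB formalism with the Emami late-rectum parameters cited later in the Discussion (n = 0.12, m = 0.15, TD50 = 80 Gy); it is illustrative only, the specific models used in RayStation are not reproduced here, and the DVH is a toy example:

```python
import numpy as np
from math import erf, sqrt

def lkb_ntcp(doses_gy, volumes, n=0.12, m=0.15, td50=80.0):
    """LKB NTCP from a differential DVH (doses in Gy, fractional volumes)."""
    v = np.asarray(volumes, float)
    v = v / v.sum()
    geud = (v * np.asarray(doses_gy, float) ** (1.0 / n)).sum() ** n  # gEUD
    t = (geud - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))   # standard normal CDF of t

# Toy differential rectal DVH (illustrative numbers only):
doses = np.arange(2.5, 72.5, 5.0)             # bin centers, Gy
vols = np.exp(-doses / 25.0)                  # arbitrary falling histogram
print(f"NTCP ~ {lkb_ntcp(doses, vols):.1%}")
```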
Discussion

Few trials have been conducted into the use of proton therapy for prostate patients in order to investigate the effectiveness of ERB utilization in achieving rectal sparing [24], reducing interfraction prostate motion [26], or removing rectal gas [25]. Our aim was to investigate whether insertion of a 120 cm³ fluid-filled ERB could spare rectal tissue, and hence reduce rectal NTCPs, in patients who had undergone prior placement of hydrogel rectal spacers and received treatment with dose-escalated or hypofractionated IMPT to the prostate and the seminal vesicles. In this study, ERB increased the rectal volume by 137.35 ± 32.58 cm³. The reduction in mean radiation dose received by the whole rectum in the ERB plans compared to non-ERB plans was 8.51 ± 5.25 Gy (RBE) (p = 0.000), and for Dmedian the reduction was 15.76 ± 11.11 Gy (RBE) (p = 0.001). Regarding the maximum dose delivered to the rectum, we recorded a 0.64 Gy (RBE) difference in Dmax, a 0.11 Gy (RBE) difference in D1 of the rectum, and a 0.21 Gy difference in D1 of the anterior rectum in favor of the ERB plans, but with no statistical significance. We found that ERB could reduce Dmax in the posterior rectum, but with no statistical significance. Furthermore, D1 was reduced in ERB plans by 11.11 ± 13.93 Gy (RBE), with marginal statistical significance (p = 0.059). Our results are similar to those reported by Elsayed et al., who applied 3D-CRT with 59.4 Gy (RBE) + 10 Gy (RBE) high dose-rate (HDR) brachytherapy to 12 patients. The authors found that for tele-therapy applied with a PTV including the prostate + 9 mm safety margins, the application of a 60 cm³ air-filled ERB led to a decrease in Dmax of the anterior rectal wall and the rectum as a complete organ, but with no statistical significance. However, owing to the dose distribution obtained from the 3D-CRT, the authors demonstrated a reduction in the Dmax of the posterior rectal wall of 18.6 Gy (RBE) (47.1 Gy for non-ERB vs. 28.5 Gy for ERB), which was found to be significant (p = 0.01) [32]. Regarding rectal volumes receiving different dose levels, we found no statistically significant differences in rectal volumes at high or intermediate dose levels. Furthermore, through separate analysis of the anterior rectum, we found that the ERB plans led to no significant differences in comparison with non-ERB plans in any of the DVH parameters examined. At intermediate and low dose levels, the differences in rectal volume between non-ERB and ERB plans were found to be 4.58, 6.82, 9.57, 12.87, and 15.78% for RV50Gy (RBE), RV40Gy (RBE), RV30Gy (RBE), RV20Gy (RBE), and RV10Gy (RBE), respectively, and these were statistically significant. Further analysis of the posterior rectum confirmed that the ERB reduced Post-RV30Gy (RBE) by 8.89 ± 9.92% (p = 0.019), Post-RV20Gy by 15.76 ± 12.94% (p = 0.003), and Post-RV10Gy by 25.66 ± 14.21% (p = 0.001). Our results are in agreement with those reported by Hille et al., who used 3D-CRT and applied 72 Gy with conventional fractionation.
The authors found that, with the prostate and the entire or proximal seminal vesicles included in the CTV, a 60 cm³ air-filled ERB led to a significant decrease in the rectal wall volume receiving 40 Gy and 50 Gy, while no significant decrease in the rectal wall volume receiving 60 Gy, 65 Gy, or 70 Gy could be found [33]. Other trials applying 3D-CRT demonstrated that insertion of an ERB could lower rectal volumes exposed to high doses. In an early study in 2002, Wachter et al. used 3D-CRT with 66 Gy for prostate cancer, and tested the effect of a 40 cm³ air-filled ERB on the rectal dose. The authors found that for PTV prostate-only plans, the proportion of the rectum volume receiving doses larger than 90% could be reduced from 24% without ERB to 20% with ERB. However, for PTV prostate + seminal vesicle plans, the volume increased from 41% without ERB to 48% with ERB, due to posterior displacement of the seminal vesicles resulting from application of the ERB [34]. Van Lin et al. conducted a study testing 40, 80, and 100 cm³ air-filled ERBs vs. non-ERB plans, using three-dimensional conformal radiation therapy (3D-CRT) and IMRT delivered to two different PTVs with and without seminal vesicle involvement. They found that in the case of 3D-CRT, the application of an ERB resulted in a statistically significant reduction of the mean rectal wall dose, as well as of the rectal wall volumes irradiated to dose levels of 70 Gy or more and of 50 Gy or more. However, in the case of IMRT, the authors reported no statistically significant reduction in the rectal wall dose parameters for any of the ERBs [16]. In contrast to the results obtained by Van Lin et al., Patel et al. conducted a planning study to detect the beneficial effect of a 60 cm³ air-filled ERB on rectal dosimetry. They generated radiotherapy plans for five patients, delivering 76 Gy either with 3D-CRT or IMRT to target volumes with and without inclusion of the seminal vesicles, and showed that inflation of the ERB in all cases, even in the context of IMRT, resulted in significant decreases in the absolute volume of rectal wall receiving greater than 60, 65, or 70 Gy [23]. Vargas et al. published the only trial to have investigated the role of ERB in rectal sparing for patients treated with proton therapy. They analyzed 20 proton plans for 15 patients who received doses of 78-82 Gy, and found that ERB decreased the volume of the rectum irradiated with doses from 10 to 65 Gy (p ≤ 0.05), while no benefit was observed for doses ≥ 70 Gy [24]. No hydrogel prostate-rectum spacers were used in their trial. Based on NTCP calculations, we found that the probability of late rectal toxicity was not reduced by the application of ERB. The mean NTCP for late rectal bleeding ≥ grade 2 was 2.6 ± 0.97% for non-ERB plans vs. 3.1 ± 1.1% for ERB (p = 0.15). For necrosis or stenosis it was 5.5 ± 1.78% for non-ERB vs. 5.6 ± 2.22% for ERB (p = 0.72); for late rectal toxicity ≥ 3 it was 13.1 ± 1.37% for non-ERB vs. 13.3 ± 3.02% for ERB (p = 0.593). Our results are similar to those reported by Van Lin et al., who used the LKB model with Emami parameters (n = 0.12, m = 0.15, and D50 = 80 Gy) for calculation of late rectal NTCP. In their trial, no statistically significant reduction in NTCP could be demonstrated for the combination of IMRT with ERBs (40, 80, and 100 cm³ air-filled).
In the same analysis, however, Van Lin et al. showed that the ERB could improve the 3D-CRT plans, with a statistically significant reduction in rectal NTCP for the 100 cm³ air-filled ERB compared to non-ERB plans (15% vs. 24%, respectively, p < 0.0001) [16]. It is well established that exposure of the rectum to intermediate and high radiation doses is associated with the development of late rectal toxicity. Storey et al. reported a significant correlation between the percentage of the rectum irradiated to 70 Gy or more and the likelihood of developing late rectal complications in patients treated with up to 78 Gy [35]. Kupelian et al. tested a short-course IMRT schedule (70 Gy at 2.5 Gy per fraction) and demonstrated that only the volume of rectum receiving 70 Gy (with a cutoff of 15 cc) was a significant predictor of rectal bleeding [11]. Huang et al. also observed a significant volume effect at rectal doses of 60, 70, 75.6, and 78 Gy and concluded that the risk of developing rectal complications increases exponentially as larger volumes are irradiated [36]. Zapatero et al. reported that rectal Dmean and the percentage of the rectum receiving >60 Gy correlated with grade 2 or worse rectal bleeding [37]. Meanwhile, other investigators have demonstrated a likelihood of rectal toxicity for rectal volumes receiving intermediate doses. Tucker and colleagues found that the incidence of grade 2 or worse late rectal bleeding within 2 years increased when ≥80% of the rectal wall was exposed to doses >32 Gy [38]. Jackson et al. reported that rectal bleeding was significantly correlated with the volume exposed to 46 Gy in prostate cancer patients who received 70.2 or 75.6 Gy [39]. The current study is limited by its small number of patients. Nevertheless, because each patient's plans serve as internal controls, the dataset is particularly homogeneous and therefore informative.

Conclusions

Our study suggests that the ERB can reduce the rectal volumes exposed to intermediate or low doses in prostate cancer patients with implanted rectum spacers treated with hypofractionated or dose-escalated IMPT. We could not find any benefit of the ERB in reducing rectal volumes receiving high dose levels. Consistent with previous trials, these results explain the lack of benefit obtained from the ERB in reducing NTCP values for late rectal toxicity in these patients. We conclude that the application of the ERB adds little benefit for patients treated with IMPT, owing to the high capability of this technique to conform the dose to the target, which in turn reduces the volume of rectum exposed to high doses; in addition, the reduction of the rectal volume receiving a high dose can be achieved by spacer implantation. However, the potential effect of the ERB in reducing volumetric changes of the rectum and variability in rectal position during treatment cannot be neglected, especially in patients undergoing proton therapy, given the high sensitivity of PBS dose distributions to inter- and intrafractional motion. This issue is currently under investigation at our center, and the results will be reported in due course; in the meantime, we continue to use the endorectal balloon to reduce motion.

Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of the University of Duisburg-Essen (CT-MRI Simulation DRKS00005363l).
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Data Availability Statement: All data and materials can be accessed via DAK, in compliance with data protection guidelines.

Conflicts of Interest: The authors declare no conflict of interest.
Evaluation and Management of Antenatal Hydronephrosis

Antenatal hydronephrosis (ANH) is one of the most common abnormal findings detected on prenatal ultrasound (US), and it has been reported in 1-5% of all pregnancies. The likelihood of significant postnatal pathologic abnormality in the urinary tract correlates with the degree of anterior-posterior diameter (APD) dilation for a given gestational age. Prenatal detection of urologic anomalies permits fetal interventions that avoid complications in rare cases of bladder outlet obstruction with oligohydramnios, even though their final benefits remain controversial. There is no clear consensus on the extent and mode of postnatal imaging after a diagnosis of ANH. US is the mainstay of the postnatal evaluation and helps guide further testing with voiding cystourethrography (VCUG) and diuretic renography. Although most algorithms continue to recommend generous use of VCUG for identification of lower urinary tract anomalies, VCUG may be safely reserved for high-grade ANH cases or any grade of ANH with a dilated distal ureter, without increasing the risk of urinary tract infection (UTI). There are conflicting studies on the efficacy of postnatal prophylactic antibiotics. It still seems reasonable to consider use of a prophylactic antibiotic to prevent infant UTIs in high-risk populations, such as females and uncircumcised males with high grades of hydronephrosis, hydroureteronephrosis, or vesicoureteral reflux.

Introduction

Increasing use of ultrasound (US) has allowed an appreciation of the true incidence of urological abnormalities, the most common of which is hydronephrosis (HN). Antenatal hydronephrosis (ANH) is the most common urological abnormality detected on prenatal US, and it has been reported in 1-5% of all pregnancies. The underlying urological conditions range from transient dilation of the renal collecting system to severe obstructive uropathy requiring surgical intervention. The purpose of evaluating children with ANH is to distinguish clinically significant urological conditions from transient HN of little clinical significance. However, the diagnostic criteria for identification of children at risk for renal damage remain a subject of debate, and a clinical guideline for ANH has yet to be clearly established. This article reviews the current literature regarding the perinatal evaluation and management of ANH.

Definition of ANH
1. APD (anterior-posterior diameter)

APD of the renal pelvis is the most commonly studied indicator for assessing ANH and a sentinel of potential disease; however, it does not identify the specific underlying pathology 13). Potential factors affecting APD are gestational age, hydration status of the mother, and the degree of bladder distention 4,9). Since the dimensions of the renal pelvis may normally increase with gestational age, most investigators have adjusted threshold APD values according to the gestational age (Table 1) 10). The limitations of APD as a tool for ANH are as follows: (1) it is a single measurement of the collecting system, (2) it is subject to inter- and intra-observer variability, and (3) it does not capture calyceal dilation or renal parenchymal thinning that may indicate more severe obstruction.

2. SFU grading system 11)

At Grade 0 there is no HN, so the central renal echo complex is closely apposed. At Grade 1 only the renal pelvis is visualized, with slight separation of the central renal echo complex. At Grade 2 the renal pelvis is further dilated and a single calyx or a few, but not all, calyces are identified in addition to the renal pelvis. Grade 3 HN requires that the renal pelvis is dilated and there are fluid-filled calyces throughout the kidney, but the renal parenchyma is of normal thickness. Grade 4 HN may have a similar appearance of the calyces as Grade 3 but, when compared with the normal side, the involved kidney has parenchymal thinning over the calyces.

Significant US findings associated with ANH

Loss of corticomedullary differentiation, increased parenchymal echogenicity, and the presence of renal cysts are associated with the loss of functional renal parenchyma 12-14). A perinephric urinoma can be seen in severe urinary obstruction. ANH is more likely to be associated with postnatal pathology when it is accompanied by parenchymal thinning, calyceal dilatation, ureteral dilatation, chromosomal anomalies, or multiple system malformations. Oligohydramnios (OHA) appears to be one of the most important predictive factors for postnatal pathology. Multivariate analyses have identified OHA and megacystis as predictive of urethral obstruction, and OHA as predictive of chronic renal failure or death 15,16). Another multivariate analysis in children with posterior urethral valves (PUV) likewise identified OHA as predictive of chronic renal failure 17).

Underlying pathology of ANH

The most common cause of ANH is transient HN, which resolves with time. Ureteropelvic junction (UPJ) obstruction is the most common persistent underlying pathology of ANH, with a reported incidence of 10 to 30% 10). The level of ureteral obstruction can be as low as the ureterovesical junction (UVJ). UVJ obstruction usually, though not always, causes dilation of the entire ureter, which is called hydroureter. The incidence of vesicoureteral reflux (VUR) as a cause of ANH has been reported as 10 to 20%. Multicystic dysplastic kidney, as well as ureterocele, ectopic ureter, and duplex system, are other causes of ANH. Relatively rare causes of ANH are PUV, prune belly syndrome, midureteric stricture, and megalourethra.
Antenatal radiological evaluation and intervention for ANH

It is generally recommended that the prenatal identification of HN (APD >4 mm in the 2nd, or >7 mm in the 3rd trimester) requires further follow-up. The presence of findings suspicious for PUV, such as OHA, a dilated bladder, or bilateral hydroureteronephrosis (HUN), warrants monitoring throughout pregnancy, and any comorbid fetal abnormality should also be investigated. Depending on the severity of OHA, fetal imaging may be needed every 4 weeks. In the presence of increasing OHA, fetal intervention such as vesicoamniotic (VA) shunting may be offered. The ideal time to offer prenatal intervention for suspected bladder outlet obstruction appears to be the mid-second trimester. This allows for the return of amniotic fluid, in an effort to promote fetal lung development. A gross predictor of renal function may be obtained by performing a fetal bladder tap with analysis of fetal urine biochemistries and electrolytes 18). Because of the first pass of urine into the bladder, it is recommended to base all decisions on a repeat fetal bladder tap within 48 hours of the initial bladder decompression. If favorable urine electrolytes are obtained, fetal intervention may be offered as an option. According to a recent randomized trial by Morris et al. 19), survival seemed to be higher in the fetuses receiving VA shunting. On the other hand, another recent study supports the view that prenatal interventions do not improve the prognosis of babies with OHA-associated renal and urinary tract anomalies 20).

Postnatal radiological evaluation

The initial postnatal evaluation of ANH depends in part on the degree of HN seen during fetal evaluation. Currently, no study is considered a gold standard for the evaluation of renal obstructive disorders, and complete assessment typically involves a series of studies including US and diuretic renal scintigraphy (DRS). Even though there is no consensus on the optimal APD threshold for determining the need for postnatal follow-up, according to a recent study, isolated HN with an APD of more than 16 mm on US performed at 7-30 days after birth warrants further investigation including VCUG 21). VCUG is frequently performed in conjunction with renal studies to rule out VUR as the cause of HN. It should be kept in mind that VUR may coexist with UPJ obstruction in as many as 10% of children 22). Currently, there is no clear evidence to support or to avoid postnatal imaging for VUR. Neither the grade of the HN nor gender is a predictive factor for VUR in children with ANH. The overall incidence of VUR is up to 30% in children with ANH, including those with resolved HN 3,23). It remains unproven whether the identification and treatment of children with VUR confers any clinical benefit, because most patients with VUR and low-grade HN can be followed without surgical intervention. According to the SFU 2010 recommendations for VCUG timing after renal US, neonates with moderate to severe bilateral ANH need to undergo VCUG within a week after birth. Any episode of moderate or severe HN on two US tests (pre- and postnatal US) warrants VCUG within 4 weeks 10). However, newer SFU recommendations published in 2013 proposed an APD of 9 mm or greater and SFU grade 3 or greater in the third trimester as independent predictors of postnatal intervention, while maintaining that most patients with VUR and low-grade HN can be followed without surgical intervention. Patients with an SFU grade of 4 progressed to surgical intervention at a faster rate than those with a grade of 3 24).
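The follow-up thresholds quoted above (APD >4 mm in the second trimester, >7 mm in the third, with an APD of 9 mm or more in the third trimester flagging a higher likelihood of intervention) amount to a simple triage rule. The sketch below encodes them for illustration only; the function name and the message strings are invented, and this is not a validated clinical algorithm.

```python
def anh_follow_up(apd_mm: float, trimester: int) -> str:
    """Triage an antenatal APD measurement per the thresholds in this review.

    apd_mm    -- renal pelvis anterior-posterior diameter in mm
    trimester -- 2 or 3 (second or third trimester ultrasound)
    """
    threshold = {2: 4.0, 3: 7.0}[trimester]
    if apd_mm <= threshold:
        return "within the normal range for gestational age; no follow-up required"
    # Dilated: the degree of dilation guides the intensity of postnatal work-up.
    if trimester == 3 and apd_mm >= 9.0:
        # 2013 SFU update: APD >= 9 mm in the 3rd trimester predicts intervention.
        return "follow postnatally; higher likelihood of requiring intervention"
    return "follow postnatally with repeat US"

print(anh_follow_up(5.2, 2))   # -> follow postnatally with repeat US
print(anh_follow_up(11.0, 3))  # -> higher likelihood of requiring intervention
```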
Biomarkers and HN

Advances in the field of biomarkers have opened the possibility of predicting the risk of obstruction and renal functional impairment in infants with ANH. These potential biomarkers include transforming growth factor beta 1 (TGF-β1), urinary monocyte chemotactic protein-1 (UMCP-1), urinary neutrophil gelatinase-associated lipocalin (NGAL), and beta-2 microglobulin (β2M). TGF-β1 is known to be associated with renal dysplasia, acquired renal damage, and obstruction 25,26). However, this marker has not been shown to be associated with HN grade 27). UMCP-1 has also been evaluated as a potential marker of UPJ obstruction. A recent case-control study showed that UMCP-1 is elevated in cases of urinary obstruction and correlates positively with T1/2 radiotracer clearance and impaired split renal function on DRS. However, UMCP-1 may be a nonspecific measure of renal injury rather than unique to obstructive nephropathy, as with TGF-β1 28,29). Both urinary NGAL and β2M have been studied as markers of preoperative obstruction and postoperative success. Madsen et al. found that both NGAL and β2M were increased in patients with evidence of obstruction at the time of pyeloplasty and that these levels decreased to values similar to healthy controls after surgical correction 30). Cost et al. also found that NGAL levels were significantly higher in children with UPJ obstruction and returned to normal after surgical repair. Interestingly, they found that NGAL levels were inversely correlated with split renal function, suggesting that NGAL may serve as a marker for both obstruction and renal impairment 31).

Urinary tract infection (UTI) and ANH

The rationale for antibiotic prophylaxis in children with a history of ANH includes the prevention of UTIs, as infants with HN are at increased risk 32). The risk of UTI increases with increasing grade of HN 33). Rates appear to be as high as 40% in children with SFU grade 4 HN 34), with another study estimating the cumulative incidence of UTI as 39%, 18%, and 11% at 36 months of age for severe, moderate, and mild renal pelvic dilation, respectively 33). Several studies report a higher rate in girls than in boys 32,33). Children with HN and obstructive drainage patterns on DRS are at increased risk compared to those without obstructive patterns 34,35). An increased risk is also associated with hydroureteronephrosis (HUN), even without VUR or an obstructive pattern on DRS 35,36). These observations suggest that increased stasis and easier access to a urinary reservoir (as in the case of hydroureter) increase the chance of developing a UTI.

The efficacy of antibiotic prophylaxis

High rates of UTI have been noted despite prophylactic antibiotics in children with HN 33). Similarly, one report found no statistical difference in the incidence of UTI between children with ANH on and off prophylactic antibiotics 37).
In contrast, Estrada et al. 38) observed that in children with a history of ANH and persistent grade II HN secondary to VUR, the use of prophylactic antibiotics significantly reduced the risk of febrile UTIs. According to a recent systematic review 39), offering continuous antibiotic prophylaxis (CAP) to 7 infants who have high-grade HN would prevent 1 UTI, suggesting value in this subgroup of patients. While the literature contains both supportive and contradictory evidence, the growing trend not to place children with ANH on CAP has created varied clinical practice based on anecdotal individual case characteristics. Consensus on what constitutes a risk factor for UTI warranting CAP in this population has yet to be determined. The SFU 2010 recommendations advocated the use of CAP to prevent infant UTIs in high-risk populations, such as those with higher grades of HN, HUN, VUR, or obstructive drainage patterns 10). Herz et al. retrospectively studied predisposing risk factors for febrile UTI by comparing 405 children with and without CAP. The presence of ureteral dilation, high-grade VUR, and UVJ obstruction were independent risk factors for the development of UTI in children with ANH 40). Braga et al. prospectively investigated the impact of risk factors for febrile UTI in 334 infants with postnatally confirmed ANH, and they identified female gender, uncircumcised males, HUN, VUR, and lack of CAP as risk factors for febrile UTI. A subgroup analysis excluding VUR showed that high-grade ANH was also a significant risk factor 41).

Conclusions

ANH, one of the most common abnormal findings on antenatal US, continues to be detected with increasing frequency as second-trimester US has become the standard of care. While rare cases may be associated with pathology such as PUV that requires perinatal intervention, it is both safe and reasonable, in most cases, to wait for spontaneous improvement, with the intensity of follow-up depending on the grade of ANH. US is the mainstay of the postnatal evaluation, and VCUG may be safely reserved for high-grade ANH or a dilated distal ureter. New urinary biomarkers may offer promising potential for more accurate risk stratification in the near future. While there have been conflicting studies about the efficacy of postnatal CAP, it seems reasonable to consider use of CAP in children at high risk, such as females and uncircumcised males with high grades of HN, HUN, or VUR.

Table 1. Degree of Antenatal Hydronephrosis (ANH) according to Renal Pelvic Anterior-posterior Diameter (APD) Adjusted for Gestational Age [table content not recovered in extraction]

Table 2. SFU Grading of Infant Hydronephrosis (partially recovered in extraction):
- SFU Grade II: dilation fills the renal pelvis, with or without major calyceal dilation
- SFU Grade III: Grade II plus uniform dilation of minor calyces, with preserved renal parenchyma
- SFU Grade IV: Grade III plus parenchymal thinning
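The number-needed-to-treat figure quoted in the prophylaxis discussion above (CAP offered to 7 infants with high-grade HN to prevent 1 UTI) is the reciprocal of an absolute risk reduction. A minimal check follows, using invented risks chosen only to reproduce an NNT near 7; they are not the review's pooled estimates.

```python
def number_needed_to_treat(risk_untreated: float, risk_treated: float) -> float:
    """NNT = 1 / absolute risk reduction; risks are given as fractions."""
    arr = risk_untreated - risk_treated
    if arr <= 0:
        raise ValueError("no absolute risk reduction; NNT undefined")
    return 1.0 / arr

# Hypothetical: a UTI risk of ~29% without CAP vs ~14% with CAP in
# high-grade hydronephrosis would give an NNT of about 7.
print(round(number_needed_to_treat(0.29, 0.14), 1))  # -> 6.7
```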
Copy Number Variation Analysis on a Non-Hodgkin Lymphoma Case-Control Study Identifies an 11q25 Duplication Associated with Diffuse Large B-Cell Lymphoma

Recent GWAS have identified several susceptibility loci for NHL. Despite these successes, much of the heritable variation in NHL risk remains to be explained. Common copy-number variants are important genomic sources of variability, and hence a potential source to explain part of this missing heritability. In this study, we carried out a CNV analysis using GWAS data from 681 NHL cases and 749 controls to explore the relationship between common structural variation and lymphoma susceptibility. We found a novel association with diffuse large B-cell lymphoma (DLBCL) risk involving a partial duplication of the C-terminal region of the LOC283177 long non-coding RNA, which was further confirmed by quantitative PCR. For chronic lymphocytic leukemia/small lymphocytic lymphoma (CLL/SLL), known somatic deletions were identified on chromosomes 13q14, 11q22-23, 14q32, and 22q11.22. Our study shows that GWAS data can be used to identify germline CNVs associated with disease risk for DLBCL and somatic CNVs for CLL/SLL.

Introduction

Non-Hodgkin lymphoma (NHL) is a common malignancy of the lymphoid system that encompasses a heterogeneous spectrum of diseases with different clinical, pathological, and morphological characteristics. The most common NHL subtypes are diffuse large B-cell lymphoma (DLBCL), follicular lymphoma (FL), and chronic lymphocytic leukemia/small lymphocytic lymphoma (CLL/SLL), which account for approximately 33%, 20%, and 5-10%, respectively, of all lymphomas in the United States [1]. Despite the successes of recent genome-wide association studies (GWAS) in the identification of novel NHL loci [2]-[7], much of the heritable variation in NHL risk remains to be explained, and it is likely that structural variants other than SNPs might account for some of this missing heritability. Copy number variants (CNVs), detected through molecular cytogenetic techniques or high-density SNP arrays, have been associated with numerous diseases, including several lymphoma subtypes. Known recurrent aberrations have been found in ~80% of CLL patients [8], with deletions in chromosomes 13q14, 17p13, 11q22-23, and 6q, and trisomy 12, being the most frequent [8]. CNV studies using DLBCL tumor tissue revealed that DLBCL subgroups can be segregated by the frequency of particular somatic chromosomal aberrations [9,10]. Thus, aberrations most characteristic of the activated B-cell-like (ABC) DLBCL subtype, which has a poor clinical outcome, include trisomy 3, gains of 3q and 18q21-q22, and deletions of 6q21-q22 and the INK4a/ARF locus on chromosome 9, whereas the germinal center B-cell (GCB) subtype, which is more common in younger adult DLBCL cases and has a better clinical outcome, exhibits frequent amplifications of 12q12, the MIHG1 locus on chromosome 13, and the REL locus on chromosome 2, and deletion of PTEN on chromosome 10 [9,10]. For FL, recurrent copy number alterations have been observed on chromosomes 1, 5-8, 10, 12, 17-19, and 22, some of them correlating with lower survival and/or risk of transformation from FL to DLBCL [11]. Whereas the presence of somatically acquired structural variants in DLBCL and FL has been previously investigated, the role of germline structural variants in NHL susceptibility is relatively unexplored, and their contribution to lymphoma risk remains unclear.
Although existing tools based on SNP array data have lower sensitivity for detecting CNVs than standard laboratory approaches such as multiplex ligation-dependent probe amplification [12], the use of SNP arrays for CNV discovery and detection has several advantages, such as being cost-effective and requiring less sample per experiment than other techniques such as CGH arrays [13]. Thus, numerous studies, including those in colorectal cancer [14], testicular germ cell cancer [15], and breast and ovarian cancer [16], have successfully used GWAS data to identify CNVs that are associated with disease risk. Here, we extended our previous GWAS SNP analysis to report the results of a genome-wide CNV analysis in 681 NHL cases and 749 controls from the San Francisco Bay Area. In this study, we investigated the role of germline structural variants in the risk of DLBCL and FL, and we also sought to determine whether we could detect the presence of somatic structural variants in CLL/SLL using GWAS data generated from blood DNA.

Study Population

A population-based case-control study of NHL (2,055 cases, 2,081 controls) that included incident cases diagnosed from 2001 through 2006 was conducted in the San Francisco Bay Area. Details of the study design and methods have been described previously [3]. Briefly, eligible patients were identified through the cancer registry and met the following criteria at diagnosis: aged 20-85 years, resident of one of the six Bay Area counties, and able to complete an in-person interview in English. Controls were identified by random digit dial and random sampling of Center for Medicare and Medicaid lists, met the same eligibility criteria as cases with the exception of NHL diagnosis, and were frequency-matched to patients by age in five-year age groups, sex, and county of residence. Blood and/or buccal specimens were collected from eligible cases and controls who participated in the laboratory portion of the study (participation rates, 87% and 89%, respectively). To confirm NHL diagnoses and for consistent classification of NHL subtypes using the WHO classification, the study's expert hematopathologist re-reviewed patient diagnostic pathology materials (including diagnostic slides and pathology, immunohistochemistry, and flow cytometry reports) for >98% of consenting cases, with review of diagnostic slides in addition to pathology reports conducted for 54% of cases. Approximately 23% of NHL subtypes were reclassified, and approximately 1% of cases were dropped as not NHL after expert re-review. The U.C. San Francisco Committee on Human Research and the U.C. Berkeley Committee for Protection of Human Subjects approved the study protocols. All study participants provided written informed consent prior to interview and biospecimen collection.

Genotyping

Details of the genotyping and quality control have been published previously [3]. Briefly, DNA from 1,577 study participants was genotyped using the Illumina HumanCNV370-Duo BeadChip (Illumina, San Diego, CA), which comprises over 370,000 markers, including over 14,000 CNV regions. Genotype clustering was conducted with Illumina Beadstudio software from data files created by an Illumina BeadArray reader. Individuals with call rates <95% were excluded from analysis. We checked population stratification using multidimensional scaling (MDS) as described previously [3].
Specifically, we first merged our data with genotypes from 209 unrelated HapMap Phase II individuals from the CEU, YRI, and JPT+CHB panels, and we selected a subgroup of 33,838 unlinked SNPs by pruning those with r² > 0.1, using 50-SNP windows shifted at 5-SNP intervals. We ran the MDS analysis on the matrix of IBS pairwise distances and selected 100 as the number of dimensions to be extracted. Samples with evidence of non-European ancestry were identified by inspection of the MDS plots and excluded from analysis regardless of their self-reported origin, resulting in a final dataset of 681 NHL cases (213 FL, 257 DLBCL, 211 CLL/SLL [180 untreated and 31 treated]) and 749 controls that were used for CNV analysis. Among the 211 CLL/SLL samples, 180 were obtained from patients who had not received any treatment, referred to hereafter as CLL/SLL samples, whereas the remaining 31 were obtained during or after chemotherapy and were therefore analyzed separately, referred to hereafter as treated CLL/SLL samples.

CNV Calling and Analysis

We used the software PennCNV [17] to call and analyze CNVs. PennCNV implements a hidden Markov model (HMM) that combines the total and allelic signal intensity data obtained for each marker on the genotyping array with the SNP population allele frequency to generate CNV calls. Signal intensity data in the form of log R ratio (LRR) and B allele frequency (BAF) values were obtained directly from the Beadstudio software, and the HMM and the population frequency of the B allele were obtained from PennCNV. Quality control filtering was applied at the sample level to exclude unreliable samples. This included removing samples with an LRR standard deviation (LRR_SD) > 0.3, a B allele frequency drift (BAF_DRIFT) > 0.01, or a waviness factor (WF) outside the -0.05 to 0.05 range. Individuals with more than 100 CNVs were considered low-quality samples and were also eliminated from the analysis. A total of 62 NHL cases and 19 controls were excluded in this filtering step. Centromeres and telomeres are known to harbor spurious CNV calls, so coordinates for these regions were downloaded from the UCSC genome browser build 36 (hg18) (http://genome.ucsc.edu/), and individual CNV calls that overlapped 50% or more with these regions were excluded from analysis. Although immunoglobulin regions are also prone to contain spurious CNV calls, we decided not to exclude them because of the significance of immunoglobulin genes in NHL. Finally, CNVs with a confidence score <10, a length <1 kb, or spanning fewer than 5 markers were also discarded. CNV calls were considered novel if they did not overlap with any of the copy number and indel loci catalogued in the Database of Genomic Variants, build 36 (hg18) (DGV, http://projects.tcag.ca/variation/). The UCSC Genome Browser was used to map CNVs to genes, and CNVs were assigned to cytobands if they overlapped 50% or more with the cytoband boundaries. A Fisher's exact test was used to evaluate the association of CNVs with NHL, and Fisher p-values were adjusted for multiple comparisons by FDR using the p.adjust function in R. Signal intensity data (LRR and BAF values) have been deposited in NCBI's Gene Expression Omnibus database (http://www.ncbi.nlm.nih.gov/geo/) and are accessible through GEO Series accession number GSE58718.
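The per-locus association test described above (a Fisher's exact test followed by FDR adjustment with R's p.adjust) has a direct Python equivalent, sketched below; the carrier counts are invented for illustration, with cohort sizes merely echoing the 257 DLBCL cases and 749 controls.

```python
from scipy.stats import fisher_exact
from statsmodels.stats.multitest import multipletests

def cnv_association(carriers_cases, n_cases, carriers_controls, n_controls):
    """Fisher's exact test on a 2x2 table of CNV carriers vs. non-carriers."""
    table = [[carriers_cases, n_cases - carriers_cases],
             [carriers_controls, n_controls - carriers_controls]]
    odds_ratio, p = fisher_exact(table, alternative="two-sided")
    return odds_ratio, p

# Hypothetical per-locus carrier counts: (cases with CNV, controls with CNV).
loci = {"locus_A": (15, 6), "locus_B": (10, 18), "locus_C": (7, 3)}
pvals = [cnv_association(ca, 257, co, 749)[1] for ca, co in loci.values()]

# Benjamini-Hochberg FDR, the Python analogue of R's p.adjust(method = "BH"):
reject, p_fdr, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for name, p, q in zip(loci, pvals, p_fdr):
    print(f"{name}: p = {p:.3g}, p_FDR = {q:.3g}")
```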
CNV Molecular Confirmation

Copy number verification of the duplication observed in 11q25 for DLBCL was performed using quantitative PCR on the 11 DLBCL cases that presented the duplication and 11 random controls with no duplication in the region. Nine primer pairs, referred to as P1 to P9, were designed to amplify regions covering the full length of LOC283177, from the C-terminus on the right to the N-terminus on the left (Table S1), using NCBI's primer designing tool Primer-BLAST (http://www.ncbi.nlm.nih.gov/tools/primer-blast/). The housekeeping gene GAPDH (forward 5'-GTGAAGGTCGGAGTCAACG-3', reverse 5'-TGAGGTCAATGAAGGGGTC-3') was used as the reference gene to normalize for possible variations in DNA concentration and differences in DNA quality between subjects. Real-time quantitative PCR (qPCR) was done with SYBR-Green BioRad Supermix II on a CFX-Connect instrument (BioRad, Hercules, CA). Raw qPCR data were obtained as the Cq value for each primer pair minus the Cq value for GAPDH for each subject (ΔCq). For each individual primer pair, ΔΔCq represents the difference between the average ΔCq of the 11 cases and the average ΔCq of the 11 controls. Relative abundance, or copy number, is calculated using the formula 1/2^ΔΔCq, in which a difference of 1 in ΔΔCq corresponds to a 2-fold difference in template copy number.
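The ΔCq/ΔΔCq arithmetic just described reduces to a few lines of code. In the sketch below, the Cq values are invented to show the mechanics of the 1/2^ΔΔCq transform, not measured data.

```python
import numpy as np

def relative_abundance(cq_target_cases, cq_gapdh_cases,
                       cq_target_controls, cq_gapdh_controls):
    """2^-ddCq relative copy number of a target region, cases vs. controls."""
    dcq_cases = np.mean(np.asarray(cq_target_cases) - np.asarray(cq_gapdh_cases))
    dcq_controls = np.mean(np.asarray(cq_target_controls) - np.asarray(cq_gapdh_controls))
    ddcq = dcq_cases - dcq_controls
    return 2.0 ** (-ddcq)  # 1 cycle earlier amplification = 2x template

# Hypothetical Cq values: after GAPDH normalization, cases amplify ~0.8 cycles
# earlier than controls, consistent with a ~1.7-fold duplication signal.
cases_target, cases_gapdh = [24.1, 24.3, 24.0], [25.0, 25.2, 24.9]
ctrl_target, ctrl_gapdh = [25.0, 24.9, 25.1], [25.1, 25.0, 25.2]
print(round(relative_abundance(cases_target, cases_gapdh, ctrl_target, ctrl_gapdh), 2))
```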
Results

The average number of CNVs per individual in cases ranged from 19.5 in treated CLL/SLL to 22.3 in CLL/SLL, slightly higher than the average of 19.2 CNVs per individual in the controls (Table 1). The average CNV size was markedly larger in CLL/SLL than in the treated CLL/SLL cases, the FL and DLBCL subtypes, and the controls (303 kb in CLL/SLL versus 80-98 kb in treated CLL/SLL, FL, and DLBCL, and 82 kb in controls; Table 1). These data suggest that the treated CLL/SLL samples have a CNV content more similar to FL and DLBCL than to the CLL/SLL samples. Based on the Database of Genomic Variants, 9.4%, 8.2%, 7.5%, and 7.4% of the detected CNVs in CLL/SLL, treated CLL/SLL, FL, and DLBCL, respectively, were novel, whereas only 7.0% of the CNVs in the controls were novel.

DLBCL

A significant association was found between duplications in 11q25 and DLBCL (Table S2 and Figure S1), where 15 (6.2%) of the DLBCL cases showed duplications versus 6 (1.1%) of the controls. Twelve of the 15 DLBCL cases with duplications in 11q25 shared a common duplication of approximately 340 kb located 86 kb upstream of B3GAT1, a member of the glucuronyltransferase gene family whose gene product functions in the biosynthesis of the carbohydrate epitope HNK-1 (human natural killer-1, also known as CD57 and Leu-7). The only gene in the region of the duplication is LOC283177, a long non-coding RNA (lncRNA) gene that was partially duplicated in 11 (4.5%) of the DLBCL samples (Figure 1). LOC283177 was therefore the gene most significantly associated with DLBCL (P_FDR = 2.77×10^-2), followed by DGCR6 and PRODH, both located on chromosome 22q11.21 and duplicated in 10 (4.1%) of the DLBCL cases, although the association for these genes was not significant after correction for multiple testing. Although no cytobands were significantly associated with deletions in DLBCL, we observed that a deletion of the pseudogene ZNF826P in 19p12 approached significance (P_FDR = 7.84×10^-2). When we stratified the DLBCL cases by age, we found that, although non-significant after correction, the 11q25 duplication was more strongly associated with the younger group (<60 years old, n = 95, P = 7.64×10^-4), which generally corresponds to the GCB subtype, than with the older DLBCL group (≥60, n = 144, P = 1.79×10^-3). Moreover, although the number of cases was lower, the association of LOC283177 duplications with DLBCL was stronger in the younger group (P_FDR = 2.07×10^-2) than in the combined DLBCL cohort. To verify the partial duplication of LOC283177, we performed qPCR on the 11 DLBCL samples that presented the duplication and 11 random controls with no duplication in the region. The PCR data for the nine primer pairs designed to cover the gene showed that the relative abundances for the four primers on the C-terminus were, on average, 1.8-fold in the DLBCL cases compared to the controls, whereas the other five primers on the N-terminus showed no evidence of duplication (average 0.98-fold; P = 0.004), confirming the results of the CNV analysis that suggested a partial duplication of LOC283177 in DLBCL (Figure 1).

FL

We did not find any CNV significantly associated with FL at an adjusted FDR p-value <0.05. The strongest association (P = 7.27×10^-4) was found for deletions of chr3q13.31, where 12 (5.9%) FL cases presented deletions versus 10 (1.4%) of the controls (Table S3). Among all genes, ZNF658 on chromosome 9 was the one most strongly associated with aberrations in FL, with 7 (3.4%) of the FL cases presenting duplications in this gene (P = 5.35×10^-4).

CLL/SLL and treated CLL/SLL

Among the known CLL/SLL recurrent deletions, statistically significant associations were observed for deletions of chromosomes 13q14, 11q22-23, and 14q32 in our study (Table S4), whereas no significant associations were found between 6q or 17p deletions and our CLL/SLL cases. In agreement with previous studies, del(13q14) was the most significant aberration observed in our CLL/SLL cases (Table S4 and Figure S2). Among the 31 (20.9%) CLL/SLL cases that presented deletions in this region, 25 had del(13q14) as the sole abnormality, 4 cases also presented del(11q), and 2 del(17p). The next most strongly associated aberrations were deletions of 14q32 (Table S4), which were observed in 36 (24.3%) of the CLL/SLL cases, followed by deletions of 11q22-23 (Table S4). Additional statistically significant associations were found on chromosome 22q11.22 (Table S4), where deletions associated with CLL in the region were previously found to be related to the rearrangement of the immunoglobulin lambda chain locus [18]. As expected, the most frequently deleted genes were located in these cytobands (Table S5). Thus, members of the DLEU family, ST13P4, and the miRNAs MIR16-1 and MIR15, all located in 13q14, were the most commonly deleted genes in CLL/SLL, with DLEU2 deleted in 27 (18.2%) of the CLL/SLL cases (P_FDR = 2.24×10^-17). Significant gene deletions were also observed for the pseudogene ADAM6 in 14q32 (P_FDR = 2.08×10^-5), the miRNA MIR650 in 22q11.22 (P_FDR = 5.27×10^-4), and a cluster of genes in 11q22-23 (6.27×10^-3 ≤ P_FDR ≤ 2.46×10^-2). Although some duplications were observed at a nominal p-value <0.05, none of these remained significant after correction.
Similarly, in the treated CLL/SLL group (n = 24), some associations were observed that were significant before correction, with deletions in 14q32 being the most significantly associated aberration (P = 5.87×10^-3), although this did not remain significant after adjustment for multiple comparisons. Of note, in contrast to the CLL/SLL group, no deletions were observed in 13q14, 11q22-23, or 22q11.22 in any of the treated CLL/SLL cases.

Discussion

In this study we carried out a CNV analysis of 681 NHL cases and 749 controls to explore the relationship between common structural variation and lymphoma risk. A significant association in 11q25 was observed for DLBCL, where we found a partial duplication of the C-terminal region of the LOC283177 lncRNA that was further confirmed by qPCR. Although no association of this uncharacterized gene with DLBCL has previously been reported, a duplication in the region overlapping LOC283177 was recently described in a CNV study of acute myeloid leukemia [19]. Most lncRNAs remain uncharacterized, but a significant number have been shown to exhibit cell-specific expression and association with human diseases, with several lncRNAs being dysregulated in various diseases, especially cancer [20]. Whereas the biological relevance of LOC283177 in DLBCL needs further investigation, our results suggest that this lncRNA could be a potential susceptibility locus for DLBCL. Interestingly, the association of LOC283177 with DLBCL was slightly stronger in the younger DLBCL group than in the whole DLBCL cohort. Although this suggests that the duplication might be more characteristic of the GCB subtype, further studies with confirmed pathology would be needed to support this observation. In the FL subtype, deletions of chr3q13.31, a region associated with CNVs in osteosarcoma [21] and frequently deleted in human cancers [22], were observed in 5.9% of the FL cases, although the association did not remain significant after multiple testing correction. Further investigation of the association of chr3q13.31 deletions with FL risk may be warranted. On the other hand, in agreement with previous studies, strong associations were observed between CLL/SLL and deletions on chromosomes 13q14, 11q22-23, 14q32, and 22q11.22. Although our analysis focused on germline structural variants, it is not uncommon to observe these somatically acquired structural variants in blood from CLL/SLL patients, owing to the high content of circulating tumor cells in CLL/SLL. Of note, with the exception of 14q32, none of the treated CLL/SLL samples presented deletions in these regions, although this could be a power issue due to the low number of samples analyzed, or a consequence of the somatic nature of these aberrations, which are expected to be less prevalent in CLL/SLL patients undergoing treatment. Although the sample size of our study is small and the genotyping platform used was a low-density SNP array, we were able to identify with high statistical significance the most common established somatic aberrations in the CLL/SLL subtype, as well as a novel duplication in germline DNA of DLBCL cases, further confirmed by qPCR, suggesting the validity of our approach and the potential role of germline CNVs as NHL susceptibility loci. Nonetheless, our findings will require further validation in independent studies.
Additionally, it is possible that our study might be underpowered to detect CNVs of lower frequencies, and larger sample sizes will be necessary to further investigate the effects of CNVs on lymphoma susceptibility.

Supporting Information

Figure S1. CNV results for DLBCL cases and controls in the 11q25 chromosomal region. Deletions and duplications in the region are shown in red and green, respectively. Coordinates are given with respect to the NCBI36/hg18 assembly. (DOC)

Figure S2. CNV results for CLL/SLL cases and controls in the 13q14 chromosomal region. Deletions and duplications in the region are shown in red and green, respectively. Coordinates are given with respect to the NCBI36/hg18 assembly. (DOC)
Clinical Outcomes of Minimally Invasive Posterior Cervical Decompression Using a Tubular Retractor for the Treatment of Cervical Spondylotic Myelopathy: Single-center Experience with a Minimum 12-month Follow-up

Objective: Recently, with the use of the tubular retractor system, minimally invasive posterior cervical decompression has become possible. Improvements in surgical technique have made it possible to reduce tissue damage during the operation, which allows less postoperative pain and shorter hospital stays. The objective of this study is to evaluate the safety and efficacy of a minimally invasive surgical technique using a tubular retractor system. This study is a series of consecutive mid-term follow-up reports from controlled clinical trials held at the authors' institution using a minimally invasive surgical technique.

Methods: Twenty-one patients underwent minimally invasive posterior cervical decompression. Medical records, including demographic data, diagnoses, complications, and degree of symptom relief, were recorded and evaluated. Clinical outcomes were assessed by neurological status and visual analog scale (VAS) scores for neck and arm pain.

Results: Muscle weakness improved in all patients, of whom 80.9% (17/21) showed complete resolution of sensory deficits and 19.1% (4/21) showed partial improvement. An analysis of the mean VAS and Neck Disability Index scores revealed significant improvement at the final follow-up. The mean Japanese Orthopedic Association score for cervical myelopathy (C-JOA score) also improved, from a preoperative value of 11.2±2.6 to 16.2±3.1 at the last follow-up. The recovery rate calculated using the Hirabayashi method averaged 53.2±22.0%.

Conclusion: Our short-term experience with relatively good clinical outcomes implies that this minimally invasive technique is a valid alternative option for the treatment of cervical spondylotic myelopathy.

INTRODUCTION

Cervical spondylotic myelopathy (CSM) is one of the most common disorders of the cervical spine requiring surgical treatment. It is characterized by the development and progression of degenerative changes associated with the normal aging process. Patients with cervical spinal stenosis have a tendency to suffer chronic myelopathy and carry a high risk of acute spinal cord injury after trauma. Cervical canal compromise mostly results from two types of spinal degenerative processes: (1) ventral cord compression from bulging discs and osteophytes, and (2) posterior compression by facet hypertrophy and thickening of the ligamentum flavum. The average anteroposterior diameter of a normal cervical canal on plain radiographs is 17 mm, whereas symptomatic stenosis generally occurs when the diameter is less than 13 mm 33). Various anterior and posterior surgical approaches for the treatment of CSM have been introduced, and studies have shown differing results depending on the approach 8,16,20-22,24,25,28). Given the disadvantages of detaching the cervical paraspinal muscles from the laminae and the spinous processes in conventional posterior approaches 7,26), the anterior approach has become dominant among spinal surgeons worldwide. Surgical trauma to the extensor cervical muscles in conventional posterior cervical operations is a major cause of various postoperative complications 2,14,18,36), such as persistent neck and shoulder pain, postoperative kyphosis, and spinal instability.
The current trend of favoring anterior approach surgery, even in patients with normal sagittal balance and posterior compressive pathology, is of concern given the possibility of complications such as adjacent-level disease and recurrent laryngeal nerve and esophageal injury 1,5,9,17). Recently, with the increasing popularity of minimally invasive techniques, the posterior cervical approach has been gaining renewed interest 4,6,13). With the aid of a tubular retractor system, minimally invasive posterior cervical decompression has become possible, even in multi-level cervical disease. These advances in surgical technique have led to a greater reduction in tissue damage during the operation, which reduces postoperative pain, shortens hospital stays, and allows a quicker return to activities of daily living. The objective of this study is to introduce a novel minimally invasive surgical technique for multi-level posterior cervical decompression using a tubular retractor system and to evaluate the safety and efficacy of this technique at short-term follow-up. This study is a series of consecutive mid-term follow-up reports from controlled clinical trials held at the authors' institution using a minimally invasive surgical technique 15).

Materials and Methods

Twenty-one patients suffering from CSM underwent minimally invasive posterior cervical decompression using a tubular retractor. The operations were performed between April 2012 and May 2014. The indications for surgery were (1) presence of CSM confirmed by radiologic imaging studies, (2) presence of symptomatic myelopathy for more than 6 months, (3) a compression ratio of less than 0.4, indicating flattening of the spinal cord, (4) a transverse area of the cord less than 40 mm², (5) predominantly dorsal cord-compressing pathology, such as ossification of the ligamenta flava (OLF), and (6) failure of conservative treatment over a period of 6 weeks. The exclusion criteria were cervical myelopathy with tumor, trauma, severe ossification of the posterior longitudinal ligament (OPLL), herniated disc, rheumatoid arthritis, or pyogenic spondylitis, and the presence of other combined spinal lesions. The pathologic level and extent of spinal cord compression were confirmed by magnetic resonance imaging (MRI) and post-myelography computed tomography (CT). In addition, cervical MRI was performed in three different neck positions (neutral, flexion, and extension) in all patients to determine whether the spinal canal was dominantly compressed by posterior or anterior pathology. Patients with dominant anterior compression (such as multi-level intervertebral disc bulging) were excluded from the study and underwent surgery using an alternative anterior approach.
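Criteria (3) and (4) above are the two purely quantitative thresholds in this series, and they can be expressed as a trivial screening check. The sketch below is illustrative only: the function and argument names are invented, and real candidacy decisions rest on the full clinical criteria listed in the text.

```python
def meets_quantitative_criteria(compression_ratio: float,
                                cord_area_mm2: float) -> bool:
    """Check the two measurable inclusion thresholds used in this series.

    compression_ratio -- sagittal-to-transverse cord diameter ratio
                         (< 0.4 indicates flattening of the cord)
    cord_area_mm2     -- transverse cord area on axial imaging
                         (< 40 mm2 indicates significant compression)
    """
    return compression_ratio < 0.4 and cord_area_mm2 < 40.0

print(meets_quantitative_criteria(0.32, 35.1))  # True: both thresholds met
print(meets_quantitative_criteria(0.45, 35.1))  # False: cord not flattened
```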
The demographic and intraoperative data of the patients are listed in Table 1. The study included 9 men and 12 women. All patients presented with symptoms of cervical myelopathy: clumsiness, numbness of the upper and lower extremities, gait disturbance, urinary disturbance, and so on. The average age of the study subjects at the time of operation was 56.7±14.1 years, and the average body mass index was 26.3±3.4 kg/m². The mean visual analogue scale (VAS) scores for preoperative neck pain and radicular arm pain were 6.4±2.7 and 8.9±2.1, respectively, and the average duration of pain was 17.4±8.7 months. One of the patients had undergone previous anterior cervical fusion at a local clinic for a herniated disc. Eight patients were operated on at one segment, four patients at two segments, and nine patients at three segments. The hospital charts and follow-up medical records of all patients were carefully reviewed. Outcomes were assessed preoperatively and postoperatively using the Japanese Orthopedic Association scoring system for cervical myelopathy (C-JOA score) 34), the recovery rate as calculated by Hirabayashi's method 12,34), a modified version of the Oswestry Disability Index called the Neck Disability Index (NDI) 30,38), and VAS scores for neck and radicular arm pain 38). All parameters were statistically analyzed. The data are expressed as mean±standard deviation (SD). A result was considered statistically significant if the p-value was less than 0.05.

Surgical Technique

The detailed surgical technique is described in our previous report 15). Before surgery, each patient underwent evaluation with dynamic radiographs to rule out obvious instability, and MRI or post-myelography CT to define the necessary extent of the surgery. The results of routine medical and laboratory evaluations were obtained. The anesthesia team was informed prior to surgery of the possible need for fiberoptic intubation. The operation was performed under general anesthesia, and intraoperative somatosensory evoked potentials (SEPs) were monitored in all patients.

RESULTS

Twenty-one patients underwent minimally invasive posterior cervical decompression using a tubular retractor system and a surgical microscope. The mean follow-up duration was 13±5.3 months. Average intraoperative blood loss was 61±33 mL, average operating time was 73.3±21.4 min, and average length of hospital stay was 1.4±1.5 days (Table 1). Patients with sedentary jobs usually returned to work within a week after discharge. Muscle weakness improved in all patients. Sensory deficits resolved in 17 patients and improved in 4 patients. Analysis of the mean VAS scores for radicular pain and neck pain showed significant improvement over preoperative values at the final follow-up (Fig. 1). The mean VAS score for neck pain decreased from 6.2±2.2 to 5.8±2.0 immediately postoperatively, reaching 2.1±1.3 at 3 months and 1.8±0.9 at the last follow-up visit. VAS scores for radicular arm pain also decreased, from 8.9±2.8 preoperatively to 3.6±1.4 immediately postoperatively, reaching 1.8±0.9 at 3 months and 0.9±0.4 at the last follow-up. The mean NDI decreased significantly, from 68.3±9.1 preoperatively to 13.3±10.4 (p<0.05) at the final follow-up. The mean C-JOA score also improved, from a preoperative value of 11.2±2.6 to 16.2±3.1 (p<0.05) at the last follow-up, and the recovery rate as calculated using the Hirabayashi method averaged 53.2±22.0% (Table 2). There were no significant operation-related complications such as cerebrospinal fluid (CSF) leakage, postoperative infection, or instability.
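The recovery rate reported above follows Hirabayashi's method, which normalizes the C-JOA gain by the maximum possible improvement (the C-JOA full score is 17 points). A minimal sketch with a hypothetical single patient:

```python
def hirabayashi_recovery_rate(preop_joa: float, postop_joa: float,
                              full_score: float = 17.0) -> float:
    """Hirabayashi recovery rate (%) = (postop - preop) / (full - preop) * 100."""
    return 100.0 * (postop_joa - preop_joa) / (full_score - preop_joa)

# Hypothetical single patient improving from 10 to 14 points.
print(f"{hirabayashi_recovery_rate(10.0, 14.0):.1f}%")  # -> 57.1%
```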
DISCUSSION

The choice of approach for cervical spinal decompression usually depends on various factors, including the extent of disease, sagittal curvature of the cervical spine, prior surgery, general condition of the patient, skill and familiarity of the surgeon, severity of canal compression, and intervertebral mobility at the level of maximum compression. The anterior approach offers a relatively simpler route to the spine and more direct decompression of ventral spinal pathology than does the posterior approach. On the other hand, disadvantages of the anterior approach include potential complications involving the anterior neck structures, dysphagia, recurrent laryngeal nerve injury, and adjacent segment degeneration following the loss of one or more motion segments. Posterior decompression provides a safer route to the thecal sac and avoids many of the risks of anterior exposure. However, ventral compressive pathologies such as disc herniation, osteophytes, or OPLL may be neglected when using a posterior approach. Although a simple multilevel laminectomy or laminoplasty is a relatively straightforward procedure, it often results in significant postoperative neck pain and longer hospitalization. In addition, CSF leakage, wound-related problems, postoperative kyphosis, and instability are not uncommon after conventional posterior operations. With the recent advancement of specialized surgical instruments and access devices, minimally invasive spinal surgery has proven to be a useful tool for the treatment of various spinal diseases while minimizing soft tissue damage. Application of this technique to the cervical spine followed naturally, and posterior minimally invasive cervical surgery has recently been performed at many institutions to determine the feasibility and efficacy of such procedures. Recent studies using a trans-muscular working channel to perform minimally invasive decompression for radiculopathy and myelopathy concluded that the basic technique is safe and feasible 4,32). To date, there are only a few reports of posterior cervical decompression using different minimally invasive techniques. Routine three-position cervical MRI was performed in our series of cervical spondylotic myelopathy patients to evaluate the characteristics of canal compression and to aid the surgeon's selection of an appropriate surgical technique. Cervical dynamic MRI is useful for accurately determining the number of levels at which the spinal cord is compromised and for evaluating the degree of narrowing of the spinal canal 3,37). Using this radiologic information, we selected cases with more dorsal than ventral compression for this series. As mentioned above, individuals with dominant anterior compression were excluded and underwent alternative anterior surgery. Many important factors influence the choice of approach and surgical technique in cervical spinal surgery, and dynamic MRI may provide crucial information. A tubular retractor is able to provide a wide field of visualization through a small skin incision, and with successive angulation of the working channel into a more medial position, access to the contralateral dorsal spinal canal becomes possible, which makes it superior to the unilateral open technique. Visualization of the interface between the spinal canal, ligamentum flavum, and exiting nerve root is enhanced by an operating microscope, which provides a three-dimensional view; with the microscope-assisted procedure, we could accomplish bilateral decompression via a unilateral approach, the so-called "unilateral approach for bilateral decompression (ULBD)" 19,23,29). During the procedure, repositioning the working channel more medially enabled us to drill the base of the spinous process and the ventral surface of the contralateral lamina. Exposure of the contralateral attachment of the ligamentum flavum is critical to ensuring adequate bilateral decompression, and it is important to keep the ligament intact in order to protect the spinal cord.
This minimally invasive posterior cervical decompression technique using the tubular retractor has many advantages, such as a small skin incision, gentle tissue dissection, excellent visualization, and the ability to achieve results equivalent to conventional open techniques. The open posterior cervical approach requires paraspinal muscle dissection and partial medial facetectomy. Stripping of the muscles may damage their innervation and blood supply, which may cause postoperative neck pain with temporary or persistent functional disturbance and possibly affect stability in multi-level procedures 14,36). A minimal skin incision provides a better cosmetic result, minimizes paraspinal muscle trauma, and contributes to a decrease in postoperative neck pain and dysfunction. Conventional laminoplasty causes cervical instability and kyphosis when more than 50% of a unilateral facet joint or 25% of the bilateral facet joints is resected 10). Our minimally invasive technique can minimize facet joint resection using ULBD, which requires only a partial hemilaminectomy to enlarge the canal. Moreover, the operating time, estimated blood loss, and hospital stay were also lower in our patients compared to published data on conventional open surgery 11,12,31,35). On the other hand, minimally invasive decompression carries a higher risk of dural and nerve injury, CSF leakage, and postoperative seroma formation compared to conventional laminectomy or laminoplasty 4,27,32). Because a high-speed drill is used to undercut the spinous processes and contralateral lamina through a tubular retractor, the restricted operative field can lead to injury to the dura. Incidental durotomy can generally be managed with dural sealant materials, but persistent leakage may require direct repair followed by a lumbar drain. Careful use of bipolar cautery, both to minimize excessive bleeding from the venous plexus and to avoid neural injury, is an important consideration. The high-speed drill may cause local thermal injury, so careful irrigation must be ensured. As with any other minimally invasive technique, there is also a chance of postoperative seroma formation within 24 to 72 hr after surgery. Owing to the smaller canal diameter in the cervical spine, a relatively small seroma can cause cord compression even when a postoperative drain is used. Moreover, as described in our case presentation, asymptomatic spinous process fracture is possible owing to lateral angulation of the tubular retractor in cases requiring additional foraminal decompression. We experienced two cases of single-level spinous process fracture among the study subjects, but neither had significant related symptoms. Furthermore, spinal canal enlargement is somewhat limited compared to conventional posterior techniques, in that one is not able to push down the dura to obtain a better view, as in lumbar surgery. Decompression of canal stenosis caused by posterior pathologic lesions such as OLF is very effective with this technique, but in the case of anterior cervical pathologic lesions, multi-level canal stenosis involving more than three segments, or developmental canal stenosis, an anterior approach or conventional laminoplasty may be a better option. Without the benefit of the wide viewing area possible in conventional open surgery, the risk of incomplete decompression also exists, especially in inexperienced hands.
This study demonstrates the feasibility of decompressing the cervical spinal canal using a unilateral tubular technique. Minimally invasive surgical techniques involve a very steep learning curve, and considerable experience is required to decompress the neural structures adequately. The operative field of a tubular retractor is limited, making it difficult to fully ascertain the amount of bony work that has been performed. Furthermore, working under a microscopic view can be disorienting. Ensuring satisfactory canal decompression while maintaining the integrity of the neural elements therefore requires considerable training and experience. Long-term follow-up studies with larger sample sizes are required to determine the benefits of minimally invasive surgery compared with traditional open laminectomy. CONCLUSION In our clinical series of minimally invasive posterior cervical decompression using a tubular retractor system, we demonstrated safety and relatively good clinical outcomes despite a limited number of patients and a short-term follow-up period. These techniques have the theoretical advantages of reducing morbidity, blood loss, perioperative pain, and length of hospital stay compared with conventional open posterior cervical approaches. This minimally invasive posterior technique could be a useful alternative when choosing a surgical method for cervical myelopathy. However, such a minimally invasive technique entails a steep learning curve, and risks of possible complications, such as dural and nerve injury, CSF leakage, and postoperative seroma formation, do exist. Further studies with more patients and longer follow-up are required to determine the exact benefits compared with conventional open surgery.
The kinetic equation for the quark Wigner function in a strong gluon field The Vlasov-type quantum kinetic equation for deconfined quarks in a strong quasi-classical gluon field is derived in the covariant single-time formalism. The system of equations for the Wigner function components is obtained by spinor and color decomposition. The field-free and vacuum solutions of the kinetic equation are found, and the conservation laws are derived. The flux-tube configuration of the gluon field is discussed in detail. Introduction The creation and evolution of the quark-gluon plasma (QGP) in ultra-relativistic heavy ion collisions is a very complex process occurring on a time scale of 1 fm/c and at energy densities above 1 GeV/fm^3. The parton gas probably comes to an equilibrium state in the process of hadronization. Details of this transition are of great importance for the interpretation of experimental data. If the QGP is really formed in experiments at the RHIC, its thermalization time must be so small that it presents a serious problem for theoretical explanation [1]. Several theoretical tools are applied to describe the various physical phenomena accompanying the collision. The kinetic equation is the basic means for the investigation of the non-equilibrium evolution [2]-[17]. The flux tube model [18] and the Schwinger mechanism of pair production are often used to study the early stage of QGP formation. Vacuum pair creation is an essentially non-perturbative effect, whose description requires the exact solution of the field equations. The resulting source term in the kinetic equation (KE) describes the production rate and momentum spectra of the created particles [19]. The QED example shows that the evolution of a plasma created from the vacuum depends strongly on the structure of the source term [20]-[22]. A semi-phenomenological source term based on the classical Schwinger formula (e.g., [23,24]) can result in significant inaccuracy for a rapidly varying electric field. In particular, such a source cannot correctly reproduce the relation between the production rates of two components with different masses and statistics [25,26]. The main reason is that the correct source term has a non-Markovian time dependence [19,20], in contrast to the Schwinger formula. The most interesting effects arising from this feature are the suppression of boson creation with zero kinetic momentum and the suppression of the influence of the statistical factor. A Schwinger-like source contains the statistical factor as a multiplier and produces suppression of fermion creation (Pauli blocking) and enhancement of boson creation. The correct source term [20] contains the statistical factor inside the time integral (the non-Markovian property), which facilitates fermion creation and reduces boson creation at large particle density. The joint action of these factors may cause an effect of "statistics inversion" at short time scales, when the fermion production rate is greater than the boson one. This effect can be so strong that heavy fermions are created more actively than light bosons. The other noteworthy feature of a proper source term is that the momentum distribution of the produced fermion pairs is close to the quasi-equilibrium one in both the transverse and longitudinal directions. Such a source term, together with a mean free path small compared to the Compton length, can facilitate more rapid formation of a quasi-equilibrium state of the QGP.
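For orientation, the non-Markovian fermion source term discussed above can be sketched in the form familiar from the QED kinetic-equation literature cited here ([19,20]); the notation (distribution function f, transverse energy ε⊥, total energy ω, amplitude Δ) is taken from that literature and is an assumption rather than a formula of the present paper:

% Hedged sketch of the non-Markovian source term for fermion pair
% creation in a time-dependent electric field E(t) (QED notation).
\begin{equation}
S(\mathbf{p},t) = \frac{1}{2}\,\Delta(\mathbf{p},t)\int_{t_0}^{t} dt'\,
\Delta(\mathbf{p},t')\,\bigl[1-2f(\mathbf{p},t')\bigr]\,
\cos\!\Bigl(2\int_{t'}^{t}\omega(\mathbf{p},\tau)\,d\tau\Bigr),
\qquad
\Delta = \frac{eE(t)\,\varepsilon_\perp}{\omega^{2}},\quad
\varepsilon_\perp^{2}=m^{2}+p_\perp^{2},\quad
\omega^{2}=\varepsilon_\perp^{2}+p_\parallel^{2}(t).
\end{equation}

The statistical factor [1 − 2f] standing under the time integral, rather than multiplying it, is precisely the non-Markovian property described above; for bosons it would be replaced by [1 + 2f].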
The correct source term allows one to separate the vacuum polarization effects, which can exceed the contribution of real particles to the thermodynamical variables [27]. The kinetic equation taking the vacuum creation effect into account can be obtained in the Wigner function approach, which has been widely used for the description of relativistic quantum systems during the past few decades. It is more convenient to use the single-time variety of the Wigner function [28,29] to solve the initial value problem for the collisions at RHIC. A covariant version of this approach has recently been developed in [30] for the QED case within the framework of the proper time method on space-like hyperplanes. We use this approach in the present work for the derivation of the kinetic equation for the covariant single-time Wigner function (WF) in the quark sector of QCD in the presence of a strong quasi-classical background gluon field. The paper is organized as follows. We introduce the equations of motion for quarks and quasi-classical gluon fields in the covariant proper time approach in Sect. 2. The single-time Wigner function is introduced here as the basic element of the kinetic theory. The physical observables are bilinear compositions of the quark field operators and can be calculated via the WF. The kinetic equation for the Wigner function is derived in matrix form in the spinor and color spaces in Sect. 3. The spinor decomposition of this KE is also carried out for more convenient analysis. This representation of the KE is useful for the construction of the conservation laws (Sect. 5). The vacuum and field-free solutions are obtained in Sect. 4. The vacuum WF plays the role of the initial value for the solution of the Cauchy problem. A special case of the kinetic equation, inspired by the flux-tube model [18] of vacuum quark production under the conditions of ultra-relativistic heavy ion collisions, is investigated in Sect. 6. A comparison with the QED case is made in Sect. 7. Finally, Sect. 8 summarizes some results of the work. We use the system of units with ħ = c = 1 and the metric signature (1, −1, −1, −1). Basic Equations We start with the QCD Lagrange density (for one quark flavor, g > 0) with the quasi-classical gluon field, where t^a = λ^a/2 are the generators of the SU(N) gauge group in the fundamental representation. In particular, λ^a are the Gell-Mann matrices for SU(3) and the Pauli ones for the SU(2) group. The covariant proper time derivative ∂_τ is defined by means of the space-time parametrization by a family of space-like hyperplanes σ(n, τ): σ(n, τ) ⊥ n^μ, n_μ x^μ = τ, n^2 = 1. Hence there is a covariant decomposition in which n^μ is a unit time-like vector, Δ^{μν} denotes the transverse projector, and g^{μν} is the metric tensor. An analogous decomposition is produced for any vector f^μ and for any anti-symmetric tensor c^{μν}, so that c⊥_μ = c_μ. In these terms, the basic equations of motion for the quark field operators follow. The mean gluon fields obey the equations in which E^ν = n_μ F^{μν} and F⊥^{μν} represent the color electric and color magnetic fields, respectively, and J^ν is the color current; the symbol ⟨...⟩_σ designates the average with the statistical operator of the system on a hyperplane in the Heisenberg picture. The basic element of the statistical description is the covariant Wigner function (WF) on the space-like hyperplane σ(τ) [28,30]; here the upper and lower indices are the color and spinor ones, respectively, ρ is the one-particle density matrix of the quarks, and U is the unitary link operator that provides the gauge invariance of the WF [29].
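The display equations defining the WF and the link operator were lost in extraction. As a point of reference, a hedged sketch of the standard path-ordered gauge link along the straight line between the two quark-field arguments (the construction whose integral is discussed next) reads:

% Hedged reconstruction: path-ordered link along the straight line
% z(s) = x_2 + s (x_1 - x_2), s in [0,1]; the sign of the exponent
% depends on the covariant-derivative convention and is an assumption.
\begin{equation}
U(x_1,x_2) = P\exp\!\Bigl(-ig\int_0^1 ds\,(x_1-x_2)^{\mu}
A_{\mu}\bigl(x_2+s\,(x_1-x_2)\bigr)\Bigr),
\qquad A_\mu = A_\mu^{a}\,t^{a}.
\end{equation}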
The integral is taken along the straight line between the points x_1 and x_2, which are connected by a space-like interval. The dynamical variables can be expressed in terms of the Wigner function. For example, the color and electromagnetic current densities are given by expressions in which the trace Tr is carried out over the spinor and color indices. The brackets ⟨...⟩ denote the covariant momentum average on the hyperplane p · n = 0 [30]. The integral in (16) provides the locality of the corresponding observables, because that function plays the role of the three-dimensional delta function on the hyperplane σ(n, τ). The color WF has a rather complex matrix structure; therefore, it is convenient to use the corresponding decompositions in the spinor and color spaces. The spinor decomposition in a complete basis of the Clifford algebra is given by (18), where the coefficient functions a (scalar), b_μ (vector), c_{μν} (antisymmetric tensor), d_μ (axial vector), and e (pseudo-scalar) are Hermitian color matrices; the symbol "tr" denotes the trace over the spinor indices only, and γ^5 = −iγ^0 γ^1 γ^2 γ^3. The algebraic structure of the WF in the color space of N × N matrices is represented by the observable color singlet W^s and the unobservable color multiplet W^a, where 1 is the unit matrix and tr_c is the trace over the color indices.
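The display form of the spinor decomposition (18) was also lost in extraction; with the coefficient names used above, the standard Clifford-basis expansion would read as follows (a hedged reconstruction — the overall normalization and the placement of factors of i are assumptions):

% Hedged reconstruction of the expansion (18): scalar a, vector b_mu,
% antisymmetric tensor c_{mu nu}, axial vector d_mu, pseudo-scalar e;
% each coefficient is a Hermitian N x N color matrix.
\begin{equation}
W = \frac{1}{4}\Bigl(a + \gamma^{\mu}b_{\mu}
  + \tfrac{1}{2}\,\sigma^{\mu\nu}c_{\mu\nu}
  + \gamma^{5}\gamma^{\mu}d_{\mu} + i\gamma^{5}e\Bigr),
\qquad
\sigma^{\mu\nu}=\tfrac{i}{2}\,[\gamma^{\mu},\gamma^{\nu}].
\end{equation}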
Kinetic Equation To derive the KE for the Wigner function, we calculate the proper time derivative of Eq. (10) and substitute the time derivative of the field operators from the field equations (7). The emerging terms with the perpendicular derivative ∂⊥ are transformed via integration by parts. The derivative rules for the link operator follow from the known formula [5,29] for δU(x_1, x_2), in which the notation for the "Schwinger string" [29] A is introduced. Then we express the variable y as a momentum derivative and, after some algebra, obtain the exact equation of motion for the WF. This equation is brought to a local form by means of the gradient expansion of the gluon field. In the first order of this procedure, we obtain a kinetic equation of the Vlasov type, which is valid for a rather slowly changing gluon field. Despite its rather compact form, this equation is a very complicated matrix equation in the direct product of the 4 × 4 spinor and N × N color spaces. It is convenient to expand the WF in some basis of this space in order to separate the different physical contributions. We first perform the spinor decomposition (18) in the KE. We obtain the set of equations by calculating the traces of Eq. (28) with the basis matrices of the Clifford algebra, where ε_{αβγ} denotes the contraction of the normal n^μ with the totally anti-symmetric unit tensor ε_{αβγδ}, i.e., ε_{αβγ} = n^ρ ε_{ραβγ}, ε_{0123} = +1. For the analysis of concrete field configurations, it is more convenient to rewrite that system in terms of projections on the time-like and space-like directions. The non-Abelian character of the gluon field is displayed, in particular, in the presence of the vector potential on the right-hand side of these KEs, which is necessary for the gauge invariance of the theory. This is the resulting set of KEs for the description of deconfined quarks in a strong quasi-classical gluon field. Field-Free Limit and Vacuum Solution The system of equations (35)-(42) can be solved exactly in the field-free limit A^μ = 0, assuming that all derivatives vanish. The general solution of (43) has an arbitrary momentum dependence of a(p). The equilibrium WF [33] and the vacuum solution follow from Eq. (44) as particular cases. We perform a direct calculation of the function (10) to select the vacuum case within the solution class (44). The fermion operators on the hyperplane obey the free anticommutation relations [30,31]. This allows one to write down the commutator (11) in a form in which the symbol : : indicates normal ordering. We obtain the vacuum WF by averaging over the vacuum state and substituting this representation into Eq. (10), where ω(p⊥) = √(m^2 − p⊥^2). This function is degenerate in color space as a consequence of the initial assumption about the structure of the Lagrange density (1). The WF (47) is the basis for the solution of the Cauchy problem for the system of equations (35)-(42). It can be proved that the vacuum solution, as well as the field-free ones, gives no contribution to the current densities (14), (15). In the field-free case, the kinetic equation has, besides the vacuum solution (47), other solutions of the type (48), where the η_a are arbitrary real numbers. Conservation Laws We calculate the divergence of the electromagnetic current (15) using the spinor and color decompositions (18) and (20). Performing the momentum averaging procedure (16) and taking the trace in Eq. (36), we obtain the electromagnetic current conservation law. We multiply Eq. (36) by the matrix t^a and repeat the same procedure to derive the equation for the color current J^a_μ = (1/2)⟨b^a_μ⟩. The energy density ε_q of quark matter corresponding to Eq. (1) is defined via the energy-momentum tensor T^{μν}_q. Using the field equations (7), this variable can be written in terms of the WF, where ε_vac = 2N⟨ω(p⊥)⟩ is the divergent vacuum contribution. We obtain the result using the spinor and color decompositions; a linear combination of Eqs. (35) and (37) is used to calculate the right-hand side of this equation. Space-Homogeneous Color Field Flux-tube field configuration We consider here the "instant" frame of reference, where n^μ = (1, 0, 0, 0), and the field configuration typical for the flux tube model, where the dot denotes the derivative with respect to time τ = t and the Hamilton gauge is selected. We have E = (0, 0, E), H = 0 in the 3-vector representation. Below we limit ourselves to the simple case of the SU(2) group (a = 1, 2, 3), where the totally anti-symmetric structure constants f_{abc} coincide with the totally anti-symmetric unit 3-tensor e_{ijk}, f_{123} = +1. As a result of the color decomposition of the WF, the system (35)-(42) is reduced accordingly. The important difference of these equations from their QED analogue is the structure of the force terms: the time and momentum derivatives act on different parts of the WF (the singlet and the multiplet, respectively). This feature does not allow one to reduce the problem to the solution of ordinary differential equations, even for the simple field configuration (57). The situation becomes even more complicated in the case of SU(3) (a = 1, 2, ..., 8), where d_{abc} are the totally symmetric structure constants. For illustration, we write out only the first two of the equations (60). Constant Chromo-Magnetic Field Now we consider the gluon field configuration corresponding to a space-time homogeneous field. In this case, the system (35)-(42) is reduced to two independent groups of equations. The solution for the corresponding Abelian QED case is known [28], but the derivation of the corresponding non-Abelian analog is rather time-consuming work. We restrict ourselves here to a very simple case that has no analogy in QED.
Assuming that A^a = A for all a ("colour democracy"), we have H = 0, and the system (64), (65) is reduced accordingly; all other components are zero. This system allows solutions with zero colour currents but with non-zero colour charges, for example b_0 ∼ a, a = a_1 1 + a_2 A, b = a p/m, where a_1, a_2 are arbitrary constants. Chiral limit An additional simplification is possible in the chiral limit m → 0. In this case, the equations for the vector components of the Wigner function separate from the others. We also assume the flux-tube symmetry along the electric field direction, n = E/E; then certain components vanish, because b is a polar vector and d is an axial vector. For initial conditions of the type (48), it follows from Eq. (60) that d_0 = 0 and b_0 = 0 (a neutral system). Then we obtain the system (69). The last two algebraic equations play the role of constraints and have a particular solution ("color democracy"). These conditions correspond to a special case of the Abelian dominance approximation [6,7] for the gluon field. If we assume that the conditions (70) hold for all other WF components as well, the solution takes the form (48): W^a(t, p) = η W^s(t, p), a = 1, 2, 3, where η is some parameter. We find two admissible values of η after substituting this into the system (69). Taking into account the representation (68), the system (69) is reduced to three scalar equations. This equation set can be solved numerically by the method of characteristics. The remaining part of Eq. (60) is reduced to ∂_t a + ηgE ∂_p a = p c_1, ∂_t c_1 + ηgE ∂_p c_1 = −p a, ∂_t c_2 + ηgE ∂_p c_2 = −p e, ∂_t e + ηgE ∂_p e = 4p c_2. Combining these equations in pairs, we find two integrals of motion: D_t(a^2 + 4c_1^2) = 0 → a^2 + 4c_1^2 = const, and D_t(e^2 + 4c_2^2) = 0 → e^2 + 4c_2^2 = const, where D_t = ∂_t + ηgE ∂_p. For field-free initial conditions, it follows that e = 0 and c_1 = 0. The self-consistent evolution of the mean gluon field obeys the Yang-Mills equation. Equations (73), (74), and (76) form a closed system for the numerical investigation of initial value problems, such as pair creation in a strong field with account of the back-reaction of the produced particles on the evolution of the mean gluon field. Simple solutions of the type (71) do not satisfy Eq. (62) in the SU(3) case. Nevertheless, a reduction to ordinary differential equations is possible using a more complicated representation of the type (48). We obtain a non-linear system of equations for the admissible values of η_a by substituting these relations into Eqs. (62), where d_{abc} has only three independent non-zero values [34].
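To illustrate the method of characteristics invoked above for the chiral-limit flux-tube system, a minimal numerical sketch follows. It integrates the transport equations exactly as printed above along the characteristic curves dp/dt = ηgE(t), on which the partial differential equations reduce to ordinary ones. The pulse profile E(t), the value of η, the momentum grid, and the initial data are all illustrative assumptions, and the back-reaction equation (76) is deliberately omitted to keep the example short; the quoted integrals of motion can be monitored as an accuracy check.

# Minimal sketch: method-of-characteristics integration of the
# chiral-limit flux-tube equations as printed above.  E(t), eta, g,
# the momentum grid, and the initial data are illustrative
# assumptions; the back-reaction equation (76) is omitted.
import numpy as np

g = 1.0                      # assumed coupling
eta = 1.0 / np.sqrt(3.0)     # assumed admissible value of eta

def E(t):
    # assumed chromo-electric pulse E(t) = E0 / cosh^2(t / tau)
    E0, tau = 1.0, 2.0
    return E0 / np.cosh(t / tau) ** 2

def rhs(t, y):
    # y = [p, a, c1, c2, e]; dp/dt = eta*g*E(t) defines the
    # characteristic curve, and the remaining ODEs hold along it.
    p, a, c1, c2, e = y
    return np.array([eta * g * E(t),
                     p * c1, -p * a, -p * e, 4.0 * p * c2])

def rk4_step(t, y, h):
    # one classical Runge-Kutta step along the characteristic
    k1 = rhs(t, y)
    k2 = rhs(t + h / 2.0, y + h / 2.0 * k1)
    k3 = rhs(t + h / 2.0, y + h / 2.0 * k2)
    k4 = rhs(t + h, y + h * k3)
    return y + h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# one characteristic per initial momentum on an assumed grid
p_grid = np.linspace(-5.0, 5.0, 201)
h, nsteps = 0.01, 1000
solution = []
for p0 in p_grid:
    y = np.array([p0, 1.0, 0.0, 0.0, 0.0])  # assumed initial data
    t = 0.0
    for _ in range(nsteps):
        y = rk4_step(t, y, h)
        t += h
    solution.append(y[1:])
solution = np.array(solution)  # components (a, c1, c2, e) at t = nsteps*h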
Comparison with QED The equations (35)-(42) can be transformed to the QED case by formally setting the color matrices t^a equal to the unit matrix. Then the commutators of the gauge fields with the spinor components vanish, whereas the anti-commutators give a factor of 2. These equations correspond to the formulae (4.9)-(4.16) of work [30] (part 2). For the simple field (57), the system (79) can be reduced to three scalar ordinary differential equations [28], which allows a simple numerical investigation. Summary We have derived a system of KEs for the description of a quark-antiquark plasma created from the vacuum under the action of a strong quasi-classical gluon field. The single-time Wigner function formalism, in contrast to other approaches of this kind, allows a correct formulation of the Cauchy problem. This is particularly important for the non-perturbative description of vacuum particle creation. We have analyzed some special cases of the obtained system of KEs (the vacuum solution, a space-homogeneous time-dependent color electric field, etc.). It is shown that the KE remains a complex system of partial differential equations even in the simplest case (the chiral limit). That system is rather difficult for numerical investigation, while the KE in QED can be reduced to a set of ordinary differential equations [28,32] for some field configurations. Thus, the transition to QCD either places higher demands on the level of computation or requires some additional non-perturbative model assumptions.
Serine Phosphorylation and Negative Regulation of Stat3 by JNK* STATs are activated by various cytokines and growth factors via tyrosine phosphorylation, which leads to sequential dimer formation, nuclear translocation, binding to specific DNA sequences, and regulation of gene expression. Recently, serine phosphorylation of Stat3 on Ser-727 by ERK has been identified in response to epidermal growth factor (EGF). Here, we report that Ser-727 phosphorylation of Stat3 can also be induced by JNK, activated either by stress or by its upstream kinase, and that various stress treatments induce serine phosphorylation of Stat3 in the absence of tyrosine phosphorylation. Inhibitors of ERK and p38 did not inhibit UV-induced Stat3 serine phosphorylation, suggesting that neither of them is involved. We further demonstrate that JNK1, activated by its upstream kinase MKK7, negatively regulated the tyrosine phosphorylation and the DNA binding and transcriptional activities of Stat3 stimulated by EGF. Correspondingly, pretreatment of cells with UV reduced the EGF-stimulated tyrosine phosphorylation and phosphotyrosine-dependent activities of Stat3. The inhibitory effect was not observed for Stat1. Our results suggest that Stat3 is a target of JNK that may regulate Stat3 activity via both Ser-727 phosphorylation-dependent and -independent mechanisms. STATs are activated by various cytokines and several growth factors and function as important biological links between the cell surface and the transcriptional events in the nucleus. Seven STAT genes have been identified, which contain a conserved structure of an SH2 and a DNA-binding domain (reviewed in Ref. 1). Binding of cytokines to their respective receptors stimulates the Janus kinase family, which phosphorylates STAT proteins on a specific tyrosine residue (Tyr-701 in Stat1 and Tyr-705 in Stat3) at the COOH terminus. Homo- or heterodimers are formed through interaction between the phosphorylated tyrosine of one monomer and the SH2 domain of its partner. These dimers translocate into the nucleus and function as transcription factors by binding to their recognition sequences and regulating target gene expression (reviewed in Refs. 1-3). In addition to cytokines, growth factors such as EGF, platelet-derived growth factor, and colony-stimulating factor-1 also stimulate STAT tyrosine phosphorylation, presumably through their intrinsic receptor tyrosine kinase activity (4-7) or through non-receptor tyrosine kinases such as Src (8,9). Serine phosphorylation of STATs has also been demonstrated. A Pro-X-Ser-Pro sequence, which is a recognition site for ERKs, has been found at the COOH terminus of Stat1, Stat3, and Stat4, suggesting that ERK is involved in the phosphorylation of these STATs (10). ERKs are members of the mitogen-activated protein kinase (MAPK) family that are activated by growth factor stimulation and have been shown to play a role in cell proliferation and differentiation (11-13). It has been reported that ERK2 co-immunoprecipitated with Stat1α in response to interferon-β and was involved in the regulation of interferon-β-induced gene expression (14). Recently, it has also been reported that ERKs phosphorylate Stat3 on Ser-727 in vitro as well as in vivo in response to EGF (15). However, the existence of serine/threonine kinases other than ERKs phosphorylating STATs on serine has also been suggested. For example, Stat1 is a relatively poor substrate for ERKs (15).
In addition, although phosphorylation of Stat1 on Ser-727 is induced in response to interferon-γ, ERKs are not activated by interferon-γ and therefore are unlikely to be involved in such phosphorylation (16). Moreover, serine phosphorylation of Stat3 upon IL-6 stimulation has been shown to be ERK-independent (15), and the involvement of H-7-sensitive serine kinases has also been reported (17-19). Two other subtypes of mammalian MAPKs that are activated by environmental stress and pro-inflammatory cytokines have been identified. JNKs, also known as stress-activated protein kinases, are activated by IL-1, tumor necrosis factor (TNF), UV radiation, and anisomycin (20-22). JNKs bind to the amino terminus of c-Jun and phosphorylate it on Ser-63 and Ser-73 (23). The third group of MAPKs, p38, is activated by endotoxic lipopolysaccharide or hyperosmolarity (24). Although ERKs, JNKs, and the p38 kinase families are closely related due to their similar regulatory TXY motif for activity, they are distinguishable by unique (although sometimes overlapping) upstream activators and downstream substrates (25-27). We investigated whether Stat3 can be phosphorylated by stress and pro-inflammatory cytokines and examined which kinases are involved in such phosphorylation. We observed that Stat3 was phosphorylated on Ser-727 by TNF-α and various stress treatments. JNK1, activated either by UV or anisomycin or by its upstream kinase MEKK1, phosphorylated Stat3 in vitro. The major phosphorylation site was identified to be Ser-727. Stat3 can also be phosphorylated by cotransfection of JNK1 with MEKK1 in vivo. We further demonstrate that activation of JNK1, either by its upstream kinases or by UV treatment, resulted in negative regulation of Stat3 activity. EXPERIMENTAL PROCEDURES Construction of Plasmids-The glutathione S-transferase (GST)-Stat3 fusion protein, containing an almost full-length Stat3, was constructed as described previously (9). The point mutant GST-S1, in which Ser-727 of GST-Stat3 was replaced by Ala, was prepared using the polymerase chain reaction-based site-directed mutagenesis kit Ex-Site™ (Stratagene) following the manufacturer's instructions. The deletion mutant GST-C2 (containing amino acids 480-770) was generated by digesting GST-Stat3 with XmnI/XhoI, isolating the fragment, and inserting it into pGEX-KG. The expression plasmid of Stat3, pRc/CMV-Stat3 (6), was obtained from Dr. J. E. Darnell, Jr. (Rockefeller University). Substitution of the phosphorylation site Ser-727 by Ala was performed with the QuikChange™ site-directed mutagenesis kit (Stratagene). The correct construction was confirmed by sequencing. The activated MEKK1 mutant (28) was provided by Dr. R. Janknecht (Mayo Foundation). pSRα-HA-JNK1 (26) and the kinase-deficient JNK1 mutant pcDNA3.FLAG.JNK1(APF) (21) were obtained from Drs. A. Whitmarsh and R. J. Davis (University of Massachusetts). MKK7 plasmids (29) expressing active MKK7 (pcDNA3.MKK7D) and the kinase-deficient mutant (pcDNA3.MKK7A) were provided by Dr. J. Han (Scripps Research Institute). The reporter plasmid pSIE-CAT for CAT assays was prepared by inserting three copies of the hSIE consensus sequence (TTCCCGTAA) upstream of a c-fos minimal promoter followed by the CAT gene in the plasmid pFOSCATΔ56 (30).
Immunoprecipitation/Western Blotting and Immune Complex Protein Kinase Assay-COS-1 cells transfected with expression plasmids were lysed in radioimmune precipitation assay buffer (150 mM NaCl, 50 mM Tris-HCl (pH 7.2), 1% deoxycholic acid, 1% Triton X-100, 0.25 mM EDTA (pH 8.0), and protease and phosphatase inhibitors (5 μg/ml leupeptin, 5 μg/ml aprotinin, 1 μg/ml pepstatin A, 1 mM phenylmethylsulfonyl fluoride, 5 mM NaF, and 100 μM sodium orthovanadate)). The cell lysates were incubated with an anti-JNK1 antibody (Santa Cruz Biotechnology) overnight at 4°C, followed by incubation with protein G PLUS/protein A-agarose (Oncogene Science Inc.) for 1 h. The immunoprecipitates were washed twice with radioimmune precipitation assay buffer and twice with cold phosphate-buffered saline and divided into two portions. One portion was subjected to Western blot analysis with an anti-phospho-JNK1 antibody (Santa Cruz Biotechnology). The other portion was subjected to an in vitro kinase assay. Briefly, the immunoprecipitates were washed once with JNK kinase assay buffer containing 20 mM Hepes (pH 7.3), 20 mM MgCl2, 20 mM β-glycerophosphate, 0.2 mM sodium orthovanadate, and 2 mM dithiothreitol. The GST-Stat3 fusion proteins used as substrates were produced and partially purified as described previously (9). Glutathione-Sepharose-bound GST-Stat3 was eluted by vortexing for 15 min at room temperature in an equal volume of 20 mM reduced glutathione, resuspended in 50 mM Tris-HCl (pH 8.0), concentrated using a Centriprep 10 (Amicon, Inc.), washed once in 20 mM Hepes (pH 7.3), and further concentrated using an ULTRA-FREE-MC filter unit (Millipore Corp.). Equal amounts of fusion proteins were used in the kinase assays. The fusion proteins were incubated with immunoprecipitated JNK1 in the kinase assay buffer in the presence of 5 or 10 μCi of [γ-32P]ATP at 30°C for 15-30 min. The reaction mixture was boiled in Laemmli buffer, separated by 10% SDS-PAGE, transferred to nitrocellulose membrane, and exposed to x-ray film. The blots were also subjected to Amido Black staining to show the equal amount of GST fusion proteins used in each reaction. As for Western analysis of total cell lysates, equal amounts of lysates were separated by SDS-PAGE, transferred to a polyvinylidene difluoride membrane, and blotted with the respective antibodies, including anti-phospho-Ser-727 Stat3, anti-phospho-Tyr-705 Stat3 (both from New England Biolabs Inc.), and anti-Stat3 (Transduction Laboratories) antibodies. [32P]Orthophosphate Labeling-COS-1 cells were transfected and labeled with [32P]orthophosphate at a final concentration of 1 mCi/ml for 4 h before harvesting. The cells were lysed, and the lysate was immunoprecipitated with an anti-Stat3 antibody (Santa Cruz Biotechnology). The immunoprecipitates were washed, and the proteins were separated by SDS-PAGE, transferred to nitrocellulose membrane, and exposed to x-ray film as described previously (31). DNA Transfection, CAT Assay, and Mobility Shift DNA Binding Assay-COS-1 cells were grown in Dulbecco's modified Eagle's medium with 10% fetal calf serum (Life Technologies, Inc.). Transfection of plasmids into COS-1 cells was performed by the calcium phosphate-DNA coprecipitation method (32). For CAT assays, the cells were cotransfected with 4 μg of CAT-containing reporter plasmids, 3-5 μg of expression plasmids, and 1 μg of pCMV-β-gal containing the bacterial β-galactosidase gene. The cells were lysed in 0.25 M Tris-HCl (pH 8.0) with three freeze-thaw cycles after 45 h of transfection.
The lysate was spun, and the supernatant was collected and used for β-galactosidase activity and CAT assays. The pellets were resuspended in high salt buffer (20 mM Hepes (pH 7.9), 1 mM EDTA, 1 mM EGTA, 420 mM NaCl, 20% glycerol, 1 mM Na4P2O7, 1 mM Na3VO4, 20 mM NaF, 1 mM dithiothreitol, and 0.5 mM phenylmethylsulfonyl fluoride) and extracted as crude nuclear extract for the DNA mobility shift assay. The amount of cytoplasmic extract used in each CAT assay was normalized with equivalent β-galactosidase activity. Acetylated and non-acetylated forms of [14C]chloramphenicol were separated by thin-layer chromatography, followed by autoradiography and quantification using a Bio-Rad GS700 imaging densitometer. The mobility shift DNA binding assay was performed using hSIE as a probe under conditions described previously (9), except in the absence of salt in the binding buffer. RESULTS Stress Treatments Induce Serine Phosphorylation of Stat3 in Vivo-Stat3 is activated by tyrosine phosphorylation on Tyr-705 in response to growth factors and cytokines. In addition, serine phosphorylation of Stat3 has been observed, and the major phosphorylation site is Ser-727. We investigated whether environmental stress or inflammatory cytokines can induce phosphorylation of Stat3. COS-1 cells, which express low levels of endogenous Stat3 (10), were transfected with the Stat3 expression plasmid and treated with TNF-α and various stresses. Phosphorylation of Stat3 was examined with antibodies specifically recognizing either Ser-727- or Tyr-705-phosphorylated Stat3 protein in Western blot analysis. Fig. 1A (upper panel) shows that Stat3 Ser-727 phosphorylation was induced by UV, anisomycin, TNF-α, and sodium arsenite and, to a weaker extent, by NaCl, okadaic acid, and lipopolysaccharide. In contrast, Tyr-705 phosphorylation of Stat3 was undetected in cells with these treatments (middle panel). EGF, as a positive control, induced both strong tyrosine and serine phosphorylation of Stat3. An equal expression of Stat3 is shown in the lower panel. We also tested the induction of serine phosphorylation of endogenous Stat3 by stress in NIH 3T3 cells. TNF-α and various stress treatments, together with a positive control (platelet-derived growth factor), stimulated the serine phosphorylation of Stat3 (Fig. 1B). These results indicate that both endogenous and exogenous Stat3 can be phosphorylated on Ser-727 by stress treatments. JNK1 Phosphorylates Stat3 on Ser-727 in Vitro-Since JNK/stress-activated protein kinase is activated by stress and JNK1 is a major JNK, we next examined whether JNK1 was able to phosphorylate Stat3 in vitro. COS-1 cells were treated with UV, anisomycin, NaCl, or EGF, and the lysates were immunoprecipitated with an anti-JNK1 antibody, followed by an immune complex protein kinase assay using the GST-Stat3 fusion protein as a substrate. As shown in Fig. 2A, UV and anisomycin induced Stat3 phosphorylation by JNK1, whereas NaCl and EGF, which mainly activate p38 kinase and ERKs, respectively, did not stimulate Stat3 phosphorylation significantly. MEKK1 phosphorylates the JNK upstream kinase SAPK/ERK kinase-1, which in turn phosphorylates and activates JNK1 (28). To further confirm Stat3 phosphorylation by JNKs, we transfected COS-1 cells with JNK1 in the absence or presence of constitutively activated MEKK1 and performed the immune complex kinase assay. We observed that GST-Stat3 was strongly phosphorylated by MEKK1-activated JNK1, but not by JNK1 transfected alone or by endogenous JNK1 (Fig. 2B, upper panel).
The activation of JNK1 by MEKK1 was confirmed by the strong phosphorylation of GST-c-Jun-(1-79), a physiological substrate of JNK1 (second panel, lane 3). This was also verified by the strong phosphorylation of JNK1 detected with an anti-phospho-JNK1 antibody that recognizes dual-phosphorylated JNK1 in Western blot analysis (third panel, lane 3). An equal expression of exogenous JNK1 in the presence or absence of MEKK1 is shown (lower panel). These data suggest that JNK1, activated either by stress or by its upstream kinase, phosphorylates Stat3 on Ser-727 in vitro. We further tested whether Ser-727 was the only site phosphorylated by JNK1. A point mutant (GST-S1), in which Ser-727 was replaced by alanine, and a deletion mutant (GST-C2), containing the COOH-terminal portion of Stat3 (amino acids 480-770), were generated, and phosphorylation by JNK1 was tested. As shown in Fig. 2C, MEKK1-activated JNK1 strongly phosphorylated GST-Stat3 and GST-C2, but failed to phosphorylate GST-S1. The amounts of the fusion proteins GST-Stat3 and GST-S1 used in the kinase assays were comparable (indicated by asterisks in the lower panel). These data suggest that Ser-727 is the only site phosphorylated by JNK1 in vitro. JNK1 Phosphorylates Stat3 in Vivo-Next, we examined whether JNK1 phosphorylated Stat3 in vivo. COS-1 cells were either transfected with the Stat3 expression plasmid alone or cotransfected with JNK1 and MEKK1 and labeled with [32P]orthophosphate. The lysates were immunoprecipitated with an anti-Stat3 antibody. A basal level of Stat3 phosphorylation was observed in cells transfected with Stat3 alone (Fig. 3, lane 1), which was strongly enhanced in cells cotransfected with JNK1 and MEKK1 (lane 2). As a positive control, EGF also stimulated Stat3 phosphorylation (lane 3). These data indicate that Stat3 can be phosphorylated by JNK1 in vivo. Effects of ERK and p38 Inhibitors on UV- or EGF-induced Stat3 Serine Phosphorylation-To ascertain the noninvolvement of other MAPK family members in the stress-induced Ser-727 phosphorylation of Stat3, inhibitors of MEK1 (PD98059) (33) and p38 kinase (SB203580) (34) were used to pretreat cells, followed by UV or EGF treatment. The Ser-727 phosphorylation was then analyzed. UV-induced phosphorylation of Stat3 was not affected by either inhibitor (Fig. 4, middle panel). In contrast, EGF-induced phosphorylation was inhibited by PD98059, but not by SB203580 (right panel). The basal level of phosphorylation in uninduced cells was slightly decreased by both inhibitors (left panel). These results suggest that, whereas ERKs phosphorylate Stat3 upon EGF stimulation, ERKs and p38 are unlikely to be involved in Stat3 serine phosphorylation induced by UV. JNK1 Activated by MKK7 Negatively Regulates Tyrosine Phosphorylation and DNA Binding and Transcriptional Activities of Stat3-The tyrosine phosphorylation of Stat3 by growth factors and cytokines is a prerequisite for its dimerization, DNA binding, and transactivation, whereas Ser-727 phosphorylation alone does not stimulate Stat3 DNA binding and transcriptional activities (6,35,36). We studied the role of JNK1 in Stat3 function by testing its effect on the DNA binding and transcriptional activities of Stat3 stimulated by EGF. MKK7 (JNK kinase-2) has recently been identified as a specific upstream activator of JNK1 that does not affect ERKs or p38 (29, 37-40). COS-1 cells were transfected with Stat3 alone or were cotransfected with JNK1 and/or MKK7 expression plasmids and treated with EGF.
The DNA binding activity was analyzed using hSIE, a high affinity binding site for Stat3, as a probe. As reported previously, EGF induced the activation of Stat3 and Stat1 to form three complexes with the SIE: SIF-A (Stat3 homodimer), SIF-B (Stat1/Stat3 heterodimer), and SIF-C (Stat1 homodimer) (6). The DNA binding activity of Stat3 was not observed in the untreated cells transfected with Stat3 (Fig. 5A, lane 3), but was induced after EGF treatment (lane 4, SIF-A and SIF-B). SIF-A was not affected by constitutively activated MKK7, JNK1, or the kinase-deficient mutant JNK1− alone (lanes 5-7), but was almost completely abolished by cotransfection of MKK7 and JNK1 together (lane 8). The DNA binding activity was largely restored by cotransfection of either mutant MKK7 and wild-type JNK1 (lane 10) or constitutively activated MKK7 and kinase-deficient JNK1 (lane 9). It has been previously reported that the formation of SIF-C by endogenous Stat1 can be detected after EGF stimulation in COS cells (41). As shown in Fig. 5A, SIF-C was also detected upon EGF treatment (lane 4), but was unaffected by cotransfection with wild-type or mutant MKK7 and/or JNK1 (lanes 5-10). We next examined whether activated JNK1 also affected the Stat3 transcriptional activity stimulated by EGF. A reporter plasmid containing three copies of hSIE followed by a CAT gene was cotransfected with Stat3 in the presence or absence of JNK1 and/or MKK7 expression plasmids, and the CAT activities were analyzed. As illustrated in Fig. 5B, CAT activity increased 10.4-fold after EGF stimulation; this increase was slightly reduced in the presence of MKK7 (7.5-fold), but was unaffected by either wild-type or mutant JNK1 (10.3- and 11.8-fold, respectively). However, when Stat3 was cotransfected with MKK7 and JNK1 together, CAT activity was completely inhibited (1.4-fold), whereas such inhibition was not observed with cotransfection of MKK7 and mutant JNK1 or of mutant MKK7 and JNK1. These results are consistent with the DNA binding data, indicating that activated JNK1 suppresses both the DNA binding and transcriptional activities of Stat3. Src has been shown to specifically stimulate the tyrosine phosphorylation and DNA binding activity of Stat3, but not of Stat1 (8,9). To confirm the repression of Stat3 activity by JNK1 described above, we performed similar transfection experiments in which EGF treatment was replaced by cotransfection with Src, and the DNA binding activity and transactivation of Stat3 stimulated by Src were examined. Similar repression by activated JNK1 was observed (data not shown). Western blot analysis verified the activation and expression of JNK1 and Stat3 in these transfection experiments. As shown in Fig. 5C, we observed strong JNK1 phosphorylation only in the presence of MKK7, but not in its absence or in the presence of mutant MKK7−. Notably, JNK1− was not phosphorylated by cotransfection with MKK7 (upper panel), although an equivalent expression of JNK1 and JNK1− was observed (second panel). The different apparent molecular masses of JNK1 and JNK1− were due to the different constructions, as JNK1 contains three copies of the HA epitope, whereas JNK1− contains one copy of the FLAG sequence (21,26). The expression of the transfected Stat3 protein was comparable in all the samples (lower panel). However, a significant decrease in the tyrosine phosphorylation of Stat3 was observed in cells cotransfected with MKK7 and JNK1 and stimulated with EGF (third panel), which correlated well with the reduced DNA binding and transcriptional activities.
This further indicates that the repression of Stat3 activity by JNK1 is due to a decrease in its tyrosine phosphorylation. To confirm these results, we performed similar DNA binding and CAT assays with MEKK1 instead of MKK7. The results were essentially the same (data not shown), indicating that JNK1 activated by either upstream kinase negatively regulates the Stat3 DNA binding and transcriptional activities stimulated by either EGF or Src. UV Pretreatment Decreases Tyrosine Phosphorylation and DNA Binding and Transcriptional Activities of Stat3-To investigate a possible physiological role of JNK1 phosphorylation in Stat3 function, we examined whether stress affects Stat3 activity stimulated by EGF. COS-1 cells were transfected with wild-type Stat3 and pretreated with UV for various times, followed by EGF treatment, and the DNA binding activity was measured. As shown in Fig. 6A, EGF induced the activation of transfected Stat3 and endogenous Stat1 to form SIF-A, SIF-B, and SIF-C (lane 3). Pretreatment of the cells with UV significantly decreased SIF-A formation. SIF-B was also reduced, whereas SIF-C was not significantly affected (lanes 4-6). This indicates that UV specifically decreases the DNA binding activity of Stat3, but not of Stat1. The effect of UV pretreatment on the EGF-induced transcriptional activity of Stat3 was also tested in CAT assays and shown to be inhibitory (Fig. 6B). Finally, we analyzed the effect of UV treatment on Stat3 tyrosine phosphorylation. In agreement with the inhibition of the DNA binding and transcriptional activities, a decrease in the tyrosine phosphorylation of Stat3 by UV pretreatment was detected (Fig. 6C, upper panel, lanes 3-5), whereas the Ser-727 phosphorylation induced by EGF was not affected by UV pretreatment (middle panel), probably owing to the strong Ser-727 phosphorylation induced by EGF stimulation. An equal expression of Stat3 was indicated (lower panel). These results suggest that UV pretreatment negatively affects Stat3 tyrosine phosphorylation and the phosphotyrosine-dependent activities. This inhibitory effect could be due to Ser-727 phosphorylation by UV-activated JNK1 occurring prior to the tyrosine phosphorylation stimulated by EGF. Alternatively, the repression could also be independent of Ser-727 phosphorylation. The possible factors include a general toxic effect of UV irradiation or phosphorylation on other serine site(s) that affects Stat3 tyrosine phosphorylation and activities (see details under "Discussion").

[Figure legend fragment displaced here in extraction: ...(lanes 1 and 2) or treated with EGF (100 ng/ml) for 15 min (lane 3). The lysates were immunoprecipitated with an anti-Stat3 antibody. The immunoprecipitates were resolved by SDS-PAGE and transferred to a membrane, followed by autoradiography.]

DISCUSSION In addition to tyrosine phosphorylation, Stat1 and Stat3 are also phosphorylated on serine in response to cytokines and growth factors. ERKs, the prototype of the MAPKs, were the first identified Ser/Thr kinases shown to phosphorylate Stat3 on serine upon EGF stimulation (10, 15, 17). In this study, we investigated whether environmental stress can induce phosphorylation of Stat3 and elucidated which kinase(s) is likely to be involved. We demonstrated that various stress treatments stimulate serine phosphorylation of both endogenous and exogenous Stat3 (Fig. 1) and that JNK, a subtype of the MAPKs, mediates the stress-dependent serine phosphorylation of Stat3 (Figs. 2 and 3).

[Fig. 5 legend (beginning lost in extraction): ...(lanes 4-10). Crude nuclear extracts were prepared, and 10 μg was used for the mobility shift DNA binding assay with hSIE as a probe as described under "Experimental Procedures." FP (lane 1) indicates free probe. SIF-A, SIF-B, and SIF-C are indicated by arrows. B, COS-1 cells were transfected with empty vector (−) or with Stat3 (St3) alone or with other expression plasmids as indicated, together with the reporter plasmid pSIE-CAT and pCMV-β-gal. The cells were either left untreated or treated with EGF for 6 h before harvesting. The amount of cell lysate used for the CAT assays was normalized by β-galactosidase assay as described under "Experimental Procedures." Acetylated and non-acetylated forms of [14C]chloramphenicol were separated by thin-layer chromatography, followed by autoradiography. A representative autoradiograph is shown in the upper panel. The CAT activities from three independent transfection experiments were quantified using a Bio-Rad GS700 imaging densitometer, and the average fold induction is indicated at the top of each bar (lower panel). C, total cell lysates from the transfected cells described above were prepared and subjected to Western blot analysis to analyze the activation and expression of HA-JNK1, mutant FLAG-JNK1 (indicated as JNK1−), and Stat3 with the respective antibodies as indicated.]

The site of phosphorylation of Stat3 by JNK1 was identified to be Ser-727 in vitro. Our data using the inhibitors of the ERK and p38 pathways (Fig. 4) further support the specificity of Stat3 serine phosphorylation by JNK. These results demonstrate that JNK is the kinase that phosphorylates Stat3 in response to stress. Since phosphorylation of Stat3 on Ser-727 was also observed upon treatment of cells with sodium chloride, okadaic acid, and lipopolysaccharide, which stimulate p38 activity, we examined whether p38 kinase, the other member of the MAPK family, could phosphorylate Stat3. We were not able to detect Ser-727 phosphorylation of the GST-Stat3 fusion protein by p38 activated either by stress or by cotransfection with its upstream kinase MKK3 in vitro (data not shown). However, the possibility of Stat3 phosphorylation by p38 in vivo cannot be excluded, and whether JNK is the only kinase family involved in Stat3 serine phosphorylation by various stress treatments remains to be determined. In addition to the MAPKs, we recently identified protein kinase Cδ as specifically associating with and phosphorylating Stat3 in an IL-6-dependent manner (42). Together, these data indicate that Stat3 is a target for multiple Ser/Thr kinases that are activated by distinct extracellular stimuli and suggest that Stat3 may be functionally involved in diverse cellular processes. It is generally accepted that the tyrosine phosphorylation of STATs is a prerequisite for their DNA binding and transactivation, although growth factors and cytokines induce phosphorylation of STATs on both tyrosine and serine (1-3). The question arising here is: how does serine phosphorylation affect STAT activity? An initial report indicated that serine phosphorylation is required for the DNA binding of Stat3 in certain cell types (17). However, it was demonstrated later that phosphorylation on Ser-727 is not necessary for its DNA binding, but is required for the full transcriptional activity of Stat1 and Stat3 (10,36). On the other hand, a negative effect of Ser-727 phosphorylation on the tyrosine phosphorylation of Stat3 has also been suggested (15).
We examined how JNK affects the DNA binding and transcriptional activities of Stat3 stimulated either by EGF or by Src and observed that JNK1, activated either by its upstream kinase MKK7 or by UV irradiation, inhibited the tyrosine phosphorylation and the DNA binding and transcriptional activities of Stat3 in both cases (Figs. 5 and 6 and data not shown). Such repression is specific for Stat3, since the Stat1 DNA binding activity was not inhibited (Fig. 5A). These results are in agreement with previous reports showing that ERK2, when activated by MEK1, represses the tyrosine phosphorylation and tyrosine phosphorylation-dependent activities of Stat3 stimulated by EGF or IL-6 (43,44). Furthermore, an inhibitory effect of protein kinase Cδ on the activity of Stat3 was also reported (42). These results suggest that activated MAPKs, as well as other Ser/Thr kinases, may negatively regulate STAT activity. These observations seem to contradict the positive regulatory role of Ser-727 phosphorylation in STAT transcriptional activity (10). We attempted to address this question by further investigating the mechanisms of the repression. From our preliminary results, in agreement with the positive role of Ser-727 phosphorylation, we also observed a reduced transcriptional activity of the Stat3 mutant S727A stimulated by EGF. However, the DNA binding and transcriptional activities of S727A were further inhibited by activated ERK or JNK (data not shown), suggesting that the repression is unlikely to be mediated by Ser-727 phosphorylation.

FIG. 6. UV pretreatment decreases EGF-induced Stat3 tyrosine phosphorylation and DNA binding and transcriptional activities. A, COS-1 cells were transfected with Stat3 and left untreated (U) or treated with UV (120 J/m2) for 15, 30, or 60 min, followed by EGF treatment (E; 100 ng/ml) for 15 min. Crude nuclear extracts were prepared, and 10 μg was used for the mobility shift DNA binding assay with hSIE as a probe as described under "Experimental Procedures." FP (lane 1) indicates free probe. B, COS-1 cells were transfected and treated in the same manner as described for A, except that EGF treatment was for 6 h. CAT activity was analyzed, and a representative autoradiograph is shown in the upper panel. The average fold induction indicated on top of each bar was from two independent transfections (lower panel). C, shown are the results of the inhibition of EGF-induced tyrosine phosphorylation of Stat3 by UV pretreatment. Total cell lysates from the transfected cells described above were prepared and subjected to Western blot analysis with anti-phospho-Tyr-705 Stat3, anti-phospho-Ser-727 Stat3, or anti-Stat3 antibody as indicated. vec, empty vector.

This result is consistent with the report showing that repression of IL-6-stimulated Stat3 activity by ERK is independent of Ser-727 phosphorylation (44). From these data, we propose that ERK and JNK may have dual effects on Stat3 transcriptional activity, i.e., up-regulation by Ser-727 phosphorylation and down-regulation in a Ser-727-independent manner, acting in concert to produce the resultant regulation of Stat3 transactivation. These results also suggest a critical and complex role of the MAPK pathway in the regulation of STATs. Although the mechanisms of the negative regulation are still unknown, a few possibilities may be considered.
First, activation of the MAPK pathways may negatively regulate the activity of the upstream tyrosine kinases, such as the EGF receptor, Src, and the Janus kinases, which are involved in Stat3 tyrosine phosphorylation. Second, although we detected phosphorylation of only Ser-727 by JNK1 and ERK2 in vitro (Fig. 2C) (43), serine/threonine site(s) other than Ser-727 may be phosphorylated by these kinases in vivo (15). It is possible that phosphorylation on serine in the DNA-binding domain of Stat3 may inhibit its DNA binding and transcriptional activities. Third, activated ERKs and JNKs may affect the activity of the specific inhibitors of the Janus kinase/STAT pathways, namely the recently identified suppressor of cytokine signaling-1 or the protein inhibitor of activated Stat3 (reviewed in Ref. 45). Finally, these kinases may modulate Stat3 activity by association, since we observed a strong binding of Stat3 with activated ERK2 as well as with protein kinase Cδ (42,43). In addition to stress, emerging evidence has shown that STATs can be phosphorylated exclusively on serine in the absence of tyrosine phosphorylation. Examples include insulin-induced serine phosphorylation of Stat3 in 3T3L1 adipocytes (46) and steel factor-induced phosphorylation of Stat3 in human growth factor-dependent myeloid cell lines (47). Activation of protein kinase C by phorbol esters is also reported to induce phosphorylation of Stat3 on Ser-727 in T lymphocytes (19). These data indicate that serine phosphorylation alone may play a role in cellular regulation. Although the physiological role of serine phosphorylation in STAT function is still obscure, reports suggest that Stat3 may be involved in the regulation of differentiation in macrophages and in the pathogenesis of chronic lymphocytic leukemia (48,49). A recent report indicated that Stat1 regulates apoptosis induced by TNF-α via a novel signaling pathway in which phosphorylation on serine, but not tyrosine, may be involved (50). A challenge for further studies is to determine the physiological role of serine phosphorylation in STAT function. JNK binds and phosphorylates several transcriptional activators. The best-studied example is the c-Jun transactivator. The phosphorylation of c-Jun at Ser-63 and Ser-73 in the activation domain increases c-Jun transactivation (23). JNK also increases the transcriptional activity of the transactivators ATF-2 and Elk-1, a member of a subfamily of ETS domain transcription factors (51,52). In this study, we have identified Stat3, another transcription factor, as a substrate of JNK and demonstrated that JNK may regulate Stat3 via a novel mode involving both Ser-727 phosphorylation-dependent and -independent mechanisms.
An Automated Data-Driven Irrigation Scheduling Approach Using Model-Simulated Soil Moisture and Evapotranspiration Given the increasing prevalence of droughts, unpredictable rainfall patterns, and limited access to dependable water sources in the United States and worldwide, it has become crucial to implement effective irrigation scheduling strategies. In the most common approaches to irrigation scheduling, irrigation is triggered when some variable, such as soil moisture or accumulated water deficit, exceeds a given threshold. A High-Resolution Land Data Assimilation System (HRLDAS) was used in this study to generate timely and accurate soil moisture and evapotranspiration (ET) data for irrigation management. By integrating HRLDAS products and a crop growth model (AquaCrop), an automated data-driven irrigation scheduling approach was developed and evaluated. For HRLDAS ET and soil moisture, the ET-water balance (ET-WB)-based method and the soil-moisture-based method were applied accordingly. The ET-WB-based method showed a 10.6~33.5% water saving in dry and wet seasons, whereas the soil-moisture-based method saved 7.2~37.4% of irrigation water under different weather conditions. Both methods demonstrated good water-saving results (in the range of 10~40%) without harming crop yield. The optimized thresholds in the two approaches were partially consistent with the default values from the Food and Agriculture Organization and showed a similar trend over the growing season. Furthermore, forecasted rainfall was integrated into this model to assess its water-saving effect. The results showed that an additional 10% of irrigation water can be saved (for a total of 20~50%) without harming the crop yield. This study automated the data-driven approach to irrigation scheduling by taking advantage of HRLDAS products, which can be generated in a near-real-time manner. The results indicate the great potential of this automated approach for saving water and supporting irrigation decision making. Introduction Crop cultivation is the primary source of food, fiber, and biofuel supplies in the U.S. and the world. Crop cultivation consumes a significant amount of freshwater and energy, mainly through irrigation. According to a World Bank report [1], irrigated agriculture accounts for 17% of all arable land, which is about 277 million ha. However, this relatively small fraction of cropped land produces approximately 40% of the world's gross agricultural output [2]. Although irrigation plays a crucial role in significantly boosting crop yields, leading to increased farmer income and improved food security, it accounts for up to 80% of water consumption in the United States and over 90% in various Western states [3]. Fueled by increasing competition from the urban and industrial sectors for scarce water resources, high agricultural water consumption, and water wastage, freshwater has become a scarce resource in many parts of the U.S. and around the world.
and around the world. Moreover, with steady population growth worldwide and limited land area, it will become more difficult in the future to meet food production requirements with limited water resources despite all the efforts towards sustainable agriculture [4]. On the other hand, excessive irrigation frequently occurs at the field scale, leading to the wastage of valuable water and energy resources, agricultural run-off, and pollution of the surface and groundwater. Additionally, this practice results in the depletion of water sources and soil nutrients, and it can also cause soil salinization, thus harming agricultural sustainability. Given these challenges, the need for effective irrigation management becomes even more critical to enhance water use efficiency (WUE) and overall productivity, and to save water for future use [5].
Irrigation scheduling is the process of determining and optimizing the amount and timing of irrigation activities. The primary objective of irrigation scheduling is to achieve specific management goals, such as improving crop yield, reducing water wastage, and preventing environmental issues. Over the past few decades, several methods have been proposed to schedule and quantify the necessary depth of individual irrigation applications [6-8]. According to what the scheduling rests upon, four types can be distinguished: (1) evapotranspiration-water balance (ET-WB)-based, (2) soil-moisture-based, (3) plant-water-index-based, and (4) simulated-model-based. These scientific irrigation management approaches rely on timely and accurate data on crop conditions, soil properties, and weather patterns to make well-informed irrigation decisions. Such data-driven approaches have proven to be highly effective in determining the suitable timing and depth of irrigation for crop growth [6]. Among these methods, ET-WB-based methods are the optimal choice due to their economical, straightforward implementation and reasonably accurate characteristics. Other types of methods demand specific preparations before their implementation: the installation of sensors and monitoring systems in the field, the research and validation of thresholds to trigger irrigation, and/or model calibration via previous field experiments.
Despite all the data-driven methods that have been proposed and researched, farmers and water managers mostly adopt traditional methods in their irrigation operations. For example, only 30% of farms in Nebraska utilized scientific approaches or subscribed to scheduling services in a 2018 survey [9]. One of the reasons that hinders the wide application of data-driven scientific methods is that most of these methods involve a chain of data-processing steps. To make an irrigation decision using these methods, farmers must collect soil moisture or ET data from models, install field sensors or satellite sensors, determine a threshold to trigger irrigation, and keep track of the weather conditions to adjust the amount of irrigation during the crop growth season. Moreover, farmers without the required knowledge and specialized analysis skills may encounter difficulties in processing soil moisture and ET data in this manner. Our goal in this study was to automate the data-driven irrigation scheduling process based on model simulation data, thus providing stakeholders with irrigation decision-making information in a timely and easy manner.
In order to automate the data-driven approach and thus promote its utilization in applications, it is important to obtain dynamic data in near-real time with regional, state, or even national coverage using either modeling or remote sensing. Root-zone soil moisture (RZSM) (~top 1 m) and ET are key parameters for irrigation scheduling. Crop growth depends on RZSM, which is depleted mainly by ET and replenished mainly by precipitation and irrigation. Even though the significance of soil moisture in crop growth and irrigation management has been acknowledged [10], obtaining accurate soil moisture data remains challenging due to the lack of routine high spatial resolution (<1 km) observation of soil moisture at the continental scale. Model-simulated soil moisture and ET are important data sources for quick decision-making support in irrigation management as they can be generated in a near-real-time manner. A High-Resolution Land Data Assimilation System (HRLDAS) [11] from the National Center for Atmospheric Research (NCAR) has been developed to fill this gap by simulating the evolution of land surface states at field scales. HRLDAS was utilized in a NASA-funded agricultural pest management decision support system to generate real-time soil moisture and temperature data [12]. These forecast products were made available to farmers in the Central and Great Plains regions through the NCAR partner Meteorlogix, assisting them in making informed decisions regarding their agricultural activities.
In this study, HRLDAS soil moisture and ET during the dry season (2020) and wet season (2019) were utilized to schedule irrigation activities, demonstrating its application possibility in irrigation scheduling for saving water and improving crop yields. By integrating HRLDAS products and a crop growth model (AquaCrop), it is possible to find the best threshold to trigger irrigation activity for maximum crop output. Based on the typical four crop growth stages, four thresholds were determined to represent the dynamic nature of crop water demand during the growing season. For HRLDAS ET and soil moisture, the ET-WB-based method and soil-moisture-based method were applied accordingly to examine the water-saving effect. Furthermore, short-term rainfall forecasts were integrated to prevent unnecessary irrigation from the incoming rainfall.
Study Area and Materials
This study uses data from a variety of sources, as documented below. In addition to the hourly updated model simulation products, annual data on crops and soil are also necessary for irrigation scheduling. The automation of irrigation scheduling in 6 agricultural sites has been performed for the growing seasons of 2019-2020. Most of the data are currently visualized and made available to the public on the WaterSmart Data Information Portal (WaterSmart DIP) [13] (https://geobrain.csiss.gmu.edu/watersmartport/web/ (accessed on 13 July 2023)) covering Nebraska, as this study is mainly focused on Nebraska (Figure 1). Nebraska is selected as the study area because it is the largest irrigation state in the U.S.
and one of the leading states in terms of its agricultural output. According to the 2017 Census of Agriculture [14], Nebraska had the highest amount of irrigated land among all states in the U.S., encompassing 8.6 million acres of irrigated croplands, which accounted for 14.8% of all irrigated cropland in the country. Because of its semiarid climate condition, crop yields in Nebraska farms are quite sensitive to subtle differences in irrigation scheduling, which, therefore, makes Nebraska an ideal area to test our irrigation scheduling approaches.
HRLDAS is the merging of a data assimilation system and a land surface process model. The underlying land model within HRLDAS is the community Noah-Multiparameterization Land Surface Model (Noah-MP LSM) [15]. It uses multiple options for many key land-atmosphere interaction processes affecting hydrology and vegetation to achieve accurate surface energy and water transfer processes. Noah-MP considers the surface water infiltration, runoff, and groundwater transfer and storage. It predicts vegetation growth by combining a photosynthesis model and a carbon allocation model that distinguishes between C3 (e.g., soybean) and C4 (e.g., corn) plants.
The HRLDAS obtained the surface forcing from the National Water Model standard analysis configuration [16]. This configuration used meteorological forcing data sourced from the Multi-Radar/Multi-Sensor System (MRMS) Gauge-adjusted and Radar-only observed precipitation products, along with short-range Rapid Refresh (RAP) and High-Resolution Rapid Refresh (HRRR) data. Additionally, stream-gauge observations from the United States Geological Survey (USGS) were assimilated into the model. The initial values were derived from the North American Land Data Assimilation System (NLDAS) analysis. The HRLDAS has been running from 2019 to the present at 500 m spatial resolution for the Nebraska region, and the output is saved in hourly intervals. The HRLDAS was configured for NLDAS to have 4 soil moisture layers with thicknesses (from top) of 10 cm, 30 cm, 60 cm, and 100 cm, for a total soil column depth of 2 m. For assessing the data quality, the model soil moisture products were compared with site-based soil moisture measurements and gridded soil moisture maps from our previous study [17].
Meteorological Forcing Data
The weather data used to drive the HRLDAS were obtained from the National Water Model (NWM, https://water.noaa.gov/about/nwm (accessed on 13 July 2023)) [16] and Global Forecast System (GFS, https://www.ncei.noaa.gov/products/weather-climate-models/global-forecast (accessed on 13 July 2023)) [18]. Both models provide 8 forcing variables: precipitation rate, surface pressure, shortwave radiation, longwave radiation, u-wind, v-wind, temperature, and specific humidity. These variables were clipped to Nebraska and regridded to a spatial resolution of 500 m. NWM provides near-real-time data while GFS variables are forecasted 4 times a day for the following 120 h. Among the 8 variables, we focused most on the precipitation rate when making an irrigation decision. All these data are visualized and made available to the public on WaterSmart DIP.
Figure 2c,d display sample maps of the hourly temperature from NWM and the daily rainfall accumulated from the hourly NWM precipitation variable, respectively.
Crop Types and Soil Properties
Different crops have different crop evapotranspiration (ETc) values, which require different amounts of irrigation during the growth process. In this research, crop type is identified using the annually released Cropland Data Layer (CDL) [19,20], which contains crop and other specific land cover classifications obtained using remote sensing for the conterminous United States. A rapid in-season mapping of corn and soybean fields, which are the two major crops in Nebraska, is currently available using historical CDL data [21]. This greatly promotes our irrigation scheduling for an entire state. Figure 2e illustrates the annual CDL layer for the year 2022.
Soil properties are considered to be relatively static conditions for a region and are usually updated annually based on soil surveys. The physical and chemical soil properties considered here include soil texture, electrical conductivity, available water-holding capacity, and permanent wilting point. The UC Davis team has aggregated the current U.S. Department of Agriculture (USDA) National Cooperative Soil Survey (NCSS) soil survey data within 800 m grid cells to generate nationwide soil property maps, and the gridded data products are all available on the web application programming interface (API) of soil properties [22].
Automation of the Irrigation Scheduling: The Methodology
When initially introducing a scientific (versus experience-based) irrigation scheduling method to a certain crop field, the ET-WB method proves to be straightforward to implement and demonstrates its effectiveness when field weather data and the Kc curves for a specific crop recommended by the Food and Agriculture Organization (FAO) are accessible. The ET-WB method remains feasible even in cases where soil properties are not known, as long as the accumulated daily soil water deficit calculated by ET estimates is promptly replenished. For example, irrigations can be scheduled at regular intervals (e.g., every 3 days or twice a week) to satisfy the soil water deficit calculated by ET estimates [23]. In the ET-WB-based method, the daily soil water deficit is calculated using the basic soil water balance equation. On a daily basis, Dc, the soil water deficit in the root zone on the current day, can be estimated using the following simplified accounting equation:

Dc = Dp + ETc − P − Irr

where Dp is the soil water deficit on the previous day, ETc is the crop ET for the current day, P is the gross precipitation for the current day, and Irr is the net irrigation amount infiltrated into the soil for the current day. Irrigation events are scheduled when the accumulated Dc exceeds the set threshold, which is the Management Allowed Depletion (MAD) by default. While we can calculate Dc using the water balance equation, it should be noted that Dc represents the discrepancy between the field capacity and the current soil water content. Thus, Dc is an estimation of the true deficit in the field, which can be calculated more directly by subtracting the current soil water content from the field capacity of the root zone when a measurement of the current soil water content is available.
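To make the accounting concrete, the short Python sketch below implements the daily deficit update and MAD trigger described above. The function names, the refill margin, and the example numbers are illustrative assumptions of ours, not the study's actual implementation.

```python
# Minimal sketch of the ET-WB deficit accounting (all quantities in mm).

def update_deficit(d_prev, et_c, rain, irrigation):
    """Daily root-zone deficit: Dc = Dp + ETc - P - Irr. The deficit is
    floored at zero: once field capacity is reached, extra water is
    assumed lost to runoff/percolation."""
    return max(0.0, d_prev + et_c - rain - irrigation)

def schedule_day(d_prev, et_c, rain, mad_mm, refill_margin_mm=5.0):
    """Trigger irrigation when the accumulated deficit exceeds the MAD
    threshold; refill to slightly below field capacity to limit
    percolation, as the text describes."""
    d_c = update_deficit(d_prev, et_c, rain, irrigation=0.0)
    irr = 0.0
    if d_c > mad_mm:
        irr = d_c - refill_margin_mm  # leave a small buffer below field capacity
        d_c = update_deficit(d_prev, et_c, rain, irr)
    return d_c, irr

# Example: 6 mm/day crop ET, no rain, MAD expressed as 40 mm of depletion.
d = 0.0
for day in range(10):
    d, irr = schedule_day(d, et_c=6.0, rain=0.0, mad_mm=40.0)
    if irr > 0:
        print(f"day {day}: irrigate {irr:.1f} mm")
```

With these toy numbers the deficit crosses 40 mm after a week of dry weather, one irrigation event refills the profile, and the accounting then resumes from the small buffer left below field capacity.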
The ET-WB-based method is more popular than the soil-moisture-based method because determining Dc from the current soil water content is limited by its dependency on the accuracy and reliability of the soil moisture readings from soil moisture sensors, which must be installed in the field before application. This laborious requirement for farmers hampers the wide application of soil-moisture-based methods. However, if accurate measurements of field-scale soil moisture are available, the soil-moisture-based method is more straightforward. Soil-moisture-based approaches use an Available Water Content threshold (AWCth), whereas the triggering threshold in the ET-WB-based method is defined using MAD. AWCth = 1 − MAD holds for the same field in the same growing season, as both methods describe a single real value, whether it is determined by site-specific field experiments [24] or a default value drawn from FAO-56 [25].
HRLDAS provides ET0 based on the NWM parameters during the growing season. Thus, daily ETc and the water deficit can be estimated during the growing season. Irrigation is scheduled when the accumulated water depletion exceeds the thresholds we set. The amount of irrigation water is set to refill the soil water content to slightly below the field capacity to avoid percolation and increase the WUE. Furthermore, HRLDAS-derived soil moisture at a spatial resolution of 500 m can be directly used in soil-moisture-based irrigation decision making, as it effectively captures the seasonal evolution of soil moisture. Irrigation and yield information for eight corn farms during the 2019 and 2020 growing seasons, together with their crop growth stage dates, was provided by the University of Nebraska-Lincoln. These farm sites are located close together in Nebraska, with a similar size of 800 m × 800 m; thus, they are influenced by the same climate conditions at most times. The soil texture of these experimental sites is sandy loam, which indicates that they have similar field capacities and wilting points. As shown in Table 1, corn was planted around the end of April and the beginning of May, and we can observe that more water was irrigated in 2020, whereas the yields in 2019 and 2020 are similar. To simulate the dynamic nature of crop water demands, an analysis of the crop growth stages and identification of patterns in them are necessary. GDD, or Growing Degree Days, is a measure of heat accumulation used in agriculture to determine the crop growth stages [26,27]. It is based on the principle that plants grow and develop in response to temperature, with warmer temperatures generally accelerating their growth. By tracking the accumulation of GDD over time, farmers and researchers can determine when a crop reaches key growth stages, such as emergence, flowering, and maturity. This information can be used to plan irrigation, fertilizer application, and other management practices, and to predict yield and harvest timing. Four typical crop growth stages are identified in this paper based on GDD: initial stage, development stage, mid-season stage, and late-season stage.
Based on the growing state dates and temperature records during the growing season, the accumulated GDD in the different stages of corn in the test fields are calculated. These accumulated GDD values are further used to determine the start point and duration of the four stages in other fields without information on crop state dates [28].
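As a concrete illustration of the GDD-based staging, the sketch below accumulates daily GDD and maps the total to one of the four stages. The base temperature (10 °C, a common choice for corn) and the cumulative stage totals are assumed example values, not the calibrated figures derived from the test fields.

```python
# Illustrative GDD accumulation and stage lookup (values are assumptions).
BASE_T = 10.0  # °C, typical base temperature for corn
STAGE_GDD = {"initial": 200.0, "development": 550.0,
             "mid-season": 1100.0, "late-season": 1500.0}  # cumulative °C-days

def daily_gdd(t_max, t_min, base=BASE_T):
    """Standard GDD formula: mean daily temperature above the base."""
    return max(0.0, (t_max + t_min) / 2.0 - base)

def crop_stage(cum_gdd):
    """Map accumulated GDD since planting to one of the four stages."""
    for stage, threshold in STAGE_GDD.items():
        if cum_gdd <= threshold:
            return stage
    return "late-season"

cum = 0.0
for t_max, t_min in [(24, 12), (28, 15), (30, 18)]:  # toy daily records
    cum += daily_gdd(t_max, t_min)
print(cum, crop_stage(cum))
```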
The main focus of this study is to schedule irrigation based on model-simulated ET and soil moisture and to evaluate its water-saving effect without harming crop yield. The FAO developed a crop growth model, AquaCrop [29,30], which can estimate crop yield in response to available water. Compared to other crop growth models, only a few parameters are required for yield estimation in AquaCrop. It simulates plant water stress (soil moisture) based on the input weather and ET data. Integrating AquaCrop as a yield estimator, it is possible to determine four thresholds for the four crop stages instead of a single fixed threshold as in traditional methods. Although several studies have demonstrated that AquaCrop is a reliable tool capable of reasonably accurately predicting both total biomass and final yield under various irrigation strategies, ranging from no water stress to mild or severe water stress [7,31-33], it is important to note that the accuracy of the yield estimates and irrigation recommendations depends on the accuracy and representativeness of the model inputs, including weather data, soil characteristics, crop parameters, and management practices [34]. Therefore, it is important to carefully validate the inputs and outputs of the model before relying on them to make irrigation decisions. One approach to validation is to compare the simulated soil moisture (SM) data from AquaCrop with independent data from other models to assess the accuracy and reliability of the AquaCrop output. AquaCrop is first calibrated for the farms in Table 1 to acquire good-quality yield estimations and is then used to simulate the yields of these farms when different irrigation schedules are derived. To validate the accuracy and reliability of the AquaCrop model, we use the same inputs as those of HRLDAS and compare their soil moisture outputs, assuming there is no irrigation during the season.
Forecasted rainfall is also considered to determine whether and how much irrigation water should be applied. Irrigation is unnecessary if there is unexpected rainfall in the near future. This risk of wasting water due to the uncertainties in future weather conditions can be mitigated by integrating short-term rainfall forecasts into the model. The incorporation of rainfall forecasts into water management can provide farmers with valuable information on upcoming weather patterns, allowing them to adjust their irrigation schedules accordingly and avoid overwatering. This approach not only saves water but also reduces the risk of crop damage due to waterlogging, improves soil health, and reduces the energy consumption associated with pumping and distribution. In this study, the rainfall events over 5 consecutive days within the crop growth period are considered in calculating the total rainfall. Each day's rainfall contributes differently to the total, with a decay factor of 0.9. Based on this, the irrigation amount is rescheduled by subtracting the expected total rainfall in the weather forecast. If the total rainfall amount is larger than the originally scheduled irrigation amount, the irrigation event is postponed to a later date.
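The following sketch shows one way to read the forecast adjustment just described: the 5-day forecast is collapsed into one effective rainfall total with a 0.9-per-day decay, subtracted from the scheduled irrigation, and the event is postponed if the expected rain exceeds the scheduled amount. The exact weighting and postponement details are our interpretation of the text, not verified code from the study.

```python
# Sketch of forecast-aware irrigation adjustment (our reading of the text).

def effective_forecast_rain(forecast_mm, decay=0.9):
    """Weight each of the next 5 forecast days by decay**day, so nearer
    rainfall counts more towards the expected total."""
    return sum(r * decay**day for day, r in enumerate(forecast_mm[:5]))

def adjust_irrigation(scheduled_mm, forecast_mm):
    """Reduce the scheduled amount by the expected rainfall; postpone the
    event entirely if the expected rainfall covers it."""
    expected = effective_forecast_rain(forecast_mm)
    if expected >= scheduled_mm:
        return 0.0, True          # postpone to a later date
    return scheduled_mm - expected, False

amount, postponed = adjust_irrigation(30.0, [5.0, 10.0, 0.0, 2.0, 0.0])
print(amount, postponed)  # 30 - (5 + 9 + 0 + ~1.5 + 0) ≈ 14.5 mm, not postponed
```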
In this research, we determined the thresholds of irrigation scheduling for the four stages in response to fluctuations in crop water demands. To accomplish this, we followed three main steps. First, we randomly selected 100 sets of thresholds for each stage. Second, we used these sets of thresholds to obtain a starting point with the maximum yield. Finally, we optimized the thresholds using the downhill simplex algorithm for the minimum irrigation water for each stage. The downhill simplex algorithm [35] was chosen because it is a simple and efficient optimization technique that does not require knowledge of the gradient of the function being optimized. This method can account for complex relationships between soil moisture and crop yield, which may be difficult to capture when using traditional single fixed thresholds. The resulting thresholds were then used to inform irrigation decisions and optimize water use efficiency. By following these steps, we were able to identify optimal thresholds for each stage of irrigation scheduling, which can help improve crop yield and reduce water waste. Rainfall forecasts from the GFS were further integrated into the irrigation scheduling to improve the efficiency of water usage in irrigation. The overall flowchart is shown in Figure 3.
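A minimal Python sketch of this three-step threshold search is given below. `simulate_season` is a hypothetical toy stand-in for an AquaCrop run driven by HRLDAS data, and the penalty weighting in the objective is our own assumption; the study's actual coupling to AquaCrop is more involved.

```python
# Random sampling followed by downhill-simplex (Nelder-Mead) refinement.
import random
from scipy.optimize import minimize

def simulate_season(thresholds):
    """Toy stand-in for an AquaCrop run: returns (yield, total irrigation).
    Lower thresholds trigger irrigation earlier, so water use rises while
    yield saturates."""
    water = sum(100.0 * (1.0 - t) for t in thresholds)  # mm, toy relation
    crop_yield = 12.0 * min(1.0, water / 250.0)         # ton/ha, saturating
    return crop_yield, water

def objective(thresholds, target_yield):
    y, water = simulate_season(thresholds)
    # Minimise water while heavily penalising any yield loss.
    return water + 1000.0 * max(0.0, target_yield - y)

random.seed(0)
# Steps 1-2: sample 100 random threshold sets, keep the best-yielding one.
candidates = [[random.uniform(0.2, 0.8) for _ in range(4)] for _ in range(100)]
start = max(candidates, key=lambda th: simulate_season(th)[0])
target = simulate_season(start)[0]

# Step 3: downhill simplex refinement towards minimum water use.
res = minimize(objective, start, args=(target,), method="Nelder-Mead")
print(res.x)  # one MAD-style threshold per growth stage
```

The downhill simplex method suits this setting because each objective evaluation is a full crop-model run, so gradients are unavailable and a derivative-free method is the natural choice.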
Validation of AquaCrop
Validating AquaCrop is crucial for reliable yield estimations and irrigation recommendations. Figure 4 illustrates the comparison results for the eight corn farms between the HRLDAS simulations and the AquaCrop-simulated soil moisture. At the same site, AquaCrop generated very similar soil moisture to HRLDAS, with an average Root Mean Square Error (RMSE) of around 0.013 and an average R2 of around 0.77 (Table 2). This demonstrates the ability of AquaCrop to simulate soil moisture accurately and ensures the reliability of the yield estimation. With farmers reporting integrated irrigation information, the yields for the eight study sites were estimated by AquaCrop and compared with actual yields, as shown in Table 3. Soil moisture in 2019 was generally higher than that in 2020 during the growing season, as shown in Figure 4. This indicates that the weather was drier in 2020 than in 2019, possibly because there was more rainfall in 2019. The RMSE in Table 2 was smaller in 2019 than in 2020, but R2 was slightly lower in 2019, which indicates that AquaCrop simulates the time variation of the soil moisture fluctuations better in a drier year, but may be inferior in capturing the absolute magnitude of soil moisture. High accuracy of yield estimation is observed in Table 3, while in most cases AquaCrop overestimates yield slightly, which may be associated with its approach to modeling crop growth and yield under different levels of plant water stress, assuming that all other conditions are perfect (for example, nutrient management and pest control).
Overall, the validation of the simulated soil moisture and estimated yield demonstrates that AquaCrop can provide an accurate yield estimation when reliable inputs are available. Thus, we can assess our irrigation methods based on yield estimation and optimize thresholds for triggering irrigation to maximize the yield.
Threshold-Based Irrigation Scheduling
Four crop stages are first determined by the accumulated GDD, and four different thresholds are set to represent the dynamic nature of crop water demands. The ET-WB-based method is first applied based on the HRLDAS-derived daily ET. Figure 5a shows the optimized thresholds in the different crop stages for the eight study sites. Overall, the thresholds fluctuate around the FAO-recommended value of 50% in the wet season. In the drier season (2020), the thresholds are generally lower (floating around 40%) than those in the wetter season, and the water demand is highest in the development stage, whereas in the wet season, crops demand more water in the mid- and late-season stages. This is reasonable because, in a dry weather pattern, a lower threshold guarantees timely and more frequent irrigation, which can efficiently prevent crops from experiencing water stress.
With the thresholds determined, irrigation is scheduled based on the water deficit. Compared to the actual yield and total irrigation amount, our irrigation schedule can save roughly 10% of the total irrigation water amount while maintaining a similar yield in the dry season, when water demands are high during crop growth, whereas in the wet season, with less water demand, roughly 20% of irrigation water can be saved, as shown in Table 4 and Figure 6a. The highest conservation percentage of irrigation water was observed at the Kelly site in 2019, with a slightly reduced yield. The decreased yield may be related to the lowest scheduled amount of irrigation. The same optimization of the thresholds is implemented using the HRLDAS soil moisture, and the results are presented in Figure 5b. The threshold ranges and their tendencies are very similar to those of the ET-WB-based method. The only difference is that in the wet season (2019), the soil-moisture-based method indicates high thresholds in the late-season stage, which may be associated with a low water demand from the crop in this stage. This is reasonable because, with adequate rainfall in the wet season, the crop might not require much water during the late-season stage, whereas, in the dry season, an insufficient water supply during the late season may cause yield loss. The low thresholds in the wet season that we obtain from the ET-based method might be caused by accumulative errors in the daily soil water deficit calculation.
Similar to the ET-based method, the yield and total irrigation amount are then estimated using the AquaCrop model based on the optimized thresholds (Table 4 and Figure 6b). The result is quite consistent with the previous one from the ET-based method, where more water is conserved in the wet season compared to that in the dry season. The slight difference is that in the wet season, although less irrigation water is applied, the estimated yield also decreases a little, whereas, in the dry season, the soil-moisture-based method schedules more irrigation to be applied, and the yield remains at a similar level. Overall, both the ET-WB-based method and the soil-moisture-based method utilizing model simulations of ET and soil moisture exhibit good performance, generating acceptable results for saving water and preventing yield loss.
Irrigation Scheduling Integrating Short-Term Rainfall Forecasts
Compared to conventional irrigation scheduling, integrating rainfall forecasts can further save irrigation water without significant yield loss.
Figure 7 shows the extra amount of water saved when rainfall forecasts are integrated into irrigation scheduling. An additional 10% of irrigation water is saved during the dry season, and much more can be saved in the wet season, totaling around 40%. The final yield from both the ET-WB-based method and the soil-moisture-based method is not significantly reduced (Table 5), although a slight yield reduction of around 0.1~0.2 ton/ha is observed. A higher amount of irrigation water can be conserved in the wet season when short-term rainfall forecasts are considered, as higher and more frequent rainfall is the basic feature of wet weather patterns. Even so, integrating the rainfall forecasts still saves a considerable amount of irrigation water in the dry season, which demonstrates its necessity and reliability in irrigation scheduling.
Future Work
Although the thresholds are optimized for achieving the highest yield and the irrigation scheduling based on these thresholds is proven again in this study to be efficient in water saving, no significant yield improvement is observed. This may be associated with an inherent feature of threshold-based methods, which basically determine irrigation time and amount based on the current status; the long-term yield return is not in their scope. Irrigation scheduling methods based on artificial intelligence algorithms, such as reinforcement learning and deep neural networks, are a good choice for maximizing seasonal yield or economic return. Meanwhile, although the irrigation scheduling is automated owing to the improved availability of high-resolution soil moisture and ET data in this study, crop information such as crop planting or emergence date still relies heavily on farmers' reports or inputs. This hampers the popularization of scientific irrigation scheduling over a larger region and a wider range of users. We used GDD to roughly estimate the crop stages, but this may be unavailable or inaccurate when local crop stage dates and temperature records are missing. Thus, integrating technologies such as within-season crop emergence date generation [36,37] into the automation process of irrigation scheduling is a direction for future research.
Conclusions
Irrigation is an integral part of agriculture. This study proposed an automated data-driven irrigation scheduling approach that utilized HRLDAS soil moisture and ET products, which are generated in a near-real-time manner. Simulations and validations were performed at eight experiment sites in Nebraska. The findings of this study demonstrate the potential of using model simulations in conjunction with threshold-based irrigation scheduling approaches to guide irrigation management and achieve water savings without yield loss. Four dynamic thresholds were determined using a downhill simplex algorithm to represent the varying water demands of crops at different growth stages. AquaCrop was validated to ensure reliable yield estimations before the optimization of the thresholds. The results indicate that all the approaches were effective in reducing water consumption while maintaining crop productivity. Interestingly, the analysis suggests that the potential for water saving may vary depending on the season, with a greater potential for savings in wet seasons compared to dry seasons, with an approximate saving of up to 10%.
To further optimize the water-saving potential of the approach, rainfall forecasts were integrated into the irrigation scheduling. The results indicated that the integration of rainfall forecasts led to even higher water savings, with an additional 20% reduction in water consumption during wet seasons and a further 10% reduction during dry seasons compared to traditional irrigation practices. This approach not only saves water but also helps to avoid invalid irrigation just before subsequent rainfall, which can improve crop health and reduce waterlogging risks. The findings of this study have significant implications for the sustainable management of water resources in agriculture and highlight the importance of incorporating model simulations and weather forecasting into irrigation scheduling.
Figure 1. The location of Nebraska.
2.1. Soil Moisture and ET Map
Soil moisture and ET are the key parameters in most irrigation decision-making methods. The HRLDAS generates hourly maps of soil moisture and ET at a spatial resolution of 500 m covering Nebraska from 2019 to the present in a near-real-time manner. Figure 2a,b show sample maps of the hourly updated HRLDAS soil moisture and the daily ET accumulated from hourly HRLDAS ET, which are visualized on the WaterSmart DIP.
Figure 3. The flow chart of threshold optimization in irrigation scheduling.
Figure 4. Simulated soil moisture from AquaCrop (orange) and HRLDAS soil moisture (blue) for the 8 study sites in the 2019 (left) and 2020 (right) growing seasons.
Figure 6. Irrigation amount (mm) saved compared with the actual irrigation amount ('IA') using (a) ET-WB ('IA_ET_R0') and (b) soil moisture ('IA_SM_R0') irrigation schedule methods. The percentage number in the figure denotes the water saved compared with the actual irrigation amount.
Figure 7. Irrigation amount (mm) saved using (a) the ET-WB-based irrigation schedule method, where 'IA_ET_R1' is the irrigation amount when rainfall forecasts are integrated and 'IA_ET_R0' is the irrigation amount when rainfall forecasts are not integrated; (b) the soil-moisture-based irrigation schedule method, where 'IA_SM_R1' is the irrigation amount when rainfall forecasts are integrated and 'IA_SM_R0' is the irrigation amount when rainfall forecasts are not integrated. The percentage number in the figure denotes the water saved using the different methods.
Table 1. Total irrigation amount (mm) and yield (ton/ha) in the 8 study sites.
Table 2. Accuracy of soil moisture retrieved from AquaCrop compared to HRLDAS products. RMSE (root mean square error) is in m3/m3. R2 is the correlation coefficient.
Table 3. Yield estimations from AquaCrop and actual yields, both in ton/ha.
Table 4. Yield estimations and total irrigation amount estimated from the ET-WB-based and soil moisture (SM)-based irrigation schedules, as well as the water saved compared with the actual irrigation situation.
Table 5. Yield estimations and total irrigation amount estimated from the ET-WB-based and soil moisture (SM)-based irrigation schedules, as well as the water saved compared with the actual irrigation situation using the short-term rainfall forecasts.
2023-08-30T15:19:23.811Z
2023-08-26T00:00:00.000
{ "year": 2023, "sha1": "2a939c00cf3f2cbad9c5e0469fc5b7d6d2475b4b", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2071-1050/15/17/12908/pdf?version=1693038339", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "bec1823aa721208895be81b590dc7c20098dd910", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [] }
55434957
pes2o/s2orc
v3-fos-license
The MoEDAL experiment at the LHC
MoEDAL is a pioneering experiment designed to search for highly ionising messengers of new physics such as magnetic monopoles or massive (pseudo-)stable charged particles, that are predicted to exist in a plethora of models beyond the Standard Model. Its ground-breaking physics program defines a number of scenarios that yield potentially revolutionary insights into such foundational questions as: are there extra dimensions or new symmetries; what is the mechanism for the generation of mass; does magnetic charge exist; what is the nature of dark matter; and, how did the big-bang develop at the earliest times. MoEDAL's purpose is to meet such far-reaching challenges at the frontier of the field. In conclusion we will briefly report on current results; discuss plans to instal a new detector designed to search for very long-lived neutral particles as well as mini-charged particles; and, briefly delineate plans for an astroparticle extension of MoEDAL called Cosmic-MoEDAL.
Introduction
MoEDAL (Monopole and Exotics Detector at the LHC) [1][2][3], the 7th experiment at the Large Hadron Collider (LHC) [4], was approved by the CERN Research Board in 2010. It is designed to search for manifestations of new physics through highly-ionising particles in a manner complementary to ATLAS and CMS [5]. The most important motivation for the MoEDAL experiment is to pursue the quest for magnetic monopoles and dyons at LHC energies. Nonetheless the experiment is also designed to search for any massive, stable or long-lived, slow-moving particles [6,7] with single or multiple electric charges arising in many scenarios of physics beyond the Standard Model (SM). A selection of the physics goals and their relevance to the MoEDAL experiment are described here and elsewhere [8]. For an extended and detailed account of the MoEDAL discovery potential, the reader is referred to the recently published MoEDAL Physics Review [9]. The structure of this paper is as follows. Section 2 provides a brief description of the MoEDAL detector. The physics reach of MoEDAL is discussed in Sect. 3, whilst Sect. 4 is dedicated to a discussion of recent results. Section 5 deals with the proposal to instal the MoEDAL detector for Penetrating Particles (MAPP) and a possible astroparticle extension of the MoEDAL-LHC project, i.e. Cosmic-MoEDAL.
The MoEDAL detector
The MoEDAL detector [2] is deployed around the intersection region at Point 8 of the LHC in the LHCb experiment Vertex Locator (VELO) [10] cavern. A three-dimensional depiction of the MoEDAL experiment is presented in Fig. 1. It is a unique and largely passive LHC detector comprised of three sub-detector systems, with a fourth in the planning stage. MoEDAL bypasses the experimental difficulties encountered by the main LHC experiments in the detection of Highly Ionising Particle (HIP) messengers of new physics by using a passive plastic NTD technique to detect the ionisation trail of HIPs, as well as a novel trapping array - called the MMT (Magnetic Monopole Trapper) - for detecting HIPs that slow down and stop within its sensitive volume. Neither of these detector systems requires trigger or read-out electronics. As the MoEDAL NTD stacks are less than 1 cm thick there is little chance that a HIP will be absorbed. Also, NTDs provide a tried-and-tested and cost-effective method to accurately measure the track of a HIP and its effective charge.
Importantly, MoEDAL's exposed HIP films will be directly calibrated in a heavy-ion beam. MoEDAL's roughly one tonne of MMT detectors ensures that a small but significant fraction of the HIPs produced will be trapped for further study in the laboratory. MoEDAL's ability to retain a permanent record, and even capture new particles for further study, will make it an invaluable asset in the elucidation of any Terascale BSM scenario covered by its extensive physics repertoire. There are no SM particles that can produce such distinct signatures; thus, even the detection in MoEDAL of a few HIP messengers of new physics would herald a discovery. The only 'real-time' sub-detector system is the TimePix2 pixel array that will be used to monitor low-energy highly ionising beam-related backgrounds.
The NTD detector sub-system
The Low Threshold (LT-NTD) array, part of the NTD sub-detector system, is the largest array of NTD detectors ever deployed at an accelerator. The NTD plastics employed are polyallyl diglycol carbonate (PADC), commonly known as CR39®, a transparent rigid plastic, and the polycarbonate Makrofol®. Each of the roughly 300 (25 × 25 cm²) LT-NTD stacks is comprised of three sheets of CR39® and three of Makrofol®, as shown in Fig. 2. The CR39® layers in the LT-NTD array can detect particles with ionisation equivalent to that of five relativistic singly charged particles, with a charge resolution better than 0.2 e, where e is the charge of the electron. The TDR NTD array has been enhanced by the high charge catcher (HCC) sub-detector - with a threshold Z/β of approximately 50 - comprised of stacks of three Makrofol® plastic sheets in an Al foil envelope. These lightweight low-mass detector stacks can be deployed in previously inaccessible areas on and around LHCb's VELO (Vertex Locator) detector, increasing the geometrical acceptance for magnetic monopoles to ∼60%. Importantly, a plane of roughly 4 m² of HCC detectors - termed 'the shower curtain' - has been placed in the forward acceptance of the LHCb detector between LHCb's RICH (Ring Imaging Cherenkov detector) and its first station of tracking detectors (TT), where the material budget is at a minimum. Although NTD technology was used in the past to search for monopoles [11], this is the first time at an accelerator that such a large-area array of NTDs has been used to search for a wide range of magnetically and electrically charged HIPs.
The passage of a highly-ionising particle through the plastic detector is marked by an invisible damage zone along the trajectory. The damage zone is revealed as a cone-shaped etch-pit when the plastic detector is etched using a hot sodium hydroxide solution. Then the sheets of plastic are scanned, looking for aligned etch pits in multiple sheets, using "intelligent" computer-controlled optical scanning microscopes. A HIP with ionising power greater than or equal to 5 times that of a relativistic charged SM particle will leave a characteristic set of at least 6 collinear etch pits in the 3 CR39® sheets in each NTD stack. Magnetic monopoles, each with an ionising power that is thousands of times that of a SM particle, will leave an unprecedented trail of 12 etch pits in a MoEDAL stack with 6 NTD sheets. These aligned etch pits, of size typically in the range 20-50 microns, accurately define a track that points towards the Interaction Point (IP). There is no known Standard Model background to this signal.
The trapping detector sub-system
The MMT is the newest sub-detector system to be added to the MoEDAL detector.
The MMT detector consists of roughly 1 tonne of aluminium (Al) paramagnetic volumes placed at three points around the intersection point, IP8, that MoEDAL shares with the LHCb experiment. Al has an enhanced capability to trap monopoles due to its anomalously large nuclear magnetic moment. A fraction of the massive HIPs created will stop and be captured in the MMT detector, as illustrated in Fig. 3. MoEDAL is the first experiment to use purpose-made trapping volumes to capture magnetic and electrically charged particles. The exposed monopole trapping volumes are monitored at the ETH Zurich SQUID facility for the presence of captured monopoles; a schematic description of the facility is given in Fig. 4. The use of SQUIDs to detect trapped magnetic charge has also been thoroughly tested in particle and astroparticle experiments where the search has been performed on 'found' or alternate-use objects (such as beam-pipes) on a 'one-off' basis. The signal for a magnetic monopole in the monopole trapping detectors at the ETH facility would be a sustained current resulting from the passage of a monopole through the SQUID detector. Test solenoids are used to calibrate the response of the SQUID to a trapped monopole. Using test solenoids, we found that the SQUID can detect magnetic charges as small as 0.1 gD. After the SQUID scan has been performed, it is envisaged that the trapping volumes will be sent to SNOLAB - 2 km underground - to be monitored for the decays of very long-lived electrically charged particles (τ > 10^7 s). The MoEDAL search for very long-lived particles using a dedicated detector deployed deep underground has some advantages over searches carried out with the LHC GPEs. For example, MoEDAL has no trigger requirement and thus can perform relatively model-independent studies. Using long test solenoids to mimic a monopole, we have determined that the SQUID can detect a trapped monopole with a magnetic charge as small as 0.1 of a Dirac charge (0.1 gD).
The TimePix radiation monitoring system
An array of six TimePix2 pixel detectors is used to monitor the low-energy highly-ionizing beam-related backgrounds. Each pixel of the TimePix chip contains a preamplifier to enhance the signal, a discriminator to severely reduce electronic noise, a 4-bit DAC for threshold adjustment, and a 14-bit counter. MoEDAL uses the TimePix device's 'Time-over-Threshold' (ADC) and 'Time of Arrival' modes so that each pixel can supply an energy measurement. A photograph of a TimePix pixel chip is shown in Fig. 5. The TimePix detector is capable of providing a colour image of complete spallation events in its 300 micron thick silicon sensitive volume, with energy encoded in the colour. The TimePix sub-detector is read out via the web.
The physics reach of MoEDAL
As discussed above, the standard general-purpose collider detectors at the LHC are not designed or optimised to detect massive slow-moving HIPs or mQPs. MoEDAL's new light on the high-energy frontier is provided by the use of massive HIPs or mQPs - for which there are no SM counterparts - as direct probes of pioneering new physics at the Terascale. Such an approach requires a new state-of-the-art in the quest for massive HIP messengers of beyond-the-SM physics, which is provided by the custom-designed MoEDAL experiment.
MoEDAL's physics programme [9] covers many fundamentally important BSM scenarios, allowing MoEDAL to significantly expand the LHC's discovery horizon in a complementary way, as illustrated in the pie chart shown in Fig. 6. One of the main objectives for MoEDAL is the state-of-the-art search for the magnetic monopole, just as the search for the Higgs was the prime motivation for the LHC and its GPEs. The work of our Theory Board has renewed interest in the LHC search for a Terascale EW monopole [9,12] arising from the SM. If discovered, this would be the first topological particle to be observed, with the utmost consequences for our understanding of the Universe. But MoEDAL is designed to do much more, as indicated in Fig. 6, by probing BSM physics through searches for anomalously electrically charged particles.
Recent MoEDAL results
The first MoEDAL results utilized 160 kg of prototype MoEDAL trapping detector exposed to 8 TeV proton-proton collisions at the LHC, for an integrated luminosity of 0.75 fb⁻¹ during LHC's Run I. No magnetic charge exceeding 0.5 gD was detected in any of the exposed samples, allowing limits to be placed on monopole production in the mass range 100 GeV ≤ M_monopole ≤ 3500 GeV [13]. Model-independent cross-section limits have been presented in fiducial regions of monopole energy and direction for 1 gD ≤ |g| ≤ 6 gD, and model-dependent cross-section limits are obtained for Drell-Yan (DY) pair production of spin-1/2 and spin-0 monopoles for 1 gD ≤ |g| ≤ 4 gD. Under the assumption of Drell-Yan cross sections, mass limits are derived for |g| = 2 gD and |g| = 3 gD for the first time at the LHC, surpassing the previous result from the ATLAS Collaboration [14,15], which placed limits only for monopoles with magnetic charge |g| = 1 gD. The first search for magnetic monopole production in 13 TeV proton-proton collisions during LHC's Run-2 using the trapping technique has extended the previous results obtained with 8 TeV data during LHC's Run-1. In this case a total of 222 kg of MoEDAL trapping detector samples was exposed in the forward region.
Future MoEDAL developments
A key aspect of the MoEDAL experiment is that it is sensitive to massive, slow-moving and very highly ionising particles, messengers of new physics for which the standard LHC GPE detectors are not optimised. MoEDAL is preparing a proposal to add a new sub-detector that is sensitive to particles with charge as small as a thousandth that of the electron - a mini-charged particle (mQP). The main LHC detectors are essentially blind to such particles. This addition to its detector system is consistent with MoEDAL's ethos of extending the physics reach of the LHC by searching for anomalously charged messengers of new physics in a way that is complementary to the existing capability provided by the main LHC detectors. MoEDAL is proposing to deploy the MAPP (MoEDAL apparatus for detecting penetrating particles) in a tunnel shielded by some 30 m to 50 m of rock and concrete from the interaction point (IP8), as shown in Fig. 8. The purpose of the detector is to search for particles with fractional charge as small as one-thousandth the charge of an electron. This detector would also be sensitive to neutral particles from new physics scenarios via their interaction or decay in flight within the volume of the detector. The isolation of the detector means that the huge background from SM processes in the main detectors is largely absent.
The first apparatus specifically designed to detect mini-charged particles was the SLAC (Stanford Linear Accelerator Centre) 'beam dump' type detector, comprised of scintillator bars read out by photomultiplier tubes [16]. MoEDAL's new detector, shown in Fig. 9, and another apparatus proposed for deployment near to the CMS detector [17], also designed to search for mini-charged particles, both have a design that harks back to the original SLAC detector. In order to reduce backgrounds from natural radiation, the photomultiplier tubes and scintillator detectors of the MoEDAL apparatus will be constructed from materials with the low natural backgrounds currently utilised in the astroparticle physics arena. Its calibration system utilises neutral density filters to reduce the light received from energetic incident muons that manage to penetrate to the sheltered detector from the interaction point, in order to mimic the much lower light levels expected from particles with fractional charges.
Cosmic-MoEDAL
The MoEDAL collaboration is preparing an astroparticle extension to the MoEDAL-LHC experiment that will enable the search, for example, for magnetic monopoles to be extended from the TeV scale at the LHC up to the Grand Unification (GUT) scale. In addition, we propose to use the same detector technology for 'Cosmic-MoEDAL' as we use for MoEDAL-LHC. SLIM was the first experiment to use such an approach to extend the search for cosmic monopoles, with high sensitivity, to masses well below the GUT scale. SLIM was necessarily deployed at high altitude, at the Mt Chacaltaya laboratory in Bolivia at an elevation of 5,400 m. However, SLIM's modest size (400 m²) precluded it from the search for a flux of cosmic monopoles below the Parker Bound (an upper bound on the flux of magnetic monopoles that is obtained from arguments based on the existence of a galactic magnetic field). Cosmic-MoEDAL is proposed as a 50,000-100,000 m² array of plastic NTDs (CR39®) deployed at high altitude. Such an array would be able to take the search for cosmic monopoles with velocities β ≳ 0.1 from the TeV scale to the GUT scale for monopole fluxes well below the Parker Bound. Possible sites for Cosmic-MoEDAL include Chacaltaya (5 km) and Tenerife-Teide (3 km). An artist's impression of Cosmic-MoEDAL on Mt Chacaltaya is shown in Fig. 10.
Summary and conclusion

In 2015, the LHC restarted operations at the unparalleled beam energy of 6.5 TeV and an enormous collision rate of around a billion collisions per second. The LHC has been compared to a time machine, enabling us to recreate the conditions that occurred when the Universe was only roughly 100 picoseconds old and providing an unprecedented laboratory for the study of the cosmology of the nascent Universe at the earliest times. In parallel, non-accelerator experiments are also exploring the Tera-universe via high-energy astrophysics. The first deployment of the MoEDAL detector, employing largely passive detector systems tuned to the prospect of discovery physics, was at Point 8 on the LHC ring in the winter of 2014. As we have seen, the novel MoEDAL detector has a dual nature. First, it acts like a giant camera, composed of NTDs that are analysed offline by ultra-fast scanning microscopes and are sensitive only to new physics. Second, it is uniquely able to trap the particle messengers of physics beyond the SM for further study. MoEDAL's radiation environment is monitored by a state-of-the-art real-time TimePix pixel detector array. A new MoEDAL sub-detector, MAPP, sensitive to mini-charged particles and long-lived neutral particles, is being studied.

The first MoEDAL results, obtained using only a small part of the overall detector system, have provided the world's best limits on multiply charged monopoles. In 2017 and beyond we expect to exploit MoEDAL's full detector repertoire to provide further world-class limits on highly ionizing, magnetically and electrically charged particles from a number of well-motivated theoretical scenarios. In addition, MoEDAL has an aggressive plan to extend the physics reach of MoEDAL-LHC with the MAPP detector and a possible astroparticle extension, called Cosmic-MoEDAL, that would be capable of taking the search for, for example, magnetic monopoles up to the GUT scale.

In July 2012 the ATLAS and CMS experiments operating at the LHC announced the discovery of the Higgs boson, but the LHC is only just getting started. Many fundamental questions remain to be answered. Are there new symmetries of nature? Are there extra spatial dimensions? Is there a deeper substructure? Does magnetic charge exist? Why is gravity so weak compared to the other fundamental forces? What is the nature of dark matter? What was the physics of the earliest era of the universe? The list goes on. MoEDAL is designed to provide insights into such fundamental questions.
Formation of Cementite from Titanomagnetite Ore

Introduction

Titanium-containing magnetite ore (titanomagnetite ore or ironsand) is used as a source of iron in ironmaking. The reduction behaviour of titanomagnetite ore attracts attention from the viewpoint of its commercial processing and is extremely interesting from the viewpoint of the effect of the ore chemistry and morphology on reduction mechanisms and kinetics. Ironsand is found in many volcanic areas around the world. Its mineralogy has been studied extensively by geologists. It is mainly composed of homogeneous titanomagnetite particles.1-3) Titanomagnetite is a solid solution of magnetite and ulvospinel, with a spinel cubic structure. The ulvospinel fraction x in the solid magnetite-ulvospinel solution (Fe3O4)1-x·(Fe2TiO4)x in the New Zealand ironsand studied in this work was 0.27±0.2.4) The cation (Fe2+, Fe3+ and Ti4+) distribution in the titanomagnetite lattice depends on composition and temperature.5-9) There are two kinds of phase separation in titanomagnetite: 1) phase separation caused by the miscibility gap between magnetite and ulvospinel9,10) and 2) phase separation caused by partial oxidation of titanomagnetite.1,2,11-13) The latter case is called exsolution, which is due to the low solubility of the rhombohedral phase in a cubic phase.2) The chemistry and morphology of titanomagnetite ore have a profound effect on its reduction behaviour. It is well established that carbothermal reduction of titanomagnetite ore, which is the major industrial technology for processing the ore, is slower than that of hematite ore.3,14,15) A similar situation was observed in gaseous reduction by carbon monoxide3,14,16) and hydrogen.17)
The reduction of titanomagnetite by carbon monoxide can be presented by the following reaction16):

Fe3-xTixO4 + (4-2x)CO = (3-x)Fe + xTiO2 + (4-2x)CO2

It proceeds with the formation of intermediate wüstite and ulvospinel phases. The reduction of titanomagnetite to wüstite was the slowest step, while wüstite transformed to metallic iron quickly.16) Preoxidation of ironsand accelerated its reduction, which was attributed to the structural transformation of spinel cubic titanomagnetite to rhombohedral titanohematite during oxidation. In the reduction of preoxidised ironsand, the volume increase during the transformation of titanohematite back to titanomagnetite accelerates and facilitates further reduction reactions. The reduction behaviour of titanomagnetite was also found to be different from that of magnetite iron ore, although both minerals have the same crystal structure. Titanium in titanomagnetite stabilises the spinel structure and changes the thermodynamics of reduction. The slow reduction of ironsand in comparison with hematite and magnetite iron ores is due to two factors: 1) the spinel cubic structure of titanomagnetite and 2) the stability of titanomagnetite.16,17) These features of the reduction behaviour of titanomagnetite ore stimulate further interest in its study, particularly using methane-containing gas, which has not been reported in the literature. The aim of this study was to obtain data on the reduction behaviour of titanomagnetite by methane-containing gas, the stability of cementite formed in the reduction of titanomagnetite, and elucidation of the mechanisms of cementite decomposition. This paper presents the results of a study of reduction behaviour and formation of cementite. Cementite decomposition will be discussed in another paper.

Experimental

The composition of the titanomagnetite ore (New Zealand ironsand) studied in this paper is given in Table 1. The reduction behaviour of the Mt Whaleback hematite ore was examined for comparison; its composition is also given in Table 1. The ore was sized, with only the 150 to 255 μm fraction being used in this study. The titanomagnetite ore was pre-oxidised in a muffle furnace at 1 000°C under air for 4 days. Reduction and cementation of iron ores by CH4-H2-Ar gas mixtures was studied using a lab-scale fixed-bed reactor in a vertical tube electric furnace. The experimental set-up has been described elsewhere.18) The gas flow rate was maintained at 1 L·min⁻¹, which was sufficient to neglect the external mass-transfer resistance in the gas phase. A nominally 1 g sample of iron ore was lowered into the hot zone of the furnace under argon. When the sample reached the experimental temperature, a reducing/carburising gas of varying composition was passed through the reactor. A summary of the experimental conditions is presented in Table 2. The sample was quenched after the experiment under argon. The specific surface area of samples was measured by a single-point BET method. The degree of reduction was defined as the fraction of oxygen removed in the reduction of iron oxides, determined on the basis of on-line mass-spectrometric analysis of the off-gas composition. Cementite formation was analysed using quantitative XRD analysis, conducted using a copper Kα source. The scans were done at a rate of 1°/min with a step size of 0.2° 2θ. The morphology of samples was examined by both optical microscopy and SEM. Etching of the optical microscopy samples in a basic sodium picrate solution allowed differentiation between cementite and metallic iron.
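The degree-of-reduction definition above lends itself to a short numerical illustration. The sketch below is our own reconstruction of the bookkeeping, not the authors' code: the off-gas flow profiles and the removable-oxygen inventory are invented example values.

```python
import numpy as np

# Hypothetical off-gas molar flows (mol/min) from the mass spectrometer.
t = np.arange(0.0, 30.0, 1.0)                 # time, min
n_h2o = 1.5e-3 * np.exp(-t / 8.0)             # H2O from reduction by hydrogen
n_co  = 0.2e-3 * np.exp(-t / 8.0)             # CO carries one O atom per mol
n_co2 = 0.15e-3 * np.exp(-t / 8.0)            # CO2 carries two O atoms per mol

# Each mol of H2O or CO removes one O atom from the oxide; CO2 removes two.
o_rate = n_h2o + n_co + 2.0 * n_co2           # mol O / min
o_removed = np.cumsum(o_rate) * 1.0           # rectangle-rule integral, dt = 1 min

# Removable oxygen in a nominal 1 g charge, counting only oxygen bound to
# iron (Ti, Ca and Mg oxides stay unreduced); 0.017 mol is an illustrative
# value of the order expected for 1 g of titanomagnetite.
o_total = 0.017
degree_of_reduction = np.clip(o_removed / o_total, 0.0, 1.0)
print(f"degree of reduction after 30 min ~ {degree_of_reduction[-1]:.2f}")
```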
Reduction of Titanomagnetite Ore at Different Temperatures and Gas Compositions

The effect of temperature on titanomagnetite reduction was studied in the temperature range 600 to 1 100°C using a gas containing 25 vol% H2, 5 vol% CH4 and 70 vol% Ar. The change in the degree of reduction with time at different temperatures is shown in Fig. 1. Increasing the temperature from 600 to 1 000°C increased the rate of reduction, which is expected as the reduction rate constant increases with temperature (an illustrative Arrhenius calculation is sketched after the ore-comparison discussion below). Further increasing the temperature from 1 000 to 1 100°C had little effect on the rate and extent of reduction. This is due to sintering and the consequent reduction in the surface area of the ore at high temperatures, shown in Fig. 2. The amount of carbon deposited within the sample increased as the temperature increased.

The effect of the methane content of the reducing gas on the reduction of titanomagnetite ore was studied by varying the methane content of the gas from 0 to 20 vol%, with a fixed hydrogen content of 50 vol%, at 900°C. The change in the degree of reduction with time for the reduction of titanomagnetite ore with different methane contents is shown in Fig. 3. The methane content of the reducing gas did not have a significant effect on the reduction behaviour.

The effect of the hydrogen content of the reducing gas on the reduction of titanomagnetite ore was studied by varying the hydrogen content from 0 to 70 vol% with a fixed methane content of 5 vol% at 900°C. The change in the extent of reduction with time for the reaction of titanomagnetite ore with reducing gases of different hydrogen contents is shown in Fig. 4. It can be seen that when there was no hydrogen in the reducing gas, reduction by methane alone was very slow. The rate of reduction increased with increasing hydrogen content up to around 30 vol%. A further increase in the hydrogen content had no significant effect on the reduction rate.

Comparison of Reduction Behaviour of Raw Titanomagnetite, Preoxidised Titanomagnetite and Hematite Ores

In the preoxidation of titanomagnetite ore, titanomagnetite was partially oxidised to titanohematite. The change in the ore morphology and its effect on reduction behaviour using CO gas was studied in Ref. 16). It was shown that preoxidation increased the rate of reduction of titanomagnetite, approaching the reduction rate of hematite ore. Cementite formed in the reduction of hematite ore is unstable at temperatures above 900°C. To compare the reduction behaviour of the raw and preoxidised titanomagnetite ores with that of hematite ore using methane-containing gas, the reduction temperature was lowered to 750°C. Reduction curves for the raw titanomagnetite, preoxidised titanomagnetite and hematite ores in reduction experiments at 750°C using a gas containing 35 vol% CH4, 55 vol% H2 and 10 vol% Ar are shown in Fig. 5. As in the reduction by carbon monoxide, preoxidation of titanomagnetite to titanohematite sped up the reduction. The reduction curve for the pre-oxidised titanomagnetite ore is close to that of the hematite ore. The further studies into the carburisation of the titanomagnetite ore were therefore mainly conducted on the pre-oxidised ore.
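As flagged in the temperature discussion above, the rate increase from 600 to 1 000°C is what an Arrhenius-type rate constant predicts, while the plateau above 1 000°C is instead attributed to sintering. The sketch below is purely illustrative: the activation energy is an invented placeholder, not a value fitted to these data.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius_ratio(ea_j_per_mol: float, t_c: float, t_ref_c: float = 600.0) -> float:
    """k(T)/k(T_ref) for k = k0 * exp(-Ea / (R*T)); temperatures in Celsius."""
    return math.exp(ea_j_per_mol / R * (1.0 / (t_ref_c + 273.15) - 1.0 / (t_c + 273.15)))

Ea = 150e3  # J/mol -- an illustrative placeholder, not fitted to Figs. 1-2
for T in (600, 800, 1000, 1100):
    print(f"{T:>5} C -> k/k(600 C) ~ {arrhenius_ratio(Ea, T):.0f}")
# The intrinsic chemical rate keeps climbing past 1 000 C; the observed
# plateau there reflects the sintering-driven loss of reactive surface
# area (Fig. 2).
```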
Cementite Formation in Reduction of Titanomagnetite and Hematite Ores

Cementite formation was studied in reduction/carburising experiments with hematite iron ore and preoxidised titanomagnetite ore at 750°C and 925°C, using a gas containing 35 vol% methane, 55 vol% hydrogen and 10 vol% argon at a total gas flow rate of 1 L/min. The formation of cementite from pre-oxidised titanomagnetite at 750°C is shown in Fig. 6. The mass fraction of the iron-containing phases was determined by quantitative XRD. A number of samples were also analysed using Mössbauer spectroscopy to validate the XRD data, as described in Ref. 19). The results of the XRD and Mössbauer analyses were in good agreement. At 750°C, reduction of titanohematite to titanomagnetite was practically instant. Conversion reactions of magnetite to wüstite and of wüstite to metallic iron occurred in parallel, while the cementation reaction started 5-10 min after the appearance of metallic iron. While the reduction of the two ores occurs at a similar rate, the rate of formation of cementite from the two ores was quite different. The formation of cementite from the pre-oxidised ironsand was significantly slower than from the hematite ore at 750°C, taking around 30 min for full conversion to cementite, as opposed to around 12 min for hematite (Fig. 7). At 925°C, cementite started to form much faster, as shown in Fig. 8. At this temperature, conversion of iron to cementite was not complete as a result of cementite decomposition. It should be mentioned that in the reduction/carburisation of hematite ore at 925°C, conversion of iron to cementite reached a much lower degree because of rapid cementite decomposition. Photomicrographs of pre-oxidised titanomagnetite ore after reduction/cementation at 750°C for 30 min are shown in Fig. 9. Although XRD analysis showed that conversion of iron to cementite at this temperature is close to completion in 30 min, a few iron grains (Fig. 9(b)) and segments of iron rim on particle edges are clearly seen in these photomicrographs (Fig. 9(c)). The particles have a porous structure formed in the reduction of titanomagnetite.

Cementite Formation

Cementite formation in the process of reduction of hematite ore by methane-hydrogen gas includes the following stages18,20,21):
1. Reduction of iron oxides to metallic iron by hydrogen.
2. Adsorption of methane on the surface of reduced iron and its decomposition with the formation of highly active adsorbed carbon and hydrogen, which is catalysed by metallic iron.
3. Dissolution of adsorbed carbon into the metal, and diffusion of dissolved carbon into the metallic iron.
4. Reaction between iron and dissolved carbon to form cementite (the activity of dissolved carbon is higher than unity relative to graphite).

The overall chemical reaction on the surface of the iron is given by:

CH4 = [C] + 2H2 .............................(1)

where [C] is carbon dissolved within the metallic iron. For the hematite ore, the rate of iron cementation is controlled by the chemical reaction of methane adsorption and decomposition on the iron surface.18,20,21) This was supported by the fact that surface-active sulphur slowed down the rate of cementation. It was suggested21) that the rate of reaction (1) is proportional to the iron surface area A and the fraction of the iron surface area available for adsorption, 1-θ, and is a function of the partial pressures of methane and hydrogen, f(P_CH4, P_H2). The mechanism of cementite formation from the titanomagnetite ore is similar to that from hematite. Therefore, the specific surface area and the fraction of the iron surface area available for adsorption are important factors in cementite formation. The two ores have significantly different surface properties in their original states. The specific surface area of the hematite ore was found to be ~3.6 m² g⁻¹ and that of the pre-oxidised titanomagnetite ore ~0.08 m² g⁻¹. However, in samples reduced in a 25 vol% H2-Ar gas mixture, the specific surface area was found to be around 4.8 m² g⁻¹ for the hematite ore22) and 7.1 m² g⁻¹ for the ironsand.23) Thus, the difference in the surface area of the reduced iron is not likely to have caused the decrease in the rate of cementite formation for the pre-oxidised titanomagnetite ore.
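The suggested rate form, rate = k·A·(1-θ)·f(P_CH4, P_H2), can be expressed in code to make the surface-blocking argument concrete. The sketch below is our own hypothetical parameterization, not the authors' fitted model: the rate constant, the power-law form of f, and the coverage values are invented for illustration.

```python
def cementation_rate(k: float, area_m2_per_g: float, theta: float,
                     p_ch4: float, p_h2: float) -> float:
    """Rate of reaction (1) per gram of sample, following the suggested form
    rate = k * A * (1 - theta) * f(P_CH4, P_H2). The choice f = P_CH4/P_H2**1.5
    is a hypothetical placeholder, not the authors' fitted function."""
    f = p_ch4 / p_h2**1.5
    return k * area_m2_per_g * (1.0 - theta) * f

# Surface-active species (O, S) raise the blocked fraction theta and throttle
# the rate even when the iron surface area A is essentially unchanged:
clean = cementation_rate(k=1.0, area_m2_per_g=7.1, theta=0.0, p_ch4=0.35, p_h2=0.55)
fouled = cementation_rate(k=1.0, area_m2_per_g=7.1, theta=0.6, p_ch4=0.35, p_h2=0.55)
print(f"fouled/clean rate ratio = {fouled / clean:.2f}")  # 0.40 for a 60% blocked surface
```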
SEM images of samples obtained in reduction/cementation at 750°C of pre-oxidised titanomagnetite for 30 min and of hematite ore for 15 min are shown in Fig. 10. Samples produced from both ores have a porous structure. In the sample obtained from the titanomagnetite ore, whiskers are clearly observed. Particles consist of porous cementite grains and unreduced oxides. These were identified by EDS to contain largely unreduced titanium oxides, CaO, MgO and other impurities. During pre-oxidation, the uniform titanomagnetite present in the original ore was transformed into titanohematite. This was confirmed by XRD in the present study, as well as by Park.23) In the core of the titanohematite, pseudobrookite was observed, which is enriched with titanium in comparison with the bulk material.16) After reduction, titanium was present mainly in the form of oxides dispersed through the metallic iron.17) Titanium was also detected in the cementite phase. Oxides in the iron ore affect the formation of cementite. Egashira et al.24) studied the effect of adding different gangue materials to iron oxide pellets on the formation of cementite. They found that CaO within the ore suppressed cementite formation. The hematite studied in this work contained 0.016 wt% CaO, while the ironsand, apart from titanium oxides, contained 0.67 wt% CaO as well as 2.94 wt% MgO. These oxides remain unreduced in the reaction of the titanomagnetite ore with the methane-hydrogen gas mixture. Oxygen, which is present in the system, is a surface-active element and decreases the fraction of the iron surface area available for adsorption of methane, slowing the formation of cementite.

Conclusion

Reduction of titanomagnetite ore and hematite ore by hydrogen-methane-argon gas mixtures was investigated in the range 600 to 1 100°C. Iron oxides were reduced by hydrogen to metallic iron, which was carburised by methane with the formation of cementite. Increasing the temperature from 600 to 1 000°C increased the rate of reduction, while a further increase to 1 100°C had little effect. Increasing the hydrogen content of the reducing gas up to 30 vol% increased the reduction rate; above this level, the effect of hydrogen on the reduction rate was slight. The methane content of the reducing gas had no effect on the rate of reduction but increased the deposition of free carbon. The hematite ore was reduced much more quickly than the titanomagnetite ore. However, preoxidation of the ironsand, in which the original titanomagnetite solid solution was transformed to titanohematite, increased the rate of reduction to a level close to that of the hematite ore. Iron oxide in the titanomagnetite ore was converted to cementite more slowly than in the hematite ore, despite the similar reduction rates. At 750°C, iron formed in the reduction of hematite ore was transformed to cementite in 12 min, while the transformation took 30 min for the pre-oxidised titanomagnetite. This was due to the chemical differences between the ores, namely the high concentration of titanium, calcium and magnesium oxides in the titanomagnetite ore.
The presence of oxygen from unreduced oxides may decrease the fraction of the iron surface area available for adsorption of methane, slowing the conversion of iron to cementite.
The physical parameters of neuronal cargo transport regulated by autoinhibition of kinesin UNC-104

In this review, we focus on the kinesin-3 family molecular motor protein UNC-104 and its regulatory protein ARL-8. UNC-104, originally identified in Caenorhabditis elegans (C. elegans), has a primary role in transporting synaptic vesicle precursors (SVPs). Although in vitro single-molecule experiments have been performed primarily to investigate the kinesin motor domain, these have not addressed the in vivo reality of the existence of regulatory proteins, such as ARL-8, that control kinesin attachment to and detachment from cargo vesicles, which is essential to the overall transport efficiency of cargo vesicles. To quantitatively understand the role of the regulatory protein, we review the in vivo physical parameters of UNC-104-mediated SVP transport, including force, velocity, run length and run time, derived from wild-type and arl-8-deletion mutant C. elegans. Our future aim is to facilitate the construction of a consensus physical model connecting SVP transport with pathologies related to deficient synapse construction caused by deficient UNC-104 regulation. We hope that the physical parameters of SVP transport summarized in this review become a useful guide for the development of such a model.

Introduction

Kinesin is a molecular motor protein that moves to the plus-end of polarized microtubules, a component of the cell cytoskeleton, by obtaining energy from adenosine triphosphate (ATP) hydrolysis. The role of kinesin in eukaryotic cells is mainly to deliver cargo vesicles anterogradely, acting as a porter along the microtubules spread throughout the cell. Kinesin plays a particularly significant role in neurons, where microtubules are used as railways to convey synaptic materials from the cell center to the terminal synaptic regions via a long axon (Hirokawa et al. 2009; Vale 2003). Physical models focusing on the motion of the kinesin motor domain and on how force is generated as a result of ATP hydrolysis have been suggested (Hancock 2016; Hancock and Howard 1999; Kanada and Sasaki 2013; Peskin and Oster 1995; Sasaki et al. 2018) based on the results of in vitro single-molecule experiments (Nishiyama et al. 2002; Okada et al. 2003; Schnitzer et al. 2000; Tomishige et al. 2002; Vale et al. 1996; Visscher et al. 1999). However, these models do not completely explain the phenomenon of in vivo cargo transport, for which physical evaluations, such as force measurements, remain under development due to the complexity of the intracellular environment. Because the roles of kinesins in neurons are important to many neuronal processes (Hirokawa et al. 2009), and kinesin deficiency is related to neuronal diseases such as Alzheimer's disease and Parkinson's disease (Chiba et al. 2014; Encalada and Goldstein 2014), physical models of in vivo cargo transport need to be studied in order to quantitatively understand the molecular basis of neuronal diseases. To this end, the in vivo measurements of physical parameters associated with cargo vesicle transport need to be summarized. A critical difference between in vivo and in vitro analyses of kinesin-mediated transport is the existence of regulatory proteins that control cargo transport (Fig. 1a). Because these regulatory proteins control the attachment and detachment of kinesin motors to and from cargo vesicles, a key physical parameter describing in vivo cargo transport is the "number" of kinesins cooperatively carrying a single cargo vesicle (Fig. 1b).
Notably, maintaining an appropriate number of motors per cargo vesicle is important for healthy neuronal activity. Recent studies proposed a measurement method to quantify the number of motors per cargo in vivo using the fluctuation unit (χ) (Hasegawa et al. 2019; Hayashi 2018; Hayashi et al. 2018a; Hayashi et al. 2018b). The fluctuation unit (χ), inspired by the fluctuation theorem of non-equilibrium statistical mechanics (Ciliberto et al. 2010; Evans et al. 1993; Seifert 2012), was used to quantify the force applied to cargo vesicles. Then, because the motor proteins generate the force acting on a cargo, the number of motor proteins can be estimated from the force values. The results of these experiments revealed that several motor proteins cooperatively carry a single cargo vesicle, with measurement of the fluctuation unit (χ) as a force indicator enabling investigation of cargo transport in relation to the number of involved motor proteins. In this review, we summarize the physical parameters, including the number of motors, derived from previous studies (Hayashi et al. 2018a; Niwa et al. 2016) in order to evaluate the regulatory systems involved in in vivo cargo transport. Specifically, we focus on synaptic vesicle precursor (SVP) transport in the motor neurons of Caenorhabditis elegans (C. elegans), which enables green fluorescent protein (GFP)-tagged SVPs to be visualized in the living organism by fluorescence microscopy. Therefore, this represents a true in vivo cargo-transport system appropriate for the measurement of kinesin-related characteristics in a complex biological environment. Additionally, this system allows evaluation of SVP anterograde transport by one kinesin (UNC-104), whereas mammalian neurons use several kinesins to transport SVPs. In the following sections, we introduce UNC-104 and its associated regulatory mechanisms in C. elegans motor neurons. We then present the results of physical measurements involving UNC-104-mediated SVP transport and compare them between wild-type (WT) and mutant C. elegans lacking a primary regulatory protein responsible for UNC-104 activity. Furthermore, we describe synapse mislocation caused by deficient UNC-104 regulation and present future perspectives, which focus on the development of a physical model to potentially explain synapse mislocation as a function of the basic physical parameters involved in UNC-104-mediated SVP transport.

UNC-104 function and regulation

Here, we describe UNC-104 function involving SVP transport along microtubules, as well as its autoinhibition, regulated by ARL-8. Additionally, we describe fluorescence observation of UNC-104-mediated SVP transport in axons to allow analysis of UNC-104 motion.

UNC-104 characteristics

UNC-104 is a member of the kinesin-3 family of motor proteins and was identified by genetic analysis of C. elegans. UNC-104-mediated SVP transport was determined in unc-104 mutant organisms, in which accumulation of synaptic vesicles was observed within cell bodies, accompanied by dysfunctional synapse localization (Hall and Hedgecock 1991; Otsuka et al. 1991). Previous studies show that the UNC-104 homologs KIF1A and KIF1B transport SVPs in mammalian neurons (Niwa et al. 2008; Okada et al. 1995; Zhou et al. 2001), and that their tail domains bind to cargos when they transport SVPs (Klopfenstein and Vale 2004; Niwa et al. 2008) (Fig. 2a and 2b). Although it was proposed that UNC-104/KIF1A could undertake processive movement as a monomer according to in vitro single-molecule experiments (Okada et al. 2003), the protein is considered to function as a dimer in vivo.
UNC-104 autoinhibition

Several studies have demonstrated UNC-104 inactivation via an autoinhibitory mechanism, which is required to avoid unnecessary cargo transport, as well as unnecessary ATP hydrolysis in neurons, in situations where kinesin motors do not bind cargo (Verhey and Hammond 2009). UNC-104 contains four coiled-coil (CC) domains (Fig. 2b), with the neck CC domain (NC) essential for formation of a stable dimer. The CC1 and CC2 domains negatively regulate the activity of the motor domain, and the CC3 domain binds the small GTPase ARL-8. A recent structural study shows that the CC1 domain directly binds to the motor and NC domains in order to inhibit dimerization and motility; however, it remains unknown how the CC2 domain negatively regulates motor activity. A recent study described UNC-104 autoinhibition (Niwa et al. 2016) (Fig. 3). ARL-8, an SVP-bound Arf-like small GTPase related to UNC-104, is essential for axonal transport of SVPs (Klassen et al. 2010) and directly binds to the UNC-104 CC3 domain in order to negate its autoinhibition (Niwa et al. 2016; Wu et al. 2013). Notably, ARL-8 is not required for UNC-104 binding to SVPs, according to autoinhibition-defective UNC-104 mutants, which retain vesicle-binding ability in the absence of ARL-8 (Niwa et al. 2016). Previous reports (Niwa et al. 2008; Wagner et al. 2009) suggest that cargo-vesicle-bound ARL-8 binds to UNC-104 and releases autoinhibition by enabling binding of the tail domain to the cargo vesicle. Consequently, cargo binding induces dimerization, leading to full activation of UNC-104.

Fluorescence observation of synapses and UNC-104-mediated SVP transport

SVPs in DA9 motor neurons of C. elegans can be observed by fluorescence microscopy. The C. elegans worms are anesthetized and fixed between cover glasses with highly viscous media prior to imaging in order to minimize the noise generated by fluctuations associated with their movement (Fig. 4a) (Hayashi et al. 2018a; Niwa 2017; Niwa et al. 2016). SVPs were labelled with GFP (Fig. 2a) (Niwa et al. 2016). Because many SVPs accumulate at synapses, where they are unloaded from UNC-104 motors, synapse locations can be identified as strong bright spots proximal to axonal terminal regions, while individual SVPs can be identified as weak spots moving along the axon (Fig. 4b).

Assumptions applied to the constant velocity segment (CVS)

In this section, we summarize the assumptions and the current model used to analyse the time courses of SVPs obtained by fluorescence observation (Fig. 4) (Hayashi et al. 2018a; Hayashi et al. 2018b). Time intervals exist during which a single SVP can be tracked (Fig. 5a, red arrow), despite axons being surrounded by numerous SVPs (Fig. 5a). The recorded images of fluorescence observations then allow acquisition of the center position (X) of an SVP along an axon as a function of time (t) for anterograde transport (Fig. 5b). Sharp changes in velocity are often observed in studies tracking cargo vesicles over longer time courses (Fig. 5c and 5d) and are typical of in vivo cargo transport. In this review, such time-dependent changes in velocity are mainly considered to result from changes in the number of motors carrying each individual cargo (Fig. 6a). Notably, this mechanism is possible through the repeated stochastic attachment and detachment of motors from microtubules.
Velocity change for a given cargo can then be explained by a change in the number of motors, based on the force-velocity relationship of kinesin (Fig. 6b), when the viscous drag is high enough. This suggests that long time courses should be divided into several constant velocity segments (CVSs) when the number of motors transporting a cargo is the quantity to be measured (Hasegawa et al. 2019; Hayashi 2018; Hayashi et al. 2018a; Hayashi et al. 2018b). Finally, we impose one more important assumption on CVSs: the absence of competition (tug-of-war) between kinesin and dynein (Gross 2004; Muller et al. 2008; Welte 2004), where dynein is a molecular motor protein that moves toward the minus-end of microtubules (Vale 2003). This is supported by a recent report suggesting that kinesins do not undergo a tug-of-war with dynein during their movement in opposite directions on the microtubule (Serra-Marques et al. 2019). Further study of this assumption is an important future issue.

Measurement of the physical parameters associated with SVP transport

This section describes the physical parameters involved in SVP transport measured in the previous reports (Hayashi et al. 2018a; Niwa et al. 2016), and their comparison between WT and arl-8-deletion mutant C. elegans (Niwa et al. 2016).

Force

In Fig. 5b, the SVP shows directional motion driven by UNC-104 while exhibiting fluctuating behaviour originating mainly from thermal noise, stepping of the motors, and collisions of the SVP with other vesicles and cytoskeletal elements. We quantify the force acting on the SVP from this fluctuating behaviour. For each CVS (Fig. 5b), the fluctuation unit χ (inspired by the fluctuation theorem, FT) is defined as

χ = ln[P(ΔX)/P(-ΔX)]/ΔX,   (1)

where ΔX = X(t + Δt) - X(t) is the displacement during a time interval Δt and P(ΔX) is its distribution. When P(ΔX) is fitted by a Gaussian function:

P(ΔX) = (2πa)^(-1/2) exp(-(ΔX - b)^2/(2a)),   (2)

where the fitting parameters a and b correspond to the variance and mean of the distribution, respectively, χ is calculated as

χ = 2b/a.   (3)

In Fig. 7a, χ is calculated from P(ΔX) for various intervals Δt from 10 to 100 ms. The converged value (χ*) is related to the drag force (F) acting on a cargo. Previous experiments (Hasegawa et al. 2019; Hayashi et al. 2018b) suggest that

χ* ∝ F.   (4)

The χ value for the transport of 40 SVPs in WT C. elegans (Fig. 7a, left) was compared with that of 40 SVPs in arl-8-deletion mutant C. elegans (Fig. 7a, right) (Hayashi et al. 2018a). When Eq. (4) is valid, the groups represented by the different colours in Fig. 7a can be regarded as force-producing units (FPUs). The experimental results show four FPUs for WT C. elegans as compared with three FPUs for mutant C. elegans (Fig. 7a). Note that when 1 FPU is considered to be a dimer of UNC-104, χ* ~ 0.05 nm⁻¹ for 1 FPU corresponds to ~5 pN, estimated from the stall force values of the UNC-104 dimer. Because the slight differences in χ* for each FPU (e.g., χ* ~ 0.05 nm⁻¹ for 1 FPU, χ* ~ 0.1 nm⁻¹ for 2 FPUs and χ* ~ 0.2 nm⁻¹ for 3 FPUs) observed between WT and mutant C. elegans indicated that arl-8 deletion did not affect the force generation of UNC-104, the decreased mean value of χ* in the mutant (Fig. 7b) was considered a consequence of the decreased number of FPUs transporting an SVP.

Number of motors

When 1 FPU is considered to be a dimer of UNC-104, the number of FPUs represents the number of UNC-104 dimers. Comparison of FPUs between WT and arl-8-deletion mutant C. elegans (Fig. 8a) (Hayashi et al. 2018a) indicated a decrease in the proportion of 3 FPUs, whereas that of 1 FPU increased in the case of the mutant, with a 20% decrease in the mean number of FPUs between WT and mutant (Fig. 8b). This suggested that the absence of ARL-8 decreased the number of active UNC-104 dimers capable of SVP transport.
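Equations (1)-(3) are straightforward to prototype. The following sketch is our own minimal implementation on simulated data, not the analysis code of the cited studies; the drift velocity, noise amplitude, and sampling interval are invented example values chosen so that the result lands near the 1-FPU level quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)

def fluctuation_unit(dx: np.ndarray) -> float:
    """Fluctuation unit chi = 2b/a (Eq. (3)), where b and a are the mean and
    variance of the Gaussian fitted to the displacement distribution P(dX)."""
    return 2.0 * dx.mean() / dx.var()

# Simulated CVS: drift v = 1000 nm/s sampled at dt = 0.1 s with Gaussian noise.
v, dt, sigma = 1000.0, 0.1, 60.0     # nm/s, s, nm -- illustrative values
dx = rng.normal(loc=v * dt, scale=sigma, size=2000)

print(f"chi ~ {fluctuation_unit(dx):.3f} nm^-1")   # ~2*100/60**2 ~ 0.056 nm^-1
# In the experiments, chi is evaluated at increasing dt until it converges to
# chi*, and the chi* values cluster into discrete levels read as 1, 2, 3 FPUs.
```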
Velocity

Comparison of the velocity during a CVS between WT and arl-8-deletion mutant C. elegans (n = 40 each) (Fig. 9) (Hayashi et al. 2018a) showed a slight reduction in the mutant relative to WT. Because both the force generated by each FPU (e.g., χ* ~ 0.05 nm⁻¹ for 1 FPU, χ* ~ 0.1 nm⁻¹ for 2 FPUs and χ* ~ 0.2 nm⁻¹ for 3 FPUs, shown in Fig. 7a) and the size of the SVPs being transported (Fig. 10) changed only slightly, the difference in velocity was assumed to result from the difference in the number of UNC-104 dimers transporting individual SVPs (Fig. 8). Note that velocity can depend on the number of motors in the model described in Fig. 6b.

SVP fluorescence intensity

The fluorescence intensity (FI) of SVPs differs from vesicle to vesicle, reflecting the different sizes of the SVPs (Fig. 10a). The distribution of FI compared between WT and arl-8-deletion mutant C. elegans (n = 40 each) (Hayashi et al. 2018a) suggests that SVP size was unchanged by arl-8 deletion (Fig. 10b).

Run length and run time

The decreased number of UNC-104 dimers transporting SVPs in the arl-8-deletion mutant C. elegans (Fig. 8) is linked to a decrease in the run length and run time (duration) of SVPs, defined as the persistence distance and time over which an SVP continues to move without stopping (Fig. 11a). Indeed, run length increases with the number of kinesins (Furuta et al. 2013). The run length (Fig. 11b and 11d) and run time (Fig. 11c and 11e) of the mutant C. elegans were shorter than those of the WT.

Pause duration

Pause duration represents the time interval from SVP detachment from a microtubule until its subsequent attachment to another microtubule (Fig. 11a). Compared between WT and arl-8-deletion mutant C. elegans (Niwa et al. 2016), pause duration was 2-fold longer in the mutant relative to WT (Fig. 12). Here, pauses in SVP movement are interpreted as detachment events from a microtubule (Fig. 11a), because the narrowness of the axon is supposed to inhibit SVP movement following detachment. Because the probability of re-attachment depends upon the number of UNC-104 dimers attached to an SVP, the pause duration should be affected by the molecular number of UNC-104, as should the run length and run time (Fig. 11b-11e).

Anterograde current

Anterograde current is defined as the frequency of UNC-104-mediated SVP anterograde transport (Niwa et al. 2016) and is diminished by 50% in the mutant relative to WT (Fig. 13).
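The run length, run time, and pause duration defined above can be extracted from a position trace with a simple velocity threshold. The sketch below is our own illustrative segmentation of a synthetic trajectory, not the pipeline used in the cited studies; the 100 nm/s threshold and the trace itself are arbitrary choices.

```python
import numpy as np

def runs_and_pauses(x_nm: np.ndarray, t_s: np.ndarray, v_thresh: float = 100.0):
    """Split a 1D trace into runs (|v| >= v_thresh, in nm/s) and pauses.
    Returns (run_length_nm, run_time_s) pairs and a list of pause durations.
    The threshold is an arbitrary illustrative choice."""
    moving = np.abs(np.gradient(x_nm, t_s)) >= v_thresh
    runs, pauses, i = [], [], 0
    while i < len(moving):
        j = i
        while j < len(moving) and moving[j] == moving[i]:
            j += 1  # advance to the end of the current run/pause segment
        if moving[i]:
            runs.append((abs(x_nm[j - 1] - x_nm[i]), t_s[j - 1] - t_s[i]))
        else:
            pauses.append(t_s[j - 1] - t_s[i])
        i = j
    return runs, pauses

# Synthetic trace: a 2 s run at 1000 nm/s, a 2 s pause, then a second run.
t = np.arange(0.0, 6.0, 0.1)
x = np.where(t < 2, 1000 * t, np.where(t < 4, 2000.0, 2000.0 + 1000 * (t - 4)))
runs, pauses = runs_and_pauses(x, t)
print("runs:", [(round(d), round(s, 1)) for d, s in runs],
      "pauses:", [round(p, 1) for p in pauses])
```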
Measurement of physical parameters associated with synapse construction

Here, we summarize the results of physical measurements associated with synapse construction in DA9 motor neurons of C. elegans (Niwa et al. 2016) (Fig. 14). Explaining these results from the physical parameters of UNC-104-mediated SVP transport via an appropriate physical model is a future issue.

Distance from the cell body to synaptic regions

Fluorescence micrographs of the synaptic regions of DA9 motor neurons show that synapse location differs between WT and arl-8-deletion mutant C. elegans (Fig. 14a). Moreover, the distance from the cell body to the synaptic region (measured for this review based on Ref. (Niwa et al. 2016)) is shorter in the mutant than in the WT (Fig. 14b). Additionally, it was reported that the synapses in the arl-8-deletion mutant worms localize in dendrites, as well as in the axons, of DA9 motor neurons (Niwa et al. 2016).

Distance between synaptic puncta

Noting that the distance between synaptic puncta (Fig. 14a) characterizes synapse construction (Niwa et al. 2016), measurement of the distance between synaptic puncta in the neurons of WT and arl-8-deletion mutant C. elegans revealed a shorter distance in the mutant worms relative to that in WT worms.

Discussion and perspective

This review summarizes the physical parameters associated with UNC-104 transport of SVPs in DA9 motor neurons of C. elegans (Fig. 2a). We focused on the mechanisms by which UNC-104 autoinhibition is released via ARL-8 (Niwa et al. 2016) (Fig. 3). The parameters measured in the previous studies (Hayashi et al. 2018a; Niwa et al. 2016) revealed the physical aspects of SVP transport, including force (Fig. 7), the number of UNC-104 dimers carrying an SVP cargo (Fig. 8) and velocity (Fig. 9). We compared the parameters between WT and a deletion mutant of the UNC-104 regulatory gene arl-8. The results suggested that SVP-transport ability is weakened in the absence of ARL-8 (Figs. 7-13). Additionally, we compared the physical parameters characterizing synapse construction in DA9 motor neurons between WT and mutant C. elegans (Fig. 14), with changes in these quantities indicating that UNC-104 autoinhibition is related to synaptic localization in these neurons (Niwa et al. 2016). Future construction of a physical model capable of quantitatively explaining these changes in synapse construction (Fig. 14) will be based on the physical parameters associated with UNC-104 transport of SVPs (Figs. 7-13). We hope that this review provides a useful guide for the development of this model. Furthermore, this review addresses an issue reported at the Asian Biophysics Association Symposium 2018, specifically that defective kinesin autoinhibition is related to hereditary spastic paraplegia and that this affects certain physical parameters associated with SVP transport; therefore, this review provides information relevant to human disease. The current findings suggest that the physical parameters associated with SVP transport are useful for understanding the molecular basis of neuronal diseases related to defective motor proteins.

Figure captions (recovered fragments)

Figs. 5 and 6 (fragments): GFPs are attached to each SVP, allowing fluorescence-based tracking (Niwa et al. 2016); the micrograph of the C. elegans worm was acquired using the experimental setup described in Fig. 4a; when the number of motors carrying the cargo is changed, the velocity can change in this model.

Fig. 7 Force quantified by using the fluctuation unit (χ) (Eq. (3)). (a) χ as a function of Δt for WT C. elegans (left) and for the arl-8-deletion mutant C. elegans (right) (Hayashi et al. 2018a). 40 SVPs were investigated for each case. χ converges to the constant value χ* as Δt becomes large. (b) The mean value of χ* is compared between WT and arl-8-deletion mutant C. elegans. On the right y-axis, the approximate force value is shown as a reference, noting that χ* is converted to force using the stall force value of the UNC-104 dimer obtained in single-molecule experiments. The error bars represent the standard error (n = 40 for each).

Fig. 8 (a) From the results shown in Fig. 7a, the population of each FPU was calculated. The population was investigated for WT C. elegans (n = 40) (left) and for the arl-8-deletion mutant C. elegans (n = 40) (right). (b) The mean number of FPUs is compared between WT (left) and arl-8-deletion mutant C. elegans (right). In both cases, about two motors carry a cargo together on average. The error bars represent the standard error (n = 40 for each).

Fig. 9 Mean velocity at constant velocity segments (CVSs) (Hayashi et al. 2018a). The mean value of velocity is compared between WT (n = 40) (left) and arl-8-deletion mutant C. elegans (n = 40) (right). The error bars represent the standard error (n = 40 for each).
Fig. 11 (fragment) Distributions of run length (b) and run time (c) are investigated for WT C. elegans (n = 400) and the arl-8-deletion mutant C. elegans (n = 400), respectively. Mean run length (d) and mean run time (e) are compared between WT and arl-8-deletion mutant C. elegans. The error bars represent the standard error (n = 400).

Fig. 12 Pause duration of SVPs (Niwa et al. 2016). The definition of pause duration is described in Fig. 11a (left). Pause duration is compared between WT and arl-8-deletion mutant C. elegans. The error bars represent the standard error (n = 512 for WT, n = 132 for the mutant).

Fig. 13 Anterograde current of SVPs (Niwa et al. 2016). Anterograde current is defined as the duration of anterograde runs per second. Anterograde current is compared between WT and arl-8-deletion mutant C. elegans. The error bars represent the standard error.

Fig. 14 Physical measurements on synapses. (a) Fluorescence micrographs of the DA9 motor neurons for WT C. elegans (left) and the arl-8-deletion mutant C. elegans (right). (b) The distance from the cell body to the synaptic region is compared between WT and arl-8-deletion mutant C. elegans (n = 10 for each). Note that these data were newly added for this review based on Ref. (Niwa et al. 2016). The error bars represent the standard error. (c) The distance between synaptic puncta is compared between WT and arl-8-deletion mutant C. elegans (Niwa et al. 2016).
Unveiling the Dual Threat: How Microbial Infections and Healthcare Deficiencies Fuel Cervical and Prostate Cancer Deaths in Africa

Cervical and prostate cancer accounted for 7.1 and 7.3 deaths per 100,000 people globally in 2022. These rates increase markedly to 17.6 and 17.3 in Africa, respectively, making them the second and third leading causes of cancer deaths in Africa, surpassed only by breast cancer. The human papillomavirus is the prime risk factor for cervical cancer. On the other hand, prostate cancer risk factors include ageing, genetics, race, geography, and family history. However, these factors alone cannot account for the high mortality rate in Africa, which is more than twice the global mortality rate for the two cancers. We searched PubMed, Embase, Scopus, and Web of Science to select relevant articles using keywords related to microorganisms involved in cervical and prostate cancer and the impact of poor healthcare systems on the mortality rates of these two cancers in Africa, carrying out a detailed synopsis of the studies on the microbial agents involved and the contributory factors to the deteriorating healthcare system in Africa. It became apparent that the developed countries come first in terms of the prevalence of cervical and prostate cancer. However, more people per capita in Africa die from these cancers than on other continents. Also, microbial infections (bacterial or viral), especially sexually transmitted infections, cause inflammation, which triggers the pathogenesis and progression of these cancers among the African population; this has been linked to the region's deficient health infrastructure, which makes it difficult for people with microbial infections to access healthcare and hence makes infection control and prevention challenging. Taken together, untreated microbial infections, primarily sexually transmitted infections, together with the deficient healthcare systems in Africa, are responsible for the high mortality rate of cervical and prostate cancer.

Introduction

Cancer-related mortality remains a worrying concern in public health, accounting for approximately 10 million deaths annually [1]. Of all cancer-related deaths, cervical cancer (CC) and prostate cancer (PCa) are the sixth and fifth leading causes of cancer mortality globally but the third and second in Africa, respectively [1]. Except for breast cancer, CC and PCa cause more deaths than other cancer types in Africa [2]. In the global context, CC and PCa accounted for 7.1 and 7.3 deaths per 100,000 people in 2022 [1]. Several risk factors account for the etiology of these cancers. These factors include microbial infections, such as the human papillomavirus (HPV), with which CC is primarily associated. At the same time, age, race, geography, family history, and genetics are considered the predominant causative factors of PCa [3,4]. Despite reductions in incidence and mortality rates for these cancers in some developed countries, the picture looks different in Africa. Even more worrying is that the mortality rates rose to 17.6 and 17.3 per 100,000 in Africa, respectively, exceeding the world mortality rates by more than 2-fold (CC: 2.5-fold; PCa: 2.4-fold) (Figure 1a,b) [1].
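The fold differences quoted above follow directly from the cited GLOBOCAN 2022 rates; the snippet below (purely illustrative arithmetic) reproduces them.

```python
# Mortality per 100,000 (GLOBOCAN 2022 figures cited in the text): (Africa, world)
rates = {"cervical cancer": (17.6, 7.1), "prostate cancer": (17.3, 7.3)}

for cancer, (africa, world) in rates.items():
    print(f"{cancer}: Africa is {africa / world:.1f}-fold the global mortality rate")
# cervical cancer: 2.5-fold; prostate cancer: 2.4-fold -- matching the text.
```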
Therefore, the driving forces that orchestrate the high mortality of patients with these cancers in Africa should be re-examined. In this narrative review, we searched PubMed, Embase, and Web of Science to select relevant articles using keywords such as 'cervical cancer' AND 'prostate cancer' AND bacteria AND virus AND Africa AND 'healthcare system' AND mortality. We also synthesized data from the World Health Organization's (WHO) website, together with the relevant literature, to elucidate the mechanisms linking microbial infections, poor healthcare systems, and the high mortality rates of CC and PCa in Africa. In the first section of this review, we briefly highlight the pathophysiology and epidemiology of CC and PCa. The second section discusses the microorganisms involved in the pathogenesis and mortality of these two cancers. The third section discusses the factors that contribute to the poor healthcare system in Africa, how they affect health delivery, and how they link to the high mortality of CC and PCa in Africa.

Prostate Cancer

The prostate gland, an androgen-stimulated organ whose secretion forms part of the semen, is the target site of PCa, and mutations in the glandular cells that constitute the prostate may orchestrate nodule formation and PCa [5]. This tumor may spread to the bone or lymph nodes, or remain inside the prostate or close to nearby prostatic tissue [6]. Most PCa cases are diagnosed as localized illness, which is typically asymptomatic [7]. In these circumstances, abnormal prostate-specific antigen (PSA) levels and an abnormal digital rectal exam (DRE) may be the earliest indicators of malignancy, thus providing a chance for prompt intervention [8]. Nonspecific lower urinary tract symptoms associated with PCa include nocturia, hematuria, dysuria, and sexual dysfunction. Also, bone pain, most commonly in the vertebrae, pelvic region, ribs, or proximal femur, erectile dysfunction, weight loss, urine retention or incontinence, and weakness, among other symptoms, may be experienced by patients with metastatic PCa [9].
PCa is ranked the second most common cancer among men and affects one in eight men in their lifetime, per the WHO Global Cancer Observatory (GLOBOCAN) data [1]. The prevalence of metastatic PCa at presentation ranged between 6.3% and 8% [10]. Furthermore, around 15% of patients with localized PCa at presentation who were treated curatively advanced to metastatic disease [11]. The significant risk factors of PCa include age, race, genetics, geography, and family history [3,12]. The above risk factors are 'clear risk factors' according to the American Cancer Society. However, diet, obesity, smoking, chemical exposure, STIs, and vasectomy are considered 'less clear risk factors' of PCa [13]. Men in the age range of 50-70 have a high likelihood of developing PCa. Moreover, elderly white US adults between the ages of 75 and 79 have more than 100 times the chance of developing PCa relative to 45-49-year-olds [4]. In terms of race and geographic location, Australia, Europe, and North America have the highest incidences of prostate cancer, i.e., the top three globally [1].

In contrast, the lowest incidences are found in Africa, Asia, Latin America, and the Caribbean [1]. Genetically, mutations in BRCA1 or BRCA2 pose a risk. Men whose androgen receptor gene contains a polymorphic region of CAG repeats fewer than 18 in length have a higher likelihood of developing prostate cancer compared to those with 26 repeats or more; this explains why black Americans and people of African origin have high PCa risks [13-15]. In the family tree, men whose close relatives, such as a father, brother, or son, have cancer of the prostate also have a higher chance of developing it [15].

On a global scale, from 2000 to 2022, the mortality of PCa declined but with an increasing incidence rate, probably because of PSA screening and treatment [1]. However, the risk of mortality is higher among Africans affected by CC and PCa [2]. As shown in Figure 1b, the GLOBOCAN data for 2022 showed that the mortality of CC and PCa in Africa is more than double the global rate (Figure 1a,b). Also, it has been conclusively reported that PCa might be underreported or underdiagnosed in Africa; however, its incidence and mortality are still a serious public health concern [16]. Epidemiological and surveillance studies also reveal a high mortality rate of PCa among black people relative to their white counterparts, as shown by a comparative study between black people in West Africa and America, which confirmed a similar PCa incidence rate [17].

In this section, the evidence reviewed suggests that the developed countries of Europe, Australia, and North America have the highest morbidity of PCa. Surprisingly, the mortality rate tells a different story, with lower-income regions such as Africa having the highest death rates. Screening for early detection, coupled with the treatment options currently available, such as surgery, radiotherapy, hormone therapy, chemotherapy, and immunotherapy, among others, can help reduce the risk of PCa and consequently reduce the number of deaths [13].
Cervical Cancer

On the other hand, CC is a disease of women and is the fourth most common cancer affecting women [18]. It affects the cervix, specifically the squamocolumnar junction, consisting of reserve cells above the basement membrane and the cervical epithelium, which are susceptible to malignant HPV [19]. HPV is a double-stranded DNA virus with over 450 genotypes, and almost all cancer cases contain 1 of 13 malignant genotypes of HPV [20]. Approximately 150 types have been characterized, and HPV is considered the most significant risk factor for CC [21]. However, factors such as smoking, sexual history, chlamydia infection, a weak immune system, use of oral contraceptives, multiple full-term pregnancies, a diet low in fruit or vegetables, taking diethylstilbestrol, and having a family history of CC are also regarded as potential risk factors [13].

The marker of HPV infection is a growth (warts), referred to as papilloma, on the surface of the anus, genitals, mouth, and throat; hence, HPV can spread by skin contact through oral, vaginal, or anal sex [13]. Genital warts are non-carcinogenic; thus, they are classified as a 'low-risk' HPV type, caused by types 6, 11, 42, 43, and 44 [21]. However, the carcinogenic HPV strains (16, 18, 31, 33, 34, 35, 39, 45, 51, 52, 56, 58, 59, 66, 68, and 70) can transform tissues in the cervix into malignant tissues after initial HPV infection; this leads to lesions called intraepithelial neoplasia [22,23]. Further, HPV genotypes also exhibit regional variability. For instance, genotype 35 has been linked to a high risk of CC among individuals of African descent as compared to other racial groups [24].

Irrespective of the genotype, any form of HPV infection should be considered virulent until confirmed otherwise. An HPV infection can be active without visible or microscopic alterations of the cervix and mostly disappears in 1 to 2 years as a result of suppression by the host immune system or biological unfitness of the virus [25]. Despite evidence of immunity against secondary infection, immunity after natural infection is yet to be fully understood. In contrast, the HPV vaccine can provide approximately 90% immunity against the disease, lasting about 15 years [26].

CC is a complicated disease, and HPV is its main initiating factor. Thus, understanding the intricate interplay of the virus, host genetics, and the cellular processes involved in transformation is crucial to developing effective preventive, diagnostic, and therapeutic strategies.

Just like PCa, the morbidity and mortality of CC in Africa are on an upward spiral, with about 34 out of every 100,000 women infected and about 70% mortality (relative to approximately 5% in developed countries) [22]. Screening and vaccination are effective in preventing CC. Available treatment modalities currently include surgery, radiotherapy, immunotherapy, chemotherapy, and targeted drug therapy [13,27].

The incidence and mortality of CC in Africa is approximately 80%. However, none of the evidence reviewed above indicates the cause of the high mortality rate; hence, further interrogation is needed. The following section discusses the microorganisms involved in CC and PCa.
Microbial Agents and Cancer

Aging, geography, ethnicity, and genetics, among others, as already noted above, are particularly predominant factors that predispose a person to cancer. However, microbial pathogens such as bacteria and viruses are also implicated in cancers of the prostate and cervix. Approximately 20% of cancer incidence is caused by infectious agents, and a good number of studies have shown the presence of bacterial and viral agents such as Escherichia coli, Cutibacterium acnes, Neisseria gonorrhoeae, HPV, Herpes simplex virus, Epstein-Barr virus, and Mycoplasmas in these cancers [28,29]. Lawson et al. (2022) concluded from their pooled studies that, except for HPV, the microorganisms mentioned earlier have suspected roles in PCa oncogenesis that are yet to be proven. They also hinted at the potential but unknown roles of Cytomegalovirus, Chlamydia trachomatis, Trichomonas vaginalis, and Polyomaviruses in chronic inflammation of the prostate [28]. Herpes Simplex Virus types 1 and 2, HPV, Human Herpes Virus 8, Cytomegalovirus, and Hepatitis C virus (HCV) have been found in the biopsies of CC and PCa patients [30]. Also, about 90% of all CC cases are instigated by HPV [31].

Most of the microbial agents involved in cancer pathogenesis are usually part of the normal microflora, involved in maintaining good gut health, that become virulent due to a breach in a body surface, immune weakness, or incomplete antimicrobial therapy [32,33]. Even though a harmonious balance is expected to exist between the host and its microbiota, the relationship between these active organisms and urogenital health has yet to be determined [32]. Notwithstanding that, various agents, for example drugs, environmental factors, and exogenous pathogenic bacteria, may upset the balance and eventually instigate multiple disease conditions, including cancer [33]. Thus, the habitat of human-dwelling microbes, including the biotic and abiotic components constituting the microbiome, could directly or indirectly influence the various stages of cancer, either at the site of carcinogenesis or by regulating changes in metabolism and immunity [34].

The effects of these infectious microorganisms on cancer patients can be adverse if not treated. Research reports show that close to 90% of cancers in developed countries are diagnosed before they become uncontrollable, as compared to about 30% in developing countries, owing to efficient and underdeveloped healthcare systems, respectively [2,35]. Therefore, transmitted microbial infections tend to lead to CC and PCa if immediate and prompt treatment is not provided. Further details of the specific bacteria and viruses commonly implicated in CC and PCa pathogenesis and spread are discussed below.

Bacterial Species in Cervical and Prostate Cancers

Even though the role of microbial agents in cancer etiology has long been acknowledged, it took some time before the idea became established [31]. Tissue culture of the prostate from about 64 men undergoing prostatectomy identified more than 85 microorganisms noted to be involved in chronic inflammation of the prostate in studies carried out between 2005 and 2010, where Propionibacterium spp. were reported to be the most prominent strains [36,37]. Caini et al. (2014) observed that people who have previously been infected with gonorrhea have an approximately 20% increased risk of developing PCa and that men with PCa had bacteria in their prostate tissues, with Neisseria gonorrhoeae being the most common bacterial species identified [30].
(2014) observed that people previously infected with gonorrhea have a 20% higher risk of developing PCa, and that men with PCa had bacteria in their prostate tissues, with Neisseria gonorrhoeae being the most common bacterial species identified [30]. Other studies using 16S rDNA sequencing and PCR to examine the urine and prostate biopsies of PCa patients established the presence of Bacteroides massiliensis, Streptococcus, Bacteroides spp., Corynebacterium, Staphylococcus, Pseudomonas, Escherichia coli, Acinetobacter, Helicobacter pylori, Gardnerella vaginalis, and Propionibacterium in the samples [36,38,39]. These microbes could initiate or promote PCa progression through inflammation.

On the other hand, the bacterial species involved in CC include Lactobacillus, Campylobacter, E. coli, Klebsiella pneumoniae, Enterococcus faecalis, Proteobacteria, Enterobacter cloacae, Pseudomonas aeruginosa, Morganella morganii, and Enterobacter aerogenes [40,41]. Also, bacterial genera such as Sneathia, Gardnerella, Atopobium, Prevotella, Ureaplasma, Bacteroides, and Leptotrichia, which exist as part of the gut or vaginal microbiome, have been identified at higher levels in patients with cervical lesions [32]. All of these bacteria increase the risk of CC as well as PCa. They can also serve as biomarkers for identifying ulcerations in the cervix and for identifying HPV and CC risk in women [42].

Even though some reports have ruled out microbial involvement in PCa other than N. gonorrhoeae, in the face of this overwhelming evidence it is sound to infer that these bacteria (especially sexually transmitted infection (STI) strains) play a crucial role in PCa progression and mortality. Other studies reported that Chlamydia trachomatis and Trichomonas vaginalis could also be involved in CC and PCa. Evidence suggests that STIs can increase the risk of PCa by inducing chronic inflammation within the prostatic tissue, thereby driving uncontrolled cell proliferation and consequently carcinogenesis [29,43]. In addition, there have been suggestions that a history of multiple or untreated STIs could raise the likelihood of PCa development [44]. Also, some studies established that Mexican, American, and Asian people with a history of STIs are more prone to developing PCa than those without a previous STI history. Whilst Cheng et al. (2010) expressed a high probability of STI involvement in PCa oncogenesis, Vazquez-Salas et al. (2016) explicitly stated that people with a history of gonorrhea infection are two times more likely to develop cancer of the prostate [45,46].

Moreover, one study observed that E. coli, Klebsiella pneumoniae, Enterococcus faecalis, Proteobacteria, Enterobacter cloacae, Pseudomonas aeruginosa, Morganella morganii, and Enterobacter aerogenes were found in varying percentages in the discharges of CC patients, with E. coli being the dominant microbe (approximately 62.92%) [40]. The authors concluded that E. coli and HPV coinfection contributes to CC development. Thus, STIs such as gonorrhea, syphilis, Trichomonas vaginalis, and Chlamydia trachomatis are among the dominant agents noted for initiating and facilitating CC and PCa carcinogenesis and metastasis.
The high prevalence of these pathogenic bacteria in CC and PCa patients might be the driving force behind inflammation of the cervix and prostate, resulting in disease progression and severity, respectively [29,43]. Recent work on chronic inflammation asserts that microorganisms are implicated in the pathogenesis and progression of cancer, since inflammation drives about 20% of cancer incidence [28]. Others believe that some microbes, such as H. pylori, interfere with cell-cycle regulation, resulting in the uncontrolled proliferation characteristic of all cancers and enabling PCa tumorigenesis [47]. Also, downregulation of immune-associated genes and suppression of immune cell expression are mechanisms by which Gardnerella vaginalis induces cancer [33].

The inflammation, cell-cycle disruption, and immune downregulation caused by microbes can increase the risk of mortality. This spike in mortality cannot be attributed to bacteria alone; viruses also play a role in cancer pathogenesis and mortality, as discussed in the next section.

Viruses Implicated in Cervical and Prostate Cancer

As with bacteria, several viral strains have been associated with CC and PCa, namely Herpes Simplex Virus types 1 and 2, Human Herpes Virus 8, HPV, Cytomegalovirus, and Polyomaviruses [28,48,49]. More than 90% of all CC cases have been reported to be caused by HPV [49].

The primary mechanism by which these viral pathogens drive the pathogenesis and progression of CC and PCa is the induction of inflammation in the cervix and prostate, which may destroy immune cells and stifle their ability to check and destroy abnormal cell growth. These cells then grow out of control and become malignant, leading to fatality [29,50]. Further, Gao et al. (2023) reported that the F-box protein FBXO22 enhances HPV-linked CC proliferation and reduces autophagy by blocking liver kinase B/AMPK signaling [51]. The fatalities connected with viral sources of these cancers are unusually high in developing regions such as Africa, which are still battling infection prevention. More details on the viruses involved in the two cancers under consideration are summarized in the reviews in [52,53].

Other Factors Involved in Cervical and Prostate Cancers

Apart from the factors noted above, other risk factors for CC and PCa include smoking, long-term use of hormonal contraceptives, poor diet, immunosuppression, promiscuity, and HIV infection [49,54]. Among these, smoking has been linked to HPV infection and progression, resulting in high CC incidence [55]. Related to smoking are environmental agents, including chemicals contained in cigarette smoke such as coal tar, smoke inhaled from burning wood, and tar-based sanitary pads, which induce signaling pathways conducive to HPV-related cervical carcinogenesis [56]. The WHO has established that most inhabitants of developing regions, including much of Africa, depend on crop residue, wood, and animal dung for cooking and heating. Since biomass stoves are noted for emitting bio-carcinogens, women exposed to smoke from these stoves stand a higher chance of developing CC [57,58].
Further evidence suggests that these cancers have other, less obvious etiological factors. For instance, certain occupations, namely military/law enforcement, farming, management, administrative jobs, public safety, and night shift work, as well as toxic substances in some work environments, may carry elevated risks of PCa [52,59,60]. Additionally, people employed in managerial and military occupations are at 2 and 3 times the risk of developing PCa overall and aggressive PCa, respectively, compared with the usual occupations of Ghanaian men, a population with very low rates of PSA screening [59]. However, some of these studies are limited to individuals of European ancestry, who have high rates of PSA screening, which can bias results. In brief, hormone-based contraceptives, sexual promiscuity, HIV infection, smoking, exposure to smoke, and working in specific jobs can also increase the risk of CC and PCa and may thus be partly responsible for deaths from these cancers.

Factors That Militate against Good Healthcare Delivery in Africa

Poor healthcare systems in Africa have negatively impacted healthcare delivery efforts. One contributory factor is the lack of capacity to control infections due to inadequate infrastructure, such as unreliable electricity supply and running water and insufficient sanitation measures [61]. Water is a sine qua non for infection control and the efficient operation of healthcare facilities [62]. Hence, an erratic water supply can undermine efforts to control infections, making it difficult to provide adequate patient care and to implement infection control measures such as proper disposal of clinical waste and hand sanitization [63].

Similarly, the lack of sustainable power is another infrastructural challenge behind poor healthcare management in Africa. An analysis of healthcare service provision and access to energy in 2012 and 2013 in Senegal showed that less than 50% of health facilities had access to electricity, with 18% and 3% of facilities using fuel-powered generators and solar systems, respectively [64].

In addition, other factors, such as insufficient budgetary allocation, scant human resources, poor management and leadership, and corruption, have contributed to the continent's inefficient healthcare plight [65]. Inadequate budgets for health facilities cause shortages of resources, such as medical supplies and equipment, in many African hospitals. For instance, poor infrastructure and limited resource availability, common in the low- and middle-income countries (LMIC) to which most of Africa belongs, have been linked to an increased burden of CC [35]. An estimated 85% of worldwide CC cases occur in LMIC, with a death rate of 70%, compared with about 5% in high-income countries. The WHO has grouped most parts of Africa (Eastern, Southern, and Middle) among the high-risk regions for cervical cancer [66]. Similarly, Uganda, in Africa, has the highest PCa mortality rate, and Sub-Saharan Africa leads globally in cancer-related deaths, among which are cervical and prostate cancer [2,66].

Further, there is a shortage of trained healthcare workers, epidemiological expertise, and resources for research, which has also been partly responsible for the current status quo of health in Africa [67]. This makes it challenging to provide adequate supervision and training in infection control [65].
Moreover, low health insurance coverage, especially for the poor and vulnerable, is another significant factor that makes Africa's healthcare systems less effective [68]. In a study by Barasa et al. (2021), only four countries in Sub-Saharan Africa (Rwanda, Ghana, Gabon, and Burundi) had health insurance systems with more than 20% coverage. They also reported that most subscribers were from wealthier backgrounds [69].

It is safe to infer that financial constraints result in inadequate budgetary allocation and infrastructure shortages, which affect the supply of necessary resources such as power and water for health facilities. Also, poor management and leadership in government and health facilities open the door to corruption. The resulting depletion of already meager resources makes it challenging to fulfill most of these obligations, including providing health insurance for people experiencing poverty. All of this consequently contributes to an increase in the disease burden, including cancer. An increase in the incidence of these cancers might not by itself lead to more deaths; however, when the necessary preventive and control measures are lacking because of the factors discussed above, mortality rises. For instance, some reports show that only close to 30% of all cancer cases in Africa are detected before they reach a point where treatment is no longer possible [35].

Even though a few areas of Africa may have relatively good medical facilities, sociocultural beliefs can hinder access to healthcare. For example, fear, embarrassment, and lack of support from spouses are impediments to cervical screening, as reported by Srinath et al. (2023) in a study of the barriers to CC and breast cancer screening in LMIC. They assessed availability, approachability, acceptability, affordability, awareness, and appropriateness, and found that the significant obstacles to screening were lack of awareness, the high cost of screening, and the distance from women's places of dwelling to screening centers [70]. They emphasized the need to understand the risk factors and to improve confidence in the health system. The differences in the effectiveness of intervention programs, such as screening and vaccination, between developed and developing countries make preventable HPV-induced CC challenging to control in the latter [71].

One potential factor linking microbial infection, CC and PCa, and poor healthcare systems in Africa is STIs, as noted in the section on infectious agents in cancer. In Africa, STIs are prevalent due to multiple factors, including lack of health education, limited access to healthcare, and cultural factors that discourage people from talking about sexual health [72]. Further, lack of access to preventive measures such as testing and vaccination (for instance, against HPV), as well as limited treatment options, can exacerbate the problem [73]. Poor healthcare delivery in Africa can also lead to delays, misdiagnoses, and inadequate treatment of STIs and cancers, culminating in poorer health outcomes for affected individuals and, ultimately, death, as illustrated in Figure 2.
Together, inadequate human resources, low budgetary allocation, poor infrastructure, lack of health education, and bad leadership contribute to a flawed healthcare system that is incapable of providing necessary services such as infection control; this results in increased infection rates contributing to CC and PCa and an elevated mortality rate. Microbial infection can also become rampant and contribute to the increase in disease incidence, leading to more CC and PCa cases and, eventually, more deaths.
Conclusions

Considering these insights, it is reasonable to conclude that microbial infections, primarily STIs, are the main drivers of the high mortality rates of cervical and prostate cancer in Africa. Poor healthcare systems worsen the condition of patients with these cancers, since inadequate health facilities preclude timely diagnosis and treatment. These findings imply that combating microbial disease can reduce the number of deaths associated with CC and PCa, which is only feasible by providing proper healthcare through an improved medical care system. Therefore, improving access to preventive measures and testing, improving health education, and strengthening healthcare systems through increased budgetary allocation, reliable power and water supplies to health facilities, and good leadership are imperative. Together, these steps will mitigate the impact of microbial infections and reduce the incidence and mortality associated with CC and PCa.

Figure 1. The distribution of global cancer incidence and mortality rates in 2020 (a), and the distribution of cancer incidence and mortality rates in Africa in 2020 (b). Numbers are given per 100,000 people. (Data source: [1]).

Figure 2. A scheme linking contributory factors to poor healthcare systems and to increases in CC and PCa incidences and mortality.
Machine learning predicts clinically significant health related quality of life improvement after sensorimotor rehabilitation interventions in chronic stroke

Health related quality of life (HRQOL) reflects an individual's perceived wellness across health domains and often deteriorates after stroke. Precise prediction of HRQOL changes after rehabilitation interventions is critical for optimizing stroke rehabilitation efficiency and efficacy. Machine learning (ML) has become a promising outcome prediction approach because of its high accuracy and ease of use. Incorporating ML models into rehabilitation practice may facilitate efficient and accurate clinical decision making. Therefore, this study aimed to determine whether ML algorithms could accurately predict clinically significant HRQOL improvements after stroke sensorimotor rehabilitation interventions and to identify important predictors. Five ML algorithms were used: the random forest (RF), k-nearest neighbors (KNN), artificial neural network, support vector machine, and logistic regression. Datasets from 132 people with chronic stroke were included. The Stroke Impact Scale was used for assessing multi-dimensional and global self-perceived HRQOL. Potential predictors included personal characteristics and baseline cognitive/motor/sensory/functional/HRQOL attributes. Data were divided into training and test sets. A tenfold cross-validation procedure with the training data set was used for developing models. The test set was used for determining model performance. Results revealed that RF was effective at predicting multidimensional HRQOL (accuracy: 85%; area under the receiver operating characteristic curve, AUC-ROC: 0.86) and global perceived recovery (accuracy: 80%; AUC-ROC: 0.75), and KNN was effective at predicting global perceived recovery (accuracy: 82.5%; AUC-ROC: 0.76). Age/gender, baseline HRQOL, wrist/hand muscle function, arm movement efficiency, and sensory function were identified as crucial predictors. Our study indicated that RF and KNN outperformed the other three models at predicting HRQOL recovery after sensorimotor rehabilitation in stroke patients and could be considered for future clinical application.

Clinicians therefore need to anticipate whether an intervention will benefit an individual patient based on his/her responses to that rehabilitation therapy. Building accurate prediction models for forecasting patients' HRQOL improvements after rehabilitation interventions, and identifying predictors relevant to HRQOL improvements in stroke patients, are thus imperative for giving healthcare professionals the insight needed to make accurate clinical decisions.

Machine learning (ML) has become a popular prediction analytic approach. Machine learning uses automated computerized algorithms to discover patterns in data and builds prediction models to forecast future events. Machine learning is particularly suitable for predicting health outcomes because it can process large volumes of data, analyze the complex relationships among many different features/variables, and easily incorporate new variables into prediction models without re-adjusting preprogrammed rules 6. In addition, a feature selection procedure can be incorporated into machine learning pipelines to help identify important predictors 7. These advantages make machine learning a potentially ideal tool for realizing accurate outcome prediction in patient populations.
In stroke, machine learning has been used primarily for predicting motor and activities of daily living (ADL) recovery and has achieved overall positive results [8-12]. However, to our knowledge, only one study to date has applied machine learning algorithms to predicting stroke-specific HRQOL recovery 13. In that study, the authors incorporated six demographic factors into machine learning models and built a preliminary system to forecast HRQOL changes in chronic stroke patients. Small prediction errors (i.e., root mean square errors) were found between the data derived from the prediction model and the actual data collected from the patients, suggesting that machine learning might be feasible for predicting HRQOL changes in chronic stroke patients 13. Despite this positive evidence, that study included only demographic attributes in the machine learning prediction model 13; nevertheless, HRQOL has been shown to be affected by factors across multiple domains, including demographic as well as health-related domains such as the physical and functional domains 4,5,14. Including only demographic attributes in the machine learning model may not be sufficient for optimizing prediction accuracy. In addition, the previous study examined only the prediction errors (e.g., the mean squared error) of the machine learning model 13. Important clinical performance metrics, such as prediction accuracy and the ability of machine learning models to distinguish between responders and non-responders to rehabilitation interventions, remain largely unexplored 15. A comprehensive examination of machine learning prediction performance, together with factors across health domains, is required to determine the efficacy of machine learning in predicting HRQOL recovery of stroke patients after rehabilitation interventions.

Stroke sensorimotor rehabilitation interventions including robot-assisted therapy (RT), mirror therapy (MT), and transcranial direct current stimulation (tDCS) have become popular approaches for improving stroke recovery in the past decade. These three approaches use modern equipment/modalities (e.g., robotic arms, mirror boxes, and electrical stimulators) to modulate peripheral and/or central sensorimotor systems (e.g., visuomotor and sensorimotor systems and cortical areas) to augment stroke recovery [16-18]. Several studies have demonstrated that these three sensorimotor interventions not only facilitated functional recovery but also improved participation and HRQOL in stroke patients [19-25]. The rationale for why these interventions could improve HRQOL is that they reduce arm/hand impairment and restore arm/hand function, allowing stroke patients to participate in daily activities and accomplish essential daily tasks [19-25]. Most daily tasks, such as bathing, dressing, dining, and grocery shopping, involve using the arm/hand to manipulate objects. Good arm/hand function leads to successful participation in daily tasks and may subsequently increase stroke patients' subjective feelings of well-being and satisfaction with daily life [19-25]. Thus, these three interventions have the potential to be incorporated into current clinical practice to facilitate not only functional recovery but also HRQOL in stroke patients.
Machine learning may be a potentially useful tool for predicting HRQOL changes after these three interventions, which may help identify responders and facilitate clinical application 6,7. Therefore, the purpose of this study was to determine the performance of machine learning algorithms in predicting clinically significant HRQOL improvements in chronic stroke patients after stroke sensorimotor rehabilitation interventions, including RT, MT, and tDCS. We examined the performance of five commonly used machine learning algorithms and identified important predictors for building machine learning prediction models.

Methods

Study design. This study was an observational cohort study that used secondary analysis of data from our previous randomized controlled or cluster-controlled trials and ongoing projects 24,26-28. Data screening was done by three investigators (Liao WW, Wu CY and Hsieh YW), who determined the eligibility and completeness of the data. Patients who completed the interventions and the outcome measurements at pre- and post-intervention were included in the analysis.

Participants. One hundred and thirty-two chronic stroke patients (N = 132) were included. Participants were recruited from three hospitals in northern Taiwan. Table 1 outlines the characteristics of the participants. The inclusion criteria were (1) a first-ever unilateral ischemic or hemorrhagic stroke, (2) more than 6 months post stroke, (3) Fugl-Meyer Assessment of the upper extremity (FMA) scores between 18 and 60, indicating mild to moderate arm hemiparesis 29, (4) no excessive spasticity in upper limb joints (Modified Ashworth Scale, MAS ≤ 3) 30, (5) ability to follow study instructions (Mini-Mental State Examination ≥ 22), and (6) no concomitant neurological disorders (e.g., brain tumor and dementia). The exclusion criteria were (1) participation in any drug or rehabilitation projects/experiments in the past 6 months, (2) Botulinum toxin injections in the past 3 months, (3) severe vision or visual perception impairments (e.g., neglect and poor visual field) as assessed by the National Institutes of Health Stroke Scale, and (4) any contraindications to non-invasive brain stimulation (for participants receiving tDCS) 31. The institutional review boards of the participating hospitals, including Linkou Chang Gung Memorial Hospital and Taipei Tzu Chi Hospital, approved the trials. All participants provided written informed consent before enrollment in the clinical trials. All study procedures were conducted in accordance with the Declaration of Helsinki.

Stroke sensorimotor rehabilitation interventions. All participants received interventions for 1.5 to 2 h per session, for a total of approximately 30 h of training across 3 to 4 weeks. The frequency and duration of training were similar to those of most rehabilitation intervention studies [19-25]. Participants received interventions at the hospitals from which they were recruited. The interventions were administered by certified occupational therapists who were properly trained by the senior therapists and the principal investigator (Wu CY). Among the participants, 70 received RT, 32 received MT, and 30 received tDCS/MT. For the RT, participants practiced unilateral paretic movements using the InMotion robotic systems (the InMotion ARM and InMotion WRIST) 28.
For the MT, participants imagined that the mirror reflection of the nonparetic arm was the paretic arm and performed bilateral movements as simultaneously as possible 26,27. For the tDCS/MT, participants received 2 mA anodal tDCS over the ipsilesional primary motor cortex for 20 min, followed by another 20 min of MT 24. For all trainings (i.e., RT, MT and tDCS/MT), participants performed an additional 15-30 min of functional task training in each session. Participants were assessed within 1 week before and after the interventions by evaluators who were blinded to the study purpose and the treatment allocation of participants.

(Table 1 legend fragment: ...Test-Paretic, paretic hand function (total score = 150; 150 cubes); RNSA, Revised Nottingham Sensation Assessment, assessing tactile, proprioception, and stereognosis sensation on the paretic side of the body (item score range: 0-2; unable to test = 9); FIM, Functional Independence Measure, assessing participants' level of disability (item score range: 1-7; total score: 18-126); SIS, Stroke Impact Scale. Values are mean ± standard deviation.)

Classification of HRQOL improvement. The Stroke Impact Scale (SIS) 3.0 was selected as the major outcome for classifying HRQOL improvements 32. It is a self-reported questionnaire for evaluating HRQOL in stroke patients, and its reliability and validity have been well established 33,34. The SIS consists of two parts. The first part is the main scale of the SIS, which assesses multidimensional HRQOL covering strength, hand function, ADL/instrumental ADL, mobility, communication, emotion, memory/thinking, and social participation 32. It has 59 items, and each item is scored subjectively by stroke patients based on the difficulty they perceived for that item during the past two weeks. The scores of each domain were transformed to a score out of 100, and the mean scores of all domains were used to represent the multidimensional HRQOL of stroke patients 35,36. The second part is a global rating scale that evaluates stroke patients' self-perceived HRQOL recovery, with scores ranging from 0 (no recovery) to 100 (full recovery). Both parts were included in this study to comprehensively represent the multi-dimensional HRQOL and global self-perceived HRQOL changes of chronic stroke patients.

To facilitate clinical use of our ML prediction models, the minimal clinically important difference (MCID) was selected as the criterion to classify participants into high and low responders. The MCID is the smallest change in scores considered clinically important and meaningful to the health status perceived by the patient 37. Previous studies have set the MCID at 10-15% of total scores in patient populations and obtained clinically useful results [38-40]. As a result, based on the literature, we defined the MCID as a 10% change in the scores of the main SIS scale and the global rating scale. Participants with SIS change scores greater than or equal to 10 were classified as high responders, and participants with SIS change scores less than 10 were classified as low responders to stroke sensorimotor rehabilitation interventions.

Candidate predictors. We selected thirty-two potential predictors based on the stroke HRQOL literature and the International Classification of Functioning, Disability and Health (ICF) framework, covering "Body function and structures", "Activity" and "Participation" attributes [41-43]. These predictors included personal characteristics and baseline cognitive, motor, sensory, functional, and HRQOL attributes 32.
These attributes are commonly used in research and clinical settings to represent the motor/sensory impairment, functional ability, and participation of stroke patients, and they were therefore selected as potential predictors 41-43.

Machine learning algorithms. Five ML algorithms, namely the random forest (RF), k-nearest neighbors (KNN), artificial neural network (ANN), support vector machine (SVM), and logistic regression (LG), were used for developing prediction models. The RF uses an ensemble learning method for outcome prediction: it combines the results of multiple decision trees and generates a final overall result to augment prediction accuracy 53. It is a flexible method that can be used with categorical and continuous data. In addition, the RF has a low probability of overfitting and is therefore suitable for use in the clinical setting 53. The KNN is a distance-based method. It assumes that similar objects exist in close proximity and therefore labels the class of a target based on the majority class of its surrounding k neighbors 54. The KNN classification pattern resembles the clinical decision-making process of clinicians/therapists, in which similar treatments are prescribed to patients with similar responses and characteristics 55. The ANN is inspired by the neurological network of the brain 56. It consists of neurons/nodes organized in layers, including input, hidden, and output layers. The input layer receives the data and transfers it to the hidden layer, where the computations (e.g., the activation function) primarily take place. After the computations are done, the hidden layer generates the final output to the output layer, completing the prediction model. The ANN can process complex health informatics data and is therefore a potentially useful tool for outcome prediction in stroke patients 57. The feedforward backpropagation method was used in this study. The SVM uses a binary classification technique in which the data are first projected onto a high-dimensional plane using a kernel function, and the maximum-margin hyperplane that best separates the data into two classes is then found 58. The SVM is efficient in high-dimensional spaces and suitable for modeling complicated medical data. The LG is a binary classification technique that uses the logistic sigmoid function to predict the probability that an observation belongs to one of two possible classes 59. It is a commonly used algorithm for building prediction models in stroke patients. These 5 ML algorithms were selected because they are widely used modeling techniques for outcome prediction in patients and have been shown to have good prediction performance 6.

Feature selection procedure. The feature selection procedure was performed to remove redundant attributes and identify those essential for prediction accuracy 7. A popular feature selection method called the "information gain ratio" was implemented 11. This method evaluates the influence (i.e., the information gain) of each attribute on the output classes (i.e., the SIS classes) using the ranker search method [60-62]. A higher gain ratio indicates a greater contribution of the attribute to prediction accuracy 62,63. In this study, attributes with a gain ratio greater than zero were used for developing the ML prediction models. Figure 1 shows the model development and testing process.
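To make the feature selection step concrete, the following is a minimal sketch of the gain-ratio computation for a single discretized attribute. It is illustrative only: the study ran this step in Weka with the ranker search (see below), whereas this sketch re-implements the underlying formula in Python, and the toy feature/label arrays are hypothetical stand-ins for a discretized baseline attribute and the SIS responder classes.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (bits) of a discrete array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def gain_ratio(feature, labels):
    """Information gain of `feature` with respect to `labels`,
    normalized by the split information (the feature's own entropy)."""
    values, counts = np.unique(feature, return_counts=True)
    weights = counts / counts.sum()
    # Expected entropy of the labels after splitting on the feature.
    cond_entropy = sum(w * entropy(labels[feature == v])
                       for v, w in zip(values, weights))
    info_gain = entropy(labels) - cond_entropy
    split_info = entropy(feature)
    return info_gain / split_info if split_info > 0 else 0.0

# Hypothetical example: one discretized attribute vs. responder class
# (1 = high responder, 0 = low responder).
feature = np.array(["low", "low", "low", "high", "high", "high"])
labels = np.array([0, 0, 0, 1, 1, 0])
print(f"gain ratio = {gain_ratio(feature, labels):.3f}")  # prints 0.459
```

In line with the procedure described above, only attributes whose gain ratio exceeds zero would be retained for model development.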
Data were randomized and divided into a training data set (70%) and a test data set (30%) 64. The training data set was used for training and developing the models; the test data set was used for the final evaluation of model performance. A tenfold cross-validation procedure was performed to train the models 65. During the tenfold cross-validation process, the training data set was split into 10 groups, of which 9 were used for training the model while the remaining one was used for validating it. This process was repeated until all groups of data had been trained and validated. After a model was built, the test data set was entered into the model to assess its performance. The hyper-parameters of the prediction models were determined according to procedures used in the ML literature. For the RF model, the number of trees to build was 100, and the number of features to consider at a node was the first integer less than log2(M) + 1, where M is the number of inputs 66. For the KNN and ANN models, tenfold cross-validation was employed to tune the hyper-parameters (i.e., the k value of the KNN and the number of hidden neurons in the hidden layer of the ANN) 10. We found that k = 5 and 3 hidden neurons in one hidden layer (for the main SIS scale) and k = 9 and 2 hidden neurons in one hidden layer (for the SIS global rating scale) gave the best prediction accuracy, so these hyper-parameters were used in the KNN and ANN models. For the SVM model, the polynomial kernel function was employed because it provided the best prediction accuracy 58,59.

Model performance metrics. The performance of the ML models was evaluated using standard ML performance metrics: (1) accuracy, (2) recall, (3) precision, (4) F1 score, and (5) area under the receiver operating characteristic curve (AUC-ROC) 15. Accuracy is an overall index of prediction performance, computed as the sum of true positives (TP) and true negatives (TN) divided by the sum of TP, TN, false positives (FP), and false negatives (FN). Recall is the ratio of participants correctly identified as positive by the model to those who were actually positive, computed as TP divided by the sum of TP and FN. Precision is the ratio of participants correctly identified as positive by the model to those labelled as positive by the model, calculated as TP divided by the sum of TP and FP. The F1 score is a combined index of precision and recall, calculated as their harmonic mean. The AUC-ROC is the ratio of the area under the ROC curve to the total area and represents the ability of the model to distinguish between classes.

Statistical analysis. The continuous variables were standardized and the categorical variables were coded before developing the ML models. The Waikato Environment for Knowledge Analysis (Weka) 3.8.3, developed by the University of Waikato, New Zealand, was employed for model development, training, and testing 67. Weka has been extensively used for constructing ML prediction models in various fields and in different patient populations in the medical field 11,68,69.

Figure 1. The flow chart of the model development and validation process. Subject data were randomized into a training set (70% of the data) and a test set (30% of the data).
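As a rough illustration of the workflow just described (70/30 split, tenfold cross-validation on the training set, and held-out evaluation with the five metrics), the sketch below uses Python's scikit-learn as a stand-in for the authors' Weka 3.8.3 pipeline. The feature matrix `X` and responder labels `y` are simulated placeholders, and only the RF and KNN models are shown; the tree count (100), the log2(M) + 1 feature rule, and k = 5 follow the hyper-parameters reported in the text.

```python
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

rng = np.random.default_rng(0)
X = rng.normal(size=(132, 5))      # stand-in: 132 patients, 5 selected attributes
y = rng.integers(0, 2, size=132)   # stand-in: 1 = SIS change >= 10 (high responder)

# 70% training / 30% held-out test, as in the study design.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "RF": RandomForestClassifier(
        n_estimators=100,                            # 100 trees, as reported
        max_features=int(np.log2(X.shape[1])) + 1,   # Weka's log2(M)+1 rule
        random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=5),      # k = 5 for the main SIS scale
}

for name, model in models.items():
    # Tenfold cross-validation on the training set only.
    cv_acc = cross_val_score(model, X_tr, y_tr, cv=10, scoring="accuracy")
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    proba = model.predict_proba(X_te)[:, 1]
    print(f"{name}: CV acc {cv_acc.mean():.2f} | "
          f"test acc {accuracy_score(y_te, pred):.2f}, "
          f"precision {precision_score(y_te, pred):.2f}, "
          f"recall {recall_score(y_te, pred):.2f}, "
          f"F1 {f1_score(y_te, pred):.2f}, "
          f"AUC-ROC {roc_auc_score(y_te, proba):.2f}")
```

Note that scikit-learn's defaults differ from Weka's in places (e.g., the default max_features for random forests), which is why the hyper-parameters named in the text are set explicitly.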
For the training data set, the tenfold cross-validation procedure was used to train and build the 5 machine learning models (i.e., the RF, KNN, ANN, SVM and LG), in which the data were randomly split into 10 groups (9 for training and 1 for validation). The tenfold cross-validation process was repeated until all 10 groups of data had been trained and validated, and it was performed for all 5 machine learning models. After the 5 models were built, the test data set was entered into the 5 models to determine model performance.

Results

Five most important attributes were identified by the feature selection procedure for the SIS HRQOL main scale: the baseline SIS mean scores (gain ratio = 0.1), baseline MRC finger metacarpophalangeal (MP) extensor (gain ratio = 0.15) and flexor (gain ratio = 0.06) scores, baseline MAS wrist flexor scores (gain ratio = 0.1), and baseline WMFT time (gain ratio = 0.14). The gain ratio of the other 31 attributes was 0. As a result, these 5 attributes were used for developing the SIS multidimensional HRQOL prediction model. Four most important attributes were identified by the feature selection procedure for the SIS global rating scale: age (gain ratio = 0.16), gender (gain ratio = 0.06), and baseline RNSA stereognosis (gain ratio = 0.1) and proprioception (gain ratio = 0.07) scores. The gain ratio of the other 32 attributes was 0. Therefore, these 4 attributes were used for developing the stroke global self-perceived HRQOL recovery prediction model.

Table 2 summarizes the performance metrics of the five ML models. (Table 2. Performance metrics of SIS prediction models. SIS, Stroke Impact Scale; RF, random forest; KNN, k-nearest neighbors; ANN, artificial neural network; SVM, support vector machine; LG, logistic regression; AUC-ROC, area under the receiver operating characteristic curve.) For the SIS multidimensional HRQOL scale, prediction performance was best for the RF model: accuracy was 85%, precision 0.88, recall 0.85, F1 score 0.85, and AUC-ROC 0.86. Prediction performance was similar among the other 4 models (KNN, ANN, SVM and LG): accuracy ranged from 72 to 75%, precision from 0.73 to 0.77, recall from 0.73 to 0.75, F1 scores from 0.72 to 0.75, and AUC-ROC from 0.71 to 0.87. For the SIS global rating scale, prediction performance was best, and similar, for the RF and KNN models. The accuracy of the RF model was 80%, precision 0.78, recall 0.8, F1 score 0.78, and AUC-ROC 0.75. The accuracy of the KNN model was 82.5%, precision 0.82, recall 0.83, F1 score 0.81, and AUC-ROC 0.76. Prediction performance was similar among the other three models (ANN, SVM and LG): accuracy was 77.5% for all three, precision ranged from 0.77 to 0.78, recall was 0.78 for all three, F1 scores ranged from 0.77 to 0.78, and AUC-ROC from 0.68 to 0.75.

Discussion

Our results demonstrated that machine learning could accurately predict HRQOL improvements after stroke sensorimotor rehabilitation interventions in chronic stroke patients. In particular, the RF and KNN models performed better than the other three algorithms (ANN, SVM and LG). The RF model had 85% accuracy in predicting multidimensional HRQOL changes and 80% accuracy in forecasting global self-perceived recovery. It could also accurately distinguish between high and low responders on the multidimensional HRQOL outcome with 86% probability and on the global self-perceived recovery with 75% probability.
The KNN model had good prediction performance only for global self-perceived recovery, where it achieved 82.5% prediction accuracy and could distinguish between high and low responders with 76% probability. Furthermore, we identified important attributes for predicting multidimensional HRQOL improvements, namely baseline HRQOL, baseline paretic finger muscle strength, wrist muscle tone, and arm movement efficiency, as well as key attributes for forecasting global self-perceived recovery, including age, gender, baseline hand stereognosis, and limb proprioception.

To our knowledge, this study is the first to comprehensively evaluate the performance of ML algorithms in predicting HRQOL improvements after stroke sensorimotor rehabilitation interventions in stroke patients. In addition, we identified the two more effective algorithms, the RF and KNN, among 5 commonly used ML algorithms. The RF algorithm had good prediction performance for both multidimensional HRQOL and global self-perceived recovery. This may be attributable to the unique ensemble method it employs in the modeling process. During ensemble modeling, the RF creates as many base models (i.e., decision trees) as possible and combines them into a final model 66. Therefore, instead of creating one model and hoping that it will be the best, the RF takes a myriad of models into account to optimize its prediction accuracy. Furthermore, these base models are designed to be uncorrelated with each other, reducing the probability of overfitting 66. Our finding of the superior performance of RF is in line with several previous studies demonstrating that ML algorithms with ensemble methods, such as the RF and AdaBoost algorithms, outperformed other types of ML algorithms, for example decision trees or LG 63,70-73. Future studies could examine the performance of ML algorithms with different ensemble methods, such as boosting, bagging, and stacking, on HRQOL outcome prediction to determine the best one for use in stroke patients.

To our surprise, the KNN algorithm had similar, and even slightly better, prediction performance than the RF algorithm in predicting global self-perceived HRQOL recovery in chronic stroke patients. Compared with the RF, the KNN is a simpler and more straightforward model because it is distance-based and does not involve ensemble learning processes 54,55. Our study revealed that, despite its simplicity, the KNN may be as powerful as the RF model when predicting a single-item outcome such as the SIS global self-perceived HRQOL recovery scale in stroke patients. In contrast, the KNN may not be the best algorithm for processing multidimensional health outcome data, given its weaker prediction performance on the multidimensional than on the single-item SIS HRQOL outcome. As a result, we recommend using the KNN model for predicting simple HRQOL outcome changes in chronic stroke patients. Future studies could compare and contrast the performance of KNN on single-domain and multidimensional health outcome data to validate the findings of this study.
Our study showed that 5 attributes related to initial HRQOL, muscle function (muscle strength and muscle tone), and movement efficiency were important predictors for forecasting multidimensional HRQOL improvements after sensorimotor rehabilitation interventions. Indeed, studies have found that baseline HRQOL is associated with HRQOL restoration in stroke patients 74,75; it is thus not surprising that baseline SIS scores were important for predicting HRQOL improvements after the interventions. In addition, studies have shown that paretic arm/hand muscle strength and muscle tone significantly affect functional recovery after stroke 43,76,77, and the movement efficiency of the paretic arm has also been associated with HRQOL recovery post stroke 5. Similarly, in the present study, we found that muscle function (i.e., baseline MRC finger MP extensors/flexors and MAS wrist flexors) and arm movement efficiency (i.e., baseline WMFT time scores) were associated with the prediction of HRQOL improvements. However, compared with previous studies, we further identified the most important components: the muscle function of the wrist/hand and the movement efficiency of the whole arm. These components are heavily involved in daily routines. For example, stroke patients must be able to open/close their hands appropriately, grasp/release objects, and move their arms efficiently in time to perform most essential daily tasks, such as dressing, bathing, and cooking. The inability to accomplish essential daily tasks may become a source of distress and consequently affect participation and life satisfaction after stroke 78. Therefore, even though muscle function and movement efficiency are commonly regarded as belonging to the basic level (i.e., body functions and structures) of the ICF model, they can substantially affect multi-dimensional HRQOL recovery after stroke sensorimotor interventions and should therefore be considered when assigning these interventions to stroke patients.

In addition, our study found another four attributes imperative for forecasting global self-perceived HRQOL recovery: age, gender, and paretic hand stereognosis and limb proprioception. Age and gender have been shown to be associated with HRQOL recovery post stroke in previous studies 5,79. Our results support these previous findings and further suggest that somatosensory impairments, especially the sensory components of hand stereognosis and limb proprioception, are also important for predicting global self-perceived HRQOL recovery. Indeed, reduced sensation has been found to be related to slower recovery, decreased motor function (e.g., motor control and activity), and poorer rehabilitation outcomes 43,80,81. Sensory deficits such as impaired hand stereognosis and limb proprioception may cause difficulties in sensing arm/hand position and recognizing objects in the hand, making arm/hand movements hard to control. This may decrease confidence in using the arm/hand in daily activities and raise safety fears, thus affecting stroke patients' participation and satisfaction in daily life 80-82. Our results are also in line with one previous study that found paretic limb proprioception to be associated with reduced HRQOL and an increased feeling of social isolation in stroke patients 14. Nonetheless, most studies have not assessed the contribution of sensory impairments to HRQOL outcome prediction after rehabilitation interventions in stroke patients 83.
Our study suggested that sensory impairment may contribute to global self-perceived HRQOL recovery after stroke interventions and could be considered in the data collection and HRQOL model building process. Taken together, we found that attributes in three major categories, namely personal characteristics (age and gender), initial life satisfaction (baseline SIS mean scores), and arm/hand sensorimotor components (muscle function, movement efficiency, hand stereognosis and limb proprioception), were important predictors of HRQOL restoration after stroke sensorimotor rehabilitation. Based on our findings, healthcare professionals could at a minimum assess attributes in these three categories before assigning the three sensorimotor interventions to stroke patients. These attributes may also serve as indicators of which stroke patients are suitable candidates for stroke sensorimotor rehabilitation interventions aimed at improving HRQOL. This may help improve clinical rehabilitation efficacy and potentially reduce workload in hospitals/clinics.

Study limitations. Six limitations should be considered. First, our HRQOL outcome prediction focused on stroke sensorimotor rehabilitation interventions that share similar rehabilitation principles. Future studies could examine whether the identified predictors, such as the sensorimotor components, generalize to HRQOL outcome prediction for other types of rehabilitation interventions. Second, our predictions were based on changes immediately after the interventions. Future studies could investigate the prediction performance of ML algorithms over the follow-up period, which would help identify patients who retain improvements after sensorimotor interventions. Third, we examined 5 commonly used ML algorithms. Future studies could evaluate the performance of other ML or deep learning algorithms, such as those with ensemble methods (e.g., AdaBoost) 84 or deep learning algorithms (e.g., deep neural networks) 85, and compare the results with our findings to determine the optimal method for predicting HRQOL restoration after rehabilitation interventions in stroke patients. Fourth, this study included thirty-two potential predictors based on the results of previous prediction model studies. Nevertheless, some stroke-related factors, such as lesion size/severity, were not included because those data could not be retrieved for some of our patients. Future studies could examine whether these stroke-related factors are also important predictors of HRQOL recovery in chronic stroke patients, in addition to the predictors identified in this study. Fifth, we used the SIS to assess multi-dimensional HRQOL and self-perceived global HRQOL changes in stroke patients. Although the SIS is a widely used HRQOL assessment in the stroke rehabilitation field, it is an ordinal questionnaire that may still carry measurement error. In addition, the SIS may not cover all HRQOL-related factors, such as environmental factors (e.g., context and time), personal factors (e.g., personality), and social indicators (e.g., economic status). We encourage future researchers to include the above factors in machine learning prediction models and to examine whether their inclusion optimizes prediction accuracy for HRQOL. Sixth, the machine learning models built in this study were preliminary models focused on predicting HRQOL recovery.
Therefore, these models should not yet be used to exclude patients from receiving the three stroke sensorimotor interventions. Future studies could include more participants and develop machine learning models that comprehensively predict motor, functional, and HRQOL recovery in stroke patients.

Conclusion

Machine learning may predict clinically significant HRQOL improvements after stroke sensorimotor rehabilitation interventions in chronic stroke patients. In particular, the RF and KNN algorithms may be more effective than the other 3 ML algorithms (ANN, SVM and LG) and could be considered for use in clinical settings. We suggest including at least three categories of predictors, namely age/gender, initial HRQOL, and sensorimotor components (sensory function, muscle function and movement time), in ML HRQOL prediction models to optimize prediction accuracy. Future studies with a different sample of stroke patients are warranted to validate our findings and improve model generalizability.

Data availability

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Anatomy and Histochemistry of the Roots and Shoots in the Aquatic Selenium Hyperaccumulator Cardamine hupingshanensis (Brassicaceae)

Abstract

The perennial selenium (Se) hyperaccumulator Cardamine hupingshanensis (Brassicaceae) thrives in aquatic and subaquatic Se-rich environments along the Wuling Mountains, China. Using bright-field and epifluorescence microscopy, the present study determined the anatomical structures and histochemical features that allow this species to survive in Se-rich aquatic environments. The roots of C. hupingshanensis have an endodermis with Casparian walls, suberin lamellae, and lignified secondary cell walls; the cortex and hypodermal walls have phi (Φ) thickenings; and the mature taproots have a secondary structure with a periderm. The stems possess a lignified sclerenchymal ring and an endodermis, and the pith and cortex walls have polysaccharide-rich collenchyma. Air spaces are present in the intercellular spaces, and aerenchyma occurs in the cortex and pith of the roots and shoots. The dense fine roots with lignified Φ thickenings and the polysaccharide-rich collenchyma in the shoots may allow C. hupingshanensis to hyperaccumulate Se. Overall, our study elucidated the anatomical features that permit C. hupingshanensis to thrive in Se-rich aquatic environments.

Materials

The C. hupingshanensis samples were collected from the Hupingshan National Nature Reserve in Hunan Province and from the Yutangba and Liziping regions in Hubei Province, along the Wuling Mountains, China. One hundred and twenty collected samples of C. hupingshanensis were preserved in the germplasm resource center of the Hubei Selenium Industry Technology Research Institute, China. Five mature specimens exhibiting normal growth from each collection site (Hupingshan, Yutangba, and Liziping) were observed under the microscope. The freshly sampled roots and shoots were fixed in formaldehyde-alcohol-acetic acid (FAA) [43] following collection. After fixation, freehand sections were cut using a two-sided blade. Root sections were made at 5 mm, 15 mm, 30 mm, and 50 mm from the tip to the base, with the cortex sloughed off (Fig. 1); further sections were made through the middle of the stem and the basal internode, and through the middle of the petioles and leaves. The sections were about 10 to 25 μm thick.

Little information exists on the anatomical and histochemical features of confirmed Se accumulators across various families, including Amaranthaceae, Asteraceae, Brassicaceae, Fabaceae, Rubiaceae, and Orobanchaceae [1,10]. While various biochemical and physiological analyses have confirmed that C. hupingshanensis hyperaccumulates Se and Cd [5,42], the associated structural and histochemical features of this species are yet to be elucidated. Accordingly, in this study we focused on determining the anatomical features of the roots and shoots of wild-type C. hupingshanensis that enable it to hyperaccumulate Se and survive its aquatic lifestyle.

Fine adventitious roots

The cortex and hypodermal walls have obvious Φ thickenings in the 30 mm section. Narrow intercellular spaces can be observed between the cortex and hypodermis (Fig. 2A-F). The stele has a few secondary xylems, and the endodermis has obvious, lignified Casparian bands and is heavily suberized at the root base (Fig. 2G, H, I). The inner and radial cortex and hypodermal walls possess both large and small lignified Φ thickenings (Fig. 2G, I); the cortex and hypodermis are partially sloughed off; and the periderm appears suberized (Fig. 2H, I).
Taproots The stele possesses diarch protoxylem poles; the endodermis has faint Casparian bands and almost complete suberin lamellae with a few passage cells; and the cortex and hypodermal walls have slightly lignified Φ thickenings at 5 mm from the root tip ( Fig. 3A, B). The endodermis has almost complete suberin lamellae in the 15 mm section (Fig. 3C). The stele has a metaxylem and few secondary xylems; the endodermis has Casparian bands and lignin, becoming heavily suberized (Fig. 3D, E, F); and the inner and radial cortex walls have obviously suberized and lignified Φ thickenings at 30 mm. Intercellular spaces and aerenchyma can be observed between the cortex and hypodermis ( Fig. 3A-F). The stele has a secondary xylem; the endodermis has lignified Casparian bands and is heavily suberized at 50 mm ( Fig. 4A, B, C). The inner and radial cortex and hypodermal walls possess both large and small lignified Φ thickenings; the cortex and hypodermis have been partially sloughed off; and the periderm is suberized and lignified ( Fig. 4A, B, C). The cortex and hypodermis have been sloughed off in the mature taproots; the stele has a secondary xylem and spacious parenchyma; and the periderm has suberized and lignified Casparian bands Here we demonstrated that the fine adventitious roots and primary structure of the taproots in the aquatic Se hyperaccumulator C. hupingshanensis exhibit similar anatomical and histochemical features. The roots have an endodermis and hypodermis with large cells. The cortex and hypodermal walls have lignified Φ thickenings that are greater near the endodermis. The taproot cortex has more cell layers than the fine adventitious roots, and the mature taproots have a secondary structure containing a periderm, as commonly observed in eudicots. The young roots of C. hupingshanensis are similar in structure to another Se accumulator, O. javanica [6,32]. sections were stained with Sudan red 7B (SR7B) for the suberin lamellae [31,44], phloroglucinol-HCl (Pg) for lignin [43], berberine hemisulfate-aniline blue (BAB) for the Casparian bands and thickened cell walls [31,[45][46], and toluidine blue O (TBO) for the other structures including polysaccharides [31,37]. The specimens were examined using bright-field microscopy on a Leica DME microscope and photographed with a digital camera (Nikon E5400, Japan). Specimens stained with BAB were viewed under ultraviolet light on an Olympus IX71 epifluorescence microscope and photographed with a digital camera (RZ200C-21, China). General morphology Morphologically, C. hupingshanensis is characterized by erect stems (Fig. 1A) and taproots with a mass of fine adventitious roots (Fig. 1B). The fine adventitious roots contain one or two cell cortex layers (Fig. 2). The thick taproots possess three cell cortex layers in the primary structure (Fig. 3), with the cortex sloughed off in the secondary structure ( Fig. 1C; Fig. 4). The fine adventitious roots and young taproots possess a diarch stele with a differentiating proto-and metaxylem; a cortex with an endodermis; and an enlarged outer ring with a distinct hypodermis. The cortex and hypodermal walls have lignified Φ thickenings. Mature taproots possess a typical secondary structure with a periderm. The stems possess a lignified sclerenchymal ring enclosed within a central cylinder with scattered vascular bundles internal to the cortex, which possesses an endodermis. The pith and cortex walls contain polysaccharide-rich collenchyma. 
Aerenchyma and intercellular spaces are present in the cortex and pith of the roots and shoots. The Φ thickenings of the roots of C. hupingshanensis are similar to those of some other brassicaceous species, The hypodermis of O. javanica has more cell layers and a cortex with spacious aerenchyma, though the cortex lacks lignified walls and Φ thickenings. Around the roots of O. javanica there are aerenchyma, and the walls possess suberin lamellae. Stems and leaves The stems possess a thickened lignified sclerenchymal ring enclosed within a central cylinder with scattered vascular bundles internal to the cortex. The sclerenchymal ring generally has vascular bundles inside it, and a spacious pith is present in the center of the sclerenchymal ring (Fig. 5A, B, C). The cortex has an endodermis with Casparian bands (Fig. 5D, E) and suberin (Fig. 5F) and lignin (Fig. 5C). The outer surface has a cuticle that reaches the inside of the epidermis (Fig. 5D, F). The pith and cortex have aerenchyma, and the walls have unlignified collenchyma that contain polysaccharides (Fig. 5A, B, C, D, F; see also [34]). Beneath the epidermis in the mature stems there is a peripheral mechanical ring (Fig. 5F). The petioles possess one large and four small vascular bundles with lignified sclerenchymal rings and a spacious cortex with collenchyma and aerenchyma (Fig. 6A, B, C). including B. oleracea and B. napus, and act as a barrier to ion transport [34][35]. Pelargonium hortorum has larger Φ thickenings at the hypodermis, which is opposite to what is observed in C. hupingshanensis [37]. Myrica rubra, P. malus, and G. biloba possess Φ thickenings near the endodermal radial walls but lack lignified walls [33,36,38]. Organelle-rich cytoplasm is present in the roots of the nickel (Ni) hyperaccumulator Senecio coronatus [11][12][13], and a Cd hyperaccumulating Arabidopsis thaliana genotype was found to possess dense root hairs [14]. We believe that lignified Φ thickenings in the roots might trap Se ions and contribute to the Se hyperaccumulation of C. hupingshanensis. The dense fine roots and lignified Φ thickenings may allow C. hupingshanensis to hyperaccumulate Se in a manner that differs from the Ni hyperaccumulator S. coronatus and the Cd hyperaccumulating A. thaliana genotype [11][12]14]. The role of air spaces in plant organs is to retain oxygen under hypoxic and anoxic conditions in order to enhance survival [18-19, 21-23, 26]. The fine roots and primary root structures of the taproots of C. hupingshanensis possess fewer intercellular air spaces and aerenchyma than O. javanica, P. distichum, P. arundinacea, A. A cuticle is present on the surface (Fig. 6A). Cross-sections of the leaf blade reveals vascular bundles, palisade tissue, a cuticle, and vascular bundles that are slightly lignified (Fig. 6D, E, F). The petioles of H. sibthorpioides and R. trichophyllus have an endodermis, but lack the lignified sclerenchymal ring around the vascular bundles and the cortex with collenchyma that are present in C. hupingshanensis [40][41]. The leaf blade structure of C. hupingshanensis is common to eudicots. We speculate that the polysaccharide-rich collenchyma walls of the pith and cortex in the shoots might enhance the tolerance of C. hupingshanensis to Se stress. The cortex and pith of the stems of C. hupingshanensis possess a few aerenchyma lacunae, which is similar to that observed in H. sibthorpioides following submersion [41]. In contrast, H. altissima, P. distichum, A. lavandulaefolia, and A. 
selengensis have spacious pith cavities that might facilitate survival in heavily submerged conditions [28-30, 40, 47]. The stems of C. hupingshanensis are similar to those of the typical wetland-and aquatic-adapted plants H. sibthorpioides and R. trichophyllus, which possess an endodermis [40][41]. Conclusion In summary, C. hupingshanensis possesses apoplastic barriers consisting of an endodermis, lignified Φ thickenings, and a cuticle, which is consistent with what has been found in studies on the effects of water stress on oxygen loss and solute transport in plants [18-19, 22-25, 32, 41, 45]. The lignified Φ thickenings in the roots and polysaccharide-rich collenchyma in the shoots might have evolved as key structural and histochemical features of
2019-07-31T13:10:10.925Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "524ffc4ff3660759f309476db3e067ff481e5f7e", "oa_license": "CCBY", "oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7874794", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "524ffc4ff3660759f309476db3e067ff481e5f7e", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
249523651
pes2o/s2orc
v3-fos-license
Metal complexes of thiosemicarbazones derived from 2-quinolones with Cu(I), Cu(II) and Ni(II); Identification by NMR, IR, ESI mass spectra and in silico approach as potential tools against SARS-CoV-2 Substituted thiosemicarbazones derived from 2-quinolone were synthesized to investigate their complexation capability towards Cu(I), Cu(II) and Ni(II) salts. The structure of the complexes was established by ESI, IR and NMR spectra in addition to elemental analyses. The quinoloyl-substituted ligands were observed to bind Cu(I) in a monodentate fashion, whereas with Ni(II) and Cu(II) the 2-quinolone-derived thiosemicarbazones formed bidentate complexes. Subsequently, molecular docking was used to evaluate each analog's binding affinity, as well as its inhibition constant (Ki), towards the RdRp complex of SARS-CoV-2. The docking results supported the potential of the tested complexes to inhibit the RdRp of SARS-CoV-2, showing binding energies higher than those of their corresponding ligands. Additionally, ADMET prediction revealed that some compounds satisfy Lipinski's rule, indicating good oral absorption, high bioavailability, good permeability, and transport via biological membranes. Therefore, these metal-based complexes are suggested to be potentially good candidates as anti-COVID agents. Thiosemicarbazones (TSCs) form metal complexes with various transition metals, including Cu, Pd, and Ni. The impact of TSCs on topoisomerase IIα (Top2) has been noted several times, and the exact mechanism was initially unclear, though it appeared to involve increases in DNA cleavage [16]. The ability to impact Top2 function was also used to design Cu complexes bearing donor sites such as sulfur [25]. It is also well known that thiosemicarbazones coordinate as bidentate ligands through the azomethine nitrogen and the thione/thiolate sulfur [25]. The COVID-19 pandemic poses an unprecedented challenge for the rapid discovery of drugs against this life-threatening disease. Owing to the peculiar features of the metal centers that are currently used in medicinal chemistry, metallodrugs might offer an excellent opportunity to achieve this goal. Copper in different formats has been used in research and clinical settings to reduce the risk of bacterial and viral contamination [26]. In the last three decades, several efforts have been made to develop suitable antivirals using the thiosemicarbazide scaffold. Its hybridization with other pharmacophores has been used as a strategy to enhance safety and efficacy [27]. Recently, it has been reported that complexes of cobalt(III) with 2-acetylpyridine-N(4)-R-thiosemicarbazones showed activity against Mycobacterium tuberculosis, for which their minimal inhibitory and minimal bactericidal concentrations (MIC and MBC) were determined. Some of these cobalt-thiosemicarbazone complexes also showed in vitro antiviral potential and cell viability against chikungunya virus (CHIKV) infection. They revealed promising MIC and MBC values, which ranged from 0.39 to 0.78 μg/mL in the two tested strains, and presented high potential against CHIKV by reducing viral replication by up to 80%. In addition, molecular docking analysis was performed, and the relative binding energy of the docked compound with five bacterial strains was found to lie between E = 3.45 and E = 9.55 kcal/mol [28]. In the global pandemic caused by SARS-CoV-2, there is an emerging need for a treatment modality. Repurposing of antimalarial/antiviral drugs such as chloroquine, Galidesivir, Remdesivir, Tenofovir, Sofosbuvir, Ribavirin, etc.
was recommended for the treatment of COVID-19. These drugs would inhibit the RdRp complex of SARS-CoV-2; molecular docking studies revealed the binding energies between RdRp and these drugs to be -5.1, -7.0, -7.6, -6.9, -7.5 and -7.8 kcal/mol, respectively [29]. Compared with the available drugs, antiviral metal complexes have been reported to exhibit great potential as a better alternative, with binding energies in the range of -4.4 to -10.24 kcal/mol [29]. Recent molecular docking studies revealed that a ferroquine derivative of Fe(II) and Ni(II)- and Pt-based thiosemicarbazone complexes showed better binding affinities of -10.24, -8.95, -8.09, and -8.6 kcal/mol, respectively. These complexes were predicted to inhibit the RdRp of SARS-CoV-2 and thus probably work better than the current drugs in preventing SARS-CoV-2, warranting further in vitro/in vivo application of metal-based compounds against SARS-CoV-2. Suitable design of the ligands could result in robust transition metal complexes that are thermodynamically, kinetically and redox stable and suitable for in vivo applications against a wide range of diseases [30]. Building on our previous synthesis of the thiosemicarbazones 3a-f [31], their metal complexes with Cu(I), Ni(II) and Cu(II) were designed, synthesized and tested computationally as anti-SARS-CoV-2 candidates using molecular docking calculations as well as in silico ADMET prediction software.
Experimental section Melting points were recorded using a Gallenkamp melting point apparatus (Gallenkamp, UK) and open capillaries; melting points are uncorrected. 1H and 13C NMR spectra were measured using a Bruker Avance spectrometer (400 MHz for 1H and 100 MHz for 13C) at the Institute of Technology, Karlsruhe University, Karlsruhe, Germany. The 1H and 13C chemical shifts were recorded relative to TMS as internal standard. Mass spectrometry was performed using electron impact ionization.
Table 1. Some physical data and yields in g (%) of complexes 5a-c and 7a-h.
Preparation of ligands; hydrazinecarbothioamide derivatives 3a-f A mixture of thiosemicarbazide (2) (0.83 g, 9.1 mmol) and the appropriate aldehyde 1a-f (9.1 mmol) in 100 mL of a 1:1 mixture of ethanol (50 mL) and glacial acetic acid was heated under reflux with stirring for 6-8 h. The yellow precipitate was allowed to stand, filtered off, washed with ethanol, dried and recrystallized from the stated solvents. The data of the formed products were confirmed against those reported in the literature [31].
General procedure for synthesis of Cu(I)-thiosemicarbazones 5a-c A mixture of 0.1 mmol of 3a,b and 0.1 mmol of the Cu(I) salts 4a,b in 20 mL CH3CN was stirred at room temperature for 24 h. The formed precipitate was washed with H2O (100 mL) followed by 100 mL EtOH, and the obtained complexes 5a-c were dried well. Physical and analytical data of the 2-quinolyl-thiosemicarbazones and their Cu(I) complexes are given in Tables 1 and 2.
General procedure for synthesis of Ni(II)- and Cu(II)-thiosemicarbazones 7a-h A mixture of 0.01 mmol of 3a-f and 0.01 mmol of the Ni(II) or Cu(II) salts 6a-c in 50 mL CH3OH was refluxed for 6-10 h. The formed precipitate was rinsed with H2O (100 mL) followed by 100 mL EtOH, and the obtained complexes were dried well. The physical and analytical data of the obtained products 7a-h are given in Tables 1 and 2.
Molecular docking study The docking simulation study was carried out using Molecular Operating Environment (MOE®) version 2014.09, Chemical Computing Group Inc., Montreal, Canada. The computational software operated under "Windows XP" installed on an Intel Pentium IV PC with a 1.6 GHz processor and 512 MB memory. The target compounds were constructed into 3D models using the builder interface of the MOE program and docked into the active site of the RdRp complex of SARS-CoV-2 (PDB ID: 6M71). After that, their structures and the formal charges on the atoms were checked by 3D depiction.
Table 2. Physical and analytical data of the 2-quinolyl-thiosemicarbazone complexes of Cu(I) 5a-c, Ni(II) 7a-d and Cu(II) 7e-h.
Results and discussion Scheme 1 outlines the preparation of ligands 3a-f as previously described in the literature [31]. Upon mixing equimolar amounts of 3a,b and the Cu(I) salts 4a,b in CH3CN and stirring for 24 h, metal complexes 5a-c were obtained (Scheme 2). Similarly, on refluxing 3a-f with the Ni(II) and Cu(II) salts 6a-c in CH3OH, the bidentate metal complexes 7a-h (Scheme 3) were obtained. Some physical data and yields in g (%) of complexes 5a-c and 7a-h are shown in Table 1.
Assignment of the complexes 5a-c and 7a-h by elemental analyses and mass spectra Elemental analyses of the obtained metal complexes 5a-c and 7a-h are shown in Table 2. ESI-MS is used to analyze metal species in a variety of samples. Here, we describe an application for identifying metal species by tandem mass spectrometry (ESI-MS/MS) with the release of free metals from the corresponding metal-ligand complexes. This method can be used for identifying different metal-ligand complexes, especially for metal species whose mass spectral peaks are clustered close together. Upon complexation, the bands of the donor groups suffered significant negative shifts, confirming the effective coordination of these groups with the metal center. The positive-mode exact masses and isotope distribution patterns of the ESI spectra, together with the corresponding simulated patterns, are shown for the complexes formed between 3a and CuI (4a) and between 3b and 4a (Figs. 1 and 2). For example, the experimental and simulated positive exact masses and isotope pattern of the metal complex 5a, formed between 3a and 4a, are shown in Fig. 1; the simulated pattern for the molecular formula C11H8ClN4O2SCu agreed exactly with the experimental ESI spectrum of the complex. In the case of 5b, the molecular peak was found as the base peak and appeared at m/z = 403.88 (Fig. 2). In the case of the mass spectrum of metal complex 7c (Fig. 3), the molecular ion peak supported the chemical formula C22H20N8O4S2Ni2 with an exact mass of 639.98.
Assignment by IR spectra The assignments of the IR bands, which are useful for determining the ligands' mode of coordination, are listed in Table 3. Compounds 5a-c and 7a-h showed absorption in the region ν = 3450-3400 cm^-1 for the OH stretching vibration, and two stretching vibration bands at ν = 3356-3260 and 3235-3150 cm^-1 attributed to the symmetric NH2 and NH, respectively. The shifts in the functional-group bands from the ligands to the corresponding complexes supported the chelating process. Coordination occurred via the sulfur of the thione, together with the oxygen of the carbonyl and the nitrogen of the azomethine.
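The exact-mass assignments above are straightforward to verify. The following is a minimal sketch, not the authors' software, that sums standard monoisotopic atomic masses for a proposed formula; the formula shown is the one reported for complex 5a, and the [M+H]+ note is a general ESI consideration rather than a claim about these particular spectra.

# Minimal sketch: monoisotopic (exact) mass of a proposed complex formula,
# as used to check an ESI-MS assignment. Atomic masses are standard
# monoisotopic values (most abundant isotope, in u).
MONOISOTOPIC = {
    "C": 12.000000, "H": 1.007825, "N": 14.003074, "O": 15.994915,
    "S": 31.972071, "Cl": 34.968853, "Cu": 62.929598, "Ni": 57.935343,
}

def exact_mass(formula: dict) -> float:
    """Sum monoisotopic masses over an element -> count mapping."""
    return sum(MONOISOTOPIC[el] * n for el, n in formula.items())

# Proposed formula of complex 5a from the paper: C11H8ClN4O2SCu
m_5a = exact_mass({"C": 11, "H": 8, "Cl": 1, "N": 4, "O": 2, "S": 1, "Cu": 1})
print(f"Calculated exact mass of 5a: {m_5a:.4f} u")  # prints ~357.9353
# Note: ESI peaks may correspond to [M]+ or [M+H]+; for [M+H]+ add the
# proton mass, 1.007276 u.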
The center of coordination was supported by the appearance of a strong bending vibration band, e.g. for 3a at ν = 850 cm^-1 in the spectrum of the thiosemicarbazone, which is mainly due to the C=S bending frequency. This band is shifted to ν = 860 cm^-1 in the spectrum of complex 5a (Table 3, Fig. 4). The increase in wavenumber upon complexation indicates a considerable change in bond order and the formation of a metal-sulfur bond [27]. This was clearly observed in the variation of the functional-group bands corresponding to the nitrogen of the azomethine, the oxygen of the carbonyl and the sulfur of the thione group (Table 3), along with the appearance of a bending vibration band corresponding to the M-N group, e.g. for 5a at ν = 560 cm^-1. The same trend was also noted for ligands 3a-f and their corresponding metal complexes 7a-h (Table 3). For example, the quinolonyl-CO band of ligand 3a appeared at a stretching vibration frequency of ν = 1665 cm^-1, which was shifted to ν = 1678 cm^-1 (Fig. 5), indicating coordination of the carbonyl oxygen. Also, the thione sulfur band of 3a appeared at a bending vibration frequency of ν = 850 cm^-1, which was shifted in 7a to ν = 870 cm^-1, indicating coordination of the sulfur lone pair of the thione group with the metal. Moreover, the azomethine nitrogen band of 3a appeared at a stretching vibration frequency of ν = 1608 cm^-1, which was shifted to ν = 1620 cm^-1 upon complexation into 7a (Fig. 5). In general, the IR spectra indicated coordination of the nitrogen of the azomethine group, the sulfur of the thione and the oxygen of the quinolonyl carbonyl group.
Assignment by NMR spectra Together with the elemental analyses, the mass and IR spectra and the chemical shifts in the NMR spectra confirmed the structures resulting from the complexation process (Table 4). As examples, Fig. 6 shows the remarkable changes in chemical shifts (δ) in the 1H NMR spectra of compounds 3a,b and their complexes 5a-c. Well-resolved NMR spectroscopic data could be recorded for the Cu(I) complexes 5a-c owing to the diamagnetic (d10) character of Cu(I), whereas the paramagnetic Ni(II) and Cu(II) complexes did not show well-resolved signals (Fig. 7). One can conclude that the formed metal complexes could exist either in a cationic-anionic form or in an acidic form that exceeds the proposed structure by one proton. For example, the molecular formula of compound 5a' exceeds that of 5a by a hydrogen proton (Fig. 8). The exact molecular formula of 5a, as an example, was in good agreement with the molecular formula C11H8ClCuN4O2S (Fig. 8).
Table 4. Chemical shifts (δ), including 1H and 13C NMR spectroscopic data, for ligands 3a,b and the Cu(I) complexes 5a-c.
Docking studies A molecular modeling study was performed by docking most of the target metal complexes 5a,b and 7a-f into the RdRp complex of SARS-CoV-2 using Molecular Operating Environment (MOE®) version 2014.09. Complexes of the same metal with a different anion were not studied. The crystal structure of RdRp was obtained from the PDB (PDB ID: 6M71) [32]. Molecular docking is an in silico algorithm used to estimate two main terms: the first is the appropriate pose (orientation and conformation) of the target compounds inside the binding site, in comparison to that of their starting ligands 3a-f, and the second is the docking score (C-docker energy), which provides information about the interactions between the
metal complex and the RNA-dependent RNA polymerase (RdRp) complex, and also gives a fair idea of their potential to act as inhibitors preventing the RNA replication process in cells infected by SARS-CoV-2 and the creation of new virions. In addition, to evaluate the effect of metal complexation on binding to the active site relative to the ligands themselves, the docking scores of the tested compounds are depicted in Table 5 and were used to calculate the inhibition constant (Ki) according to the reported equation [33] (see supplementary data). Typically, high potency is implied by a low Ki value, and it should be in the micromolar range for a molecule to qualify as a hit or lead compound. Compounds 7d and 7e have the lowest Ki values, of 8.06 × 10^-8 and 2.6 × 10^-7 μM, respectively, qualifying them as drug candidates and hence the most potent among the tested compounds.
Table 5. Energy scores (S) for the complexes formed by the optimized structures of the tested complexes 5a,b and 7a-f and their corresponding ligands 3a-f in the active site of the RdRp complex of SARS-CoV-2 (PDB ID: 6M71).
Prior to the molecular docking studies, the receptor protein was prepared for docking by deleting additional water molecules and co-factors, followed by the addition of polar hydrogens and fixation of the computed charges. Table 5 illustrates the binding free energies from the most favorable poses of the target complexes 5a,b and 7a-f. Most of the tested compounds have high binding affinity to the RdRp, as their binding free energy (ΔG) values range from -10.7 to -43.6 kcal/mol, better than their corresponding ligands 3a-f (ΔG = -5.7 to -0.6 kcal/mol). The docking results of the target complexes 5a,b and 7a-f showed better modes of interaction than their corresponding ligands 3a-f; moreover, the complexes of Cu(II) and Ni(II) exhibited better S-scores than those of Cu(I). The 2D and 3D diagrams of the compounds showed crucial binding with ASP623 and ARG553 through the quinoline and the Cu(I)/Cu(II) or Ni functionality. Furthermore, stabilization of the complexes within the active site occurred through two strong hydrogen-bond interactions with the amino acid residues THR556 and LYS621. Compounds 7a, 7b and 7e exhibited potential interactions with all the previously mentioned amino acids, and all of the tested derivatives 5a,b and 7a-f interact with the ARG553 residue (see Fig. 9 for 7e). Most of the tested complexes showed greater interaction than their corresponding ligands, binding the same amino acids with an additional hydrogen-bond interaction with THR556 that is lacking in the ligands. On the other hand, the Cu(I) complexes 5a and 5b lacked hydrogen-bonding interactions with the residues THR556 and LYS137 (see Fig. 9 for 5a). Also, compound 7c kept two hydrogen-bond interactions with ARG553 and LYS621. The corresponding ligands 3a, 3b and 3d-f showed an additional H-bond with the ARG555 residue, which is not observed with the complexes.
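The Ki values above are derived from the docking scores via a reported equation [33] that is not restated in the text. A common choice, shown here as a minimal sketch rather than the authors' exact procedure, is the thermodynamic relation ΔG = RT ln Ki; the example ΔG value is illustrative and not taken from Table 5.

import math

# Minimal sketch: inhibition constant from a binding free energy via
# dG = RT * ln(Ki)  =>  Ki = exp(dG / (R * T)).
# This is the standard thermodynamic relation; the paper's reported
# equation [33] may differ in constants or units.
R = 1.987e-3   # gas constant, kcal/(mol*K)
T = 298.15     # temperature, K

def ki_from_dg(dg_kcal_per_mol: float) -> float:
    """Ki in mol/L for a (negative) binding free energy in kcal/mol."""
    return math.exp(dg_kcal_per_mol / (R * T))

# Illustrative value only (not from Table 5): dG = -9.5 kcal/mol
print(f"Ki = {ki_from_dg(-9.5):.2e} M")  # ~1.1e-07 M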
Prediction of physicochemical properties, pharmacokinetics, and drug-likeness profile in silico Because of unacceptable ADME parameters (absorption, distribution, metabolism and excretion), in addition to the costs of developing a new drug, the design and application of new drugs is complicated. Hence, estimating the pharmacokinetic properties of a new drug is a critical step in the process of drug development and can directly contribute to optimization efforts toward improved analogs [34]. The most promising compounds can now be selected by in silico ADMET screens, reducing the chance of drugs failing in late stages [35]. To achieve a desired in vivo goal, there should be a balance between pharmacodynamic and pharmacokinetic properties. Further information about regimen and drug dose is given by the prediction of brain penetration, volume of distribution, oral bioavailability, and clearance [36]. Many parameters, such as drug solubility (S), partition coefficients, polar surface area (PSA), cell permeability, human intestinal absorption (HIA), and drug-likeness score, have been studied during virtual screening. An orally available drug is selected in agreement with Lipinski's rule if the molecular weight is less than 500, log P is not higher than 5, the number of hydrogen-bond acceptors is not more than 10, and the number of hydrogen-bond donors is not more than 5 [37]. The number of rotatable bonds reflects molecular flexibility, which plays an important role in oral bioavailability; a highly flexible molecule tends to be less orally active. The number of hydrogen-bonding groups has also been suggested as a substitute for the polar surface area (PSA), and the percentage absorption (%ABS) can be estimated from the topological polar surface area (tPSA), to which it is inversely proportional: %ABS = 109 - 0.345 × tPSA. Compounds with a tPSA of less than 140 Å2 and 10 or fewer rotatable bonds exhibit higher oral bioavailability [34].
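The rule-of-five filter and the %ABS estimate described above reduce to a few lines of arithmetic. The sketch below assumes the molecular descriptors (molecular weight, log P, hydrogen-bond counts, tPSA, rotatable bonds) have already been obtained from the prediction software; the example values are purely illustrative and are not taken from the paper's tables.

# Minimal sketch of the drug-likeness checks described above.
# Descriptor values are assumed inputs; the example numbers are illustrative.

def passes_lipinski(mw: float, logp: float, hba: int, hbd: int) -> bool:
    """Lipinski's rule of five: MW < 500, logP <= 5, HBA <= 10, HBD <= 5."""
    return mw < 500 and logp <= 5 and hba <= 10 and hbd <= 5

def percent_abs(tpsa: float) -> float:
    """Percentage absorption from topological polar surface area:
    %ABS = 109 - 0.345 * tPSA (tPSA in A^2)."""
    return 109.0 - 0.345 * tpsa

# Illustrative descriptor values:
mw, logp, hba, hbd, tpsa, rot_bonds = 385.0, 2.8, 7, 3, 95.0, 4
print("Lipinski pass:", passes_lipinski(mw, logp, hba, hbd))   # True
print(f"%ABS = {percent_abs(tpsa):.1f}%")                      # 76.2%
print("Favorable tPSA/flexibility:", tpsa < 140 and rot_bonds <= 10)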
2022-06-10T13:10:10.375Z
2022-06-09T00:00:00.000
{ "year": 2022, "sha1": "b9b069ad176f60a21c5e1c212a6e2af7035fbebb", "oa_license": null, "oa_url": "https://doi.org/10.1016/j.molstruc.2022.133480", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "34d908f40d2e2cda856af688b48c73b8493bfabc", "s2fieldsofstudy": [ "Chemistry", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
76140068
pes2o/s2orc
v3-fos-license
An evaluation of the indirect cohort method to estimate the effectiveness of the pneumococcal polysaccharide vaccine We examined the validity of the indirect cohort method as a rapid assessment tool to estimate pneumococcal polysaccharide vaccine effectiveness (VE). Using evidence from published clinical trials, we reviewed the primary assumption about the appropriateness of the control group underpinning the indirect cohort method, which is that the risk of non-vaccine type invasive pneumococcal disease is equal for vaccinated and unvaccinated participants. We found an absence of evidence to support the non-differential risk assumption for non-vaccine type invasive pneumococcal disease occurring among clinical trial participants. In those instances where the design has been utilised, we also note that the estimates typically rely on very small numbers of non-vaccine type invasive pneumococcal disease, adding to concerns that this is an unreliable comparator. We do not consider the indirect cohort method to be a valid tool for rapid assessment of pneumococcal polysaccharide VE.
Under the indirect cohort method, individuals with invasive pneumococcal disease (IPD), i.e. pneumococci isolated from a normally sterile site, were classified as cases if the disease was known to be due to a serotype contained in the vaccine (VT IPD) and controls if it was caused by a non-vaccine serotype (non-VT IPD). As a modified case-control methodology, the indirect cohort method compares the proportion of case subjects who are vaccinated against the proportion of control subjects who are vaccinated, that is, a comparison of the risk of vaccination between the two groups. It then follows that, for an effective vaccine, the cases of VT IPD should be less likely to be vaccinated than the controls (non-VT IPD). The control group in this instance was intended to represent the population risk of vaccination.
Validity of comparator In order for the control group to be a valid indicator of the population risk of vaccination, the key assumption of the indirect cohort method is that the risk of non-VT IPD is equal for vaccinated and unvaccinated individuals; that is, being vaccinated does not alter an individual's risk of acquiring non-VT disease. If this assumption is valid, and provided there were no systematic differences in the ascertainment of vaccination status, then individuals with non-VT IPD are an attractive choice as controls, as they likely represent the same population from which the cases of VT IPD arose (care-seeking and blood-culture-taking behaviours are also likely to be similar). However, if the assumption of equal risk of non-VT IPD between the vaccinated and non-vaccinated population does not hold, then the estimate of VE will be biased. For example, if the risk of non-VT IPD were differentially higher in vaccinated individuals, as could occur in the presence of serotype replacement, the indirect cohort method would exaggerate VE. In contrast, if the risk of non-VT IPD were lower in vaccinated individuals, as could occur if vaccination afforded any protection against non-VT disease, then the indirect cohort method would underestimate VE.
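To make the estimator itself concrete, the following is a minimal sketch of the Broome-style calculation with made-up counts that do not come from any study discussed here: VE = 1 - OR, where the odds ratio compares the odds of vaccination among VT IPD cases with those among non-VT IPD controls.

# Minimal sketch of the indirect cohort (Broome-style) VE estimate.
# Counts are hypothetical, for illustration only.

def indirect_cohort_ve(vt_vacc, vt_unvacc, nvt_vacc, nvt_unvacc):
    """VE = 1 - OR, where OR compares the odds of vaccination among
    VT IPD cases with the odds of vaccination among non-VT IPD controls."""
    odds_ratio = (vt_vacc / vt_unvacc) / (nvt_vacc / nvt_unvacc)
    return 1.0 - odds_ratio

# Hypothetical example: 30/120 VT cases vaccinated,
# 20/40 non-VT controls vaccinated.
ve = indirect_cohort_ve(vt_vacc=30, vt_unvacc=90, nvt_vacc=20, nvt_unvacc=20)
print(f"VE = {ve:.0%}")  # 1 - (30/90)/(20/20) = 67%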
Original evidence base Although non-VT IPD is the comparator for the indirect cohort method, it was non-VT pneumococcal pneumonia (a much less specific outcome) that was used by Broome et al. [1] as the basis for comparing risk between vaccinated and unvaccinated individuals. The outcome data were from three early trials [2,3], in which presumptive non-vaccine type pneumonia was attributed its causative serotype from a specimen collected from a non-sterile site, where the pneumococcus could have colonised without causing disease. Although a seminal paper of its time, MacLeod et al. [3] diagnosed pneumonia only by clinical signs. In total, the three studies considered by Broome et al. [1] provided evidence of equal risk of presumptive non-VT pneumonia between vaccinated and unvaccinated individuals, but not evidence of equal risk of non-VT IPD. An equal risk of presumptive non-VT pneumonia is perhaps not surprising, given that the most recent meta-analyses also suggest there is evidence of equal risk of all-cause pneumonia between vaccinated and unvaccinated individuals (i.e. no evidence of benefit against pneumonia) [5,6].
Re-assessment of evidence of equal risk of non-VT IPD in vaccinated and unvaccinated individuals We attempted to test the hypothesis that the assumed risk of non-VT IPD was equal among vaccinated and unvaccinated individuals by utilising the data from the RCTs included in the most recent update of the Cochrane systematic review of pneumococcal polysaccharide vaccine in adults [7]. However, only one included study reported data on culture-confirmed non-VT IPD [4], with the remaining studies either not reporting serotype-specific data or having no cases of non-VT IPD [8-10]. The one study contributing data on non-VT IPD was a report of three pooled studies conducted by Austrian et al. [4] amongst South African miners in the 1970s. One study used a 6-valent PPV; the other two studies used a 13-valent PPV. In total, there were 22 cases of non-VT IPD amongst 3943 vaccinated participants, compared with 63 events amongst 8024 unvaccinated participants. The rate of non-VT IPD was 5.6/1000 in vaccinated participants (95% CI 3.5 to 8.4) compared with 7.9/1000 in unvaccinated participants (95% CI 6.0 to 10.0; p = 0.17). Given that only one study was able to contribute outcomes for non-VT IPD, we conclude that there is an absence of evidence to support the primary assumption that the risk of non-VT IPD is equal in the vaccinated and the unvaccinated. Moreover, there were no RCTs of 23vPPV that contributed data on non-VT IPD.
Studies utilising the indirect cohort method A review of studies utilising the indirect cohort method to assess the effectiveness of the 23-valent pneumococcal polysaccharide vaccine indicates an obvious limitation in the use of non-VT IPD as the comparator, namely, small numbers (Table). Few cases of non-VT IPD result in a loss of power to detect an effect of vaccination. Broome et al. [1] had stated that 'the estimate does depend on a similar proportion of vaccine type and non-vaccine type disease in unvaccinated populations', which would have been a realistic assumption at the time that the 14-valent vaccine formulation was available. Although the number of controls is not a component of the formula for vaccine effectiveness under the indirect cohort method, few available subjects within the comparator group limit the power of any individual study and increase the potential risk of bias. Although estimates of vaccine effectiveness derived from studies utilising the indirect cohort method (Table) have been similar to estimates from other case-control studies and from some meta-analyses [6], this does not, of itself, provide a theoretical basis for further use of the method.
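Before concluding, it is worth noting that the rates and confidence intervals quoted above for the Austrian et al. pooled data can be reproduced from the raw counts. The sketch below uses exact (Clopper-Pearson) binomial intervals via scipy as one plausible choice; the original report does not state which interval method was used, so the reproduced bounds may differ slightly from the published ones.

# Sketch: reproduce the non-VT IPD rates per 1000 and 95% CIs from the
# pooled Austrian et al. counts. Clopper-Pearson (exact binomial) intervals
# are one standard choice; the original CI method is not stated.
from scipy.stats import beta

def rate_and_ci(events: int, n: int, alpha: float = 0.05):
    """Rate per 1000 with an exact binomial 95% CI."""
    lo = beta.ppf(alpha / 2, events, n - events + 1) if events > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, events + 1, n - events)
    return 1000 * events / n, 1000 * lo, 1000 * hi

for label, events, n in [("vaccinated", 22, 3943), ("unvaccinated", 63, 8024)]:
    rate, lo, hi = rate_and_ci(events, n)
    print(f"{label}: {rate:.1f}/1000 (95% CI {lo:.1f} to {hi:.1f})")
# Output should be close to the reported 5.6/1000 (3.5 to 8.4) and
# 7.9/1000 (6.0 to 10.0).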
Conclusions Although it was a novel method in its time, we do not support ongoing use of the indirect cohort method for the assessment of vaccine effectiveness. Our findings highlight the lack of evidence to support its methodological basis, namely that the risk of non-VT IPD in vaccinated and unvaccinated individuals is equal. Moreover, the very small numbers of non-VT IPD cases available in most published studies must cast doubt on the reliability of the assumption that the comparison group adequately reflects the population from which cases arise. In our view, these are important limitations of the indirect cohort method as a rapid assessment tool, and we therefore do not advocate its future use.
Table. Studies utilising the indirect cohort method, by total number of IPD cases, number of cases excluded, proportion of known serotypes due to a non-vaccine type, and estimates of vaccine effectiveness. *Reasons for exclusion include no medical records, no vaccination records, or specimen not serotyped.
2019-02-17T17:35:25.860Z
2014-05-01T00:00:00.000
{ "year": 2014, "sha1": "d52ef67d00661035b27c672c6dbc7c52890f9f38", "oa_license": "CCBY", "oa_url": "https://nobleresearch.org/Content/PDF/3/2053-1273.2014-1/2053-1273-2014-1.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "d52ef67d00661035b27c672c6dbc7c52890f9f38", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }