6497825
pes2o/s2orc
v3-fos-license
Local transcriptional control of utrophin expression at the neuromuscular synapse. Recently, the use of a transgenic mouse model system for Duchenne muscular dystrophy has demonstrated the ability of utrophin to functionally replace dystrophin and alleviate the muscle pathology (see Tinsley, J. M., Potter, A. C., Phelps, S. R., Fisher, R., Trickett, J. I., and Davies, K. E. (1996) Nature 384, 349–353). However, there is currently a clear lack of information concerning the regulatory mechanisms presiding over utrophin expression during normal myogenesis and synaptogenesis. Using in situ hybridization, we show that utrophin mRNAs selectively accumulate within the postsynaptic sarcoplasm of adult muscle fibers. In addition, we demonstrate that a 1.3-kilobase fragment of the human utrophin promoter is sufficient to confer synapse-specific expression to a reporter gene. Deletion of 800 base pairs from this promoter fragment reduces the overall expression of the reporter gene and abolishes its synapse-specific expression. Finally, we also show that utrophin is present at the postsynaptic membrane of ectopic synapses induced to form at sites distant from the original neuromuscular junctions. Taken together, these results indicate that nerve-derived factors regulate locally the transcriptional activation of the utrophin gene in skeletal muscle fibers and that myonuclei located in extrasynaptic regions are capable of expressing utrophin upon receiving appropriate neuronal cues. Duchenne muscular dystrophy (DMD) 1 is the most severe and prevalent primary myopathy. The genetic defect responsible for DMD is located on the short arm of the X chromosome and prevents the production of normal size dystrophin, a large cytoskeletal protein of 427 kDa (1,2). In 1989, Love and colleagues (3, see also Ref. 4) showed the existence of a gene on chromosome 6q24 that encodes a cytoskeletal protein, named utrophin, displaying a high degree of sequence identity with dystrophin. In skeletal muscle, the level and localization of utrophin has been shown to vary markedly according to the state of differentiation and innervation of muscle fibers. In embryonic tissue, for instance, utrophin localizes to the sarcolemma along the entire length of developing fibers (5,6). As the muscle matures, the amount of utrophin decreases progressively, and utrophin becomes preferentially localized to the neuromuscular synapse (7,8). An exception to this occurs in muscle fibers from both DMD patients and mdx mice where utrophin persists at the sarcolemma in extrasynaptic regions (9 -11). Under specific conditions, therefore, utrophin presents a more homogeneous distribution along the sarcolemma of adult muscle fibers. Together, these studies therefore suggest that in addition to therapies based on dystrophin gene transfer, up-regulation of utrophin may be envisaged as an alternative strategy to prevent the relentless progression of DMD. In this context, we have recently shown that high expression of a truncated utrophin transgene markedly reduced the dystrophic muscle phenotype in mdx hind limb and diaphragm muscles indicating that systemic up-regulation of utrophin may indeed be an effective treatment for DMD (12). The next step is to decipher the regulatory mechanisms presiding over utrophin expression in attempts to ultimately induce expression of the endogenous gene product throughout skeletal muscle fibers. 
In the present study, we have thus initiated a series of experiments focusing on the molecular mechanisms involved in the restricted expression of utrophin at the neuromuscular synapse. EXPERIMENTAL PROCEDURES Surgery-Ectopic synapses were induced to form on soleus muscles from adult control and mdx mice. An incision was first made at the mid-calf region, and the common peroneal nerve was exposed by blunt dissection. Both branches of this nerve were isolated, cut, and transplanted onto the distal surface of the soleus (13). Fourteen days later, ≈5 mm of the tibial nerve was cut and removed to denervate the muscle and to allow the foreign nerve to form synaptic contacts with soleus muscle fibers. Two weeks after sectioning the tibial nerve, the sciatic nerve was stimulated. Soleus muscles which demonstrated contractile activity in response to electrical stimulation were excised, mounted with Tissue Tek, and frozen. Immunofluorescence-Immunofluorescence experiments were performed on longitudinal serial sections (12 μm) of soleus muscles. The presence of synapsin was monitored using a rabbit anti-synapsin antibody (Alexis Corp., San Diego, CA). Utrophin immunoreactivity was detected using either a rabbit anti-utrophin antibody (from Dr. Tejvir Khurana, Harvard University) or a monoclonal anti-utrophin antibody (from Dr. Glen Morris, N.E. Wales Institute, UK). Synapsin and utrophin antibodies were applied onto separate serial muscle sections for 1 h. In Situ Hybridization-Longitudinal serial cryostat sections (12 μm) of hind limb muscles from control C57BL/6 and mdx mice were placed on alternate Superfrost Plus slides (Fisher Scientific Co., Pittsburgh, PA). Alternate slides were either processed for acetylcholinesterase (AChE) histochemistry (14) to visualize neuromuscular junctions or subjected to in situ hybridization using synthetic oligonucleotides for detection of utrophin transcripts. In situ hybridization was performed using two antisense oligonucleotides complementary to the mouse utrophin mRNA (oligonucleotide 1, 5′-TGTGCCCCTCAGCCACTCTTCCTTCTCCTTGATGGTCTCCTC-3′, and oligonucleotide 2, 5′-TGCTGCCTGGTGGAACTGTGGGCCTGGGTCAGTGTCAAGTG-3′) according to Schalling et al. (15). Analysis of in situ hybridization labeling was performed using an image analysis system equipped with Image 1.47 software (Wayne Rasband, NIMH) (16). The density of in situ hybridization labeling in synaptic versus extrasynaptic regions was determined by measuring the number of labeled pixels within a circular field of constant diameter (100 μm). To determine the difference in utrophin mRNA levels between control and mdx mouse muscles, 1-mm² areas of extrasynaptic regions were sampled. Optical density values were used as a measure of labeling, with higher values indicating greater labeling (17). Twelve muscle sections were processed for each condition, and a minimum of four measurements were performed on each section. Three animals were used for each condition. Expression of Utrophin Promoter-Reporter Gene Constructs-Four human utrophin promoter-reporter gene constructs were used in these experiments: 1.3- and 0.5-kb promoter fragments positioned in either the forward or reverse orientation (see Fig. 1 and Ref. 18). These promoter fragments were inserted upstream of the reporter gene lacZ and a nuclear localization signal (nlsLacZ). Plasmid DNA was prepared using the Qiagen Mega-prep procedure (Chatsworth, CA), and final pellets were resuspended in sterile phosphate-buffered saline to a final concentration of 2 μg/μl. 
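As a rough illustration of the densitometric comparison described above (labeled-pixel counts in fixed 100 μm fields at synapses versus extrasynaptic optical-density samples), here is a minimal analysis sketch; the array names, example values, and the ratio-of-means summary are hypothetical and not taken from the paper.

```python
import numpy as np

def labeling_ratio(synaptic_fields, extrasynaptic_fields):
    """Ratio of mean in situ hybridization labeling: synaptic vs. extrasynaptic.

    Each argument is an array of per-field measurements (e.g. labeled-pixel counts
    within a 100-um-diameter circle, or optical densities), pooled over sections
    and animals as in the paper (12 sections/condition, >=4 fields/section).
    """
    syn = np.asarray(synaptic_fields, dtype=float)
    extra = np.asarray(extrasynaptic_fields, dtype=float)
    return syn.mean() / extra.mean()

# Hypothetical numbers illustrating a roughly 12-fold synaptic enrichment.
synaptic = [118, 95, 130, 102]
extrasynaptic = [9, 11, 8, 10]
print(f"synaptic/extrasynaptic labeling ratio: {labeling_ratio(synaptic, extrasynaptic):.1f}")
```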
For direct gene transfer, 25 μl of DNA solution was injected directly into the tibialis anterior (TA) muscle of 4-week-old mice (19–21). At different time intervals thereafter (7–42 days), TA muscles were excised and quickly frozen for serial cryostat sectioning. Tissue sections were processed histochemically for the demonstration of β-galactosidase and AChE activity. The position of blue myonuclei indicative of utrophin promoter activity was determined and compared with the presence of neuromuscular synapses using the quantitative procedure established by Duclert et al. (21). RESULTS In a first series of experiments, we examined by in situ hybridization the distribution of utrophin mRNAs along muscle fibers from both C57BL/6 and mdx mice. Our results disclosed a selective accumulation of utrophin transcripts within the postsynaptic sarcoplasm (Fig. 2, A and B). In these experiments, utrophin mRNAs were also detected in extrasynaptic regions of muscle fibers, albeit at significantly lower levels in comparison to synaptic sites. As expected, utrophin transcripts were observed in blood vessels and capillaries (Fig. 2C). Control experiments performed with synthetic oligonucleotides corresponding to the sense strand of the mouse utrophin mRNA failed to label subcellular structures within these muscle sections (not shown). Quantitative analyses revealed that of 375 neuromuscular junctions, 313 (≈83%) displayed an accumulation of silver grains corresponding to utrophin transcripts. Densitometric analysis further showed that the levels of utrophin mRNAs confined within the postsynaptic sarcoplasm were approximately 12-fold higher than those observed in extrasynaptic regions (Fig. 3A). In agreement with previous reports showing up-regulation of utrophin in mdx mouse muscle (see, for example, Ref. 22), we also noted that in comparison to control mice, levels of utrophin mRNA were significantly elevated (≈400%) in hind limb muscle fibers from mdx mice (Fig. 3B). However, the ratio of utrophin transcripts in synaptic versus extrasynaptic regions from mdx mouse muscle fibers was similar to that obtained with C57BL/6 mice. To determine whether selective transcription of the utrophin gene accounts for the preferential accumulation of utrophin transcripts within the postsynaptic sarcoplasm, we performed an additional set of experiments in which human utrophin promoter-reporter gene constructs were directly injected into skeletal muscle. Muscles injected with the 1.3-kb utrophin promoter-nlsLacZ construct demonstrated a strong level of expression (Fig. 4). In fact, quantitative analysis revealed that ≈72% of muscles injected with this construct contained myonuclei expressing significant levels of β-galactosidase (Fig. 5A). By contrast, expression of the nlsLacZ construct driven by the 0.5-kb utrophin promoter fragment was markedly reduced, since less than 30% of the injected muscles displayed blue myonuclei. Injections of TA muscles with the construct containing the 1.3-kb human utrophin promoter fragment led to the preferential expression of β-galactosidase in myonuclei located in the vicinity of neuromuscular synapses (Fig. 4). Detailed quantitative analysis showed that in approximately 55% of the cases, the presence of blue myonuclei coincided with synaptic sites identified by AChE histochemistry (Figs. 4 and 5B). Similar patterns of expression were observed at different time intervals following DNA injection. 
Deletion of 800 bp 5′ of this utrophin promoter fragment led to a marked reduction in the percentage of synaptic events (Fig. 5B). These results are nearly identical to those recently reported for the synapse-specific expression of AChR subunit gene promoters (50–55%) and for the non-synapse-specific expression obtained with the muscle creatine kinase promoter (10–12%; Refs. 19–21). In our experiments, injections of constructs containing the utrophin promoter fragments cloned in the reverse orientation failed to induce nlsLacZ expression in TA muscles. Finally, we induced the formation of ectopic synapses at sites distant from the original synaptic regions to: (i) examine the contribution of the nerve in the local accumulation of utrophin at the neuromuscular junction and (ii) determine whether utrophin could be expressed in extrasynaptic regions of adult muscle fibers. In these experiments, we observed numerous newly formed ectopic synapses in all soleus muscles that displayed a functional motor response. In fact, co-distribution between the presence of synapsin immunoreactivity and acetylcholine receptors (AChR) was routinely observed (Fig. 6, A and B). Immunofluorescence experiments performed on both control and mdx mouse soleus muscles using either one of the two utrophin antibodies revealed that utrophin was already present at the postsynaptic membrane of these ectopic synapses (Fig. 6, C and D). DISCUSSION The postsynaptic sarcoplasm of the neuromuscular junction represents a highly differentiated domain within muscle fibers in which numerous organelles accumulate. These include morphologically distinct myonuclei referred to as fundamental by Ranvier (23), a synapse-specific Golgi apparatus (24,25), and a stable array of microtubules (26). Previous studies have also shown the selective accumulation of transcripts encoding the various AChR subunits (27,28) as well as AChE (29,30) in the postsynaptic sarcoplasm of adult muscle fibers. In the present study, we show that accumulations of utrophin mRNAs are detectable at 83% of the neuromuscular junctions. This value is in fact similar to those reported recently for transcripts encoding other synapse-associated proteins (31).
FIG. 4. Expression of utrophin promoter-reporter gene constructs in muscle fibers. A and B show examples of TA muscles injected with plasmids containing the 1.3-kb utrophin promoter fragment and nlsLacZ. Brown precipitates correspond to AChE histochemistry indicating the presence of neuromuscular junctions. Note the co-localization between the presence of β-galactosidase-positive myonuclei and neuromuscular synapses following injections with this utrophin promoter fragment. C and D represent TA muscles injected with plasmids containing the 0.5-kb utrophin promoter fragment. Note that blue myonuclei are observed in extrasynaptic regions of muscle fibers. Bar = 60 μm.
FIG. 5. Expression of utrophin promoter-reporter gene constructs in muscle fibers. A, percentage of TA muscles expressing the construct following injections with plasmids containing either the 1.3- or 0.5-kb utrophin promoter fragment. Note that deletion of 800 bp from the 5′ region of the 1.3-kb fragment reduced the percentage of muscles expressing the reporter gene. B, percentage of synaptic events (see "Experimental Procedures") following injections with the two different constructs. Note that the 1.3-kb utrophin promoter fragment confers preferential synaptic expression to the reporter gene nlsLacZ. 
In attempts to elucidate the mechanisms involved in the preferential accumulation of utrophin mRNAs in synaptic regions of muscle fibers (29,31), we injected various utrophin promoter-reporter gene constructs directly into muscle. Similar to the transcriptional activation of the various AChR subunit genes within the fundamental myonuclei (27,28), we observed that injection of constructs containing the 1.3-kb utrophin promoter resulted in synapse-specific expression of the reporter gene. Deletion of 800 bp 5′ of this promoter fragment abolished synapse-specific expression, indicating therefore that regulatory elements contained within this DNA fragment are necessary for conferring synapse-specific expression. Sequence analysis of the deleted 800-bp fragment revealed the presence of an E box, which is known to bind myogenic transcription factors. Interestingly, this site is the only consensus sequence that has been found common to all AChR promoters to date (28). Although myogenic factors contribute to the activity-dependent regulation of AChR subunit genes in muscle fibers, this binding site is not required for synapse-specific expression of the AChR ε-subunit gene (19). An N box motif constitutes another DNA element which may be involved in the local expression of the utrophin gene within nuclei located in the postsynaptic sarcoplasm (Ref. 20 and Fig. 1). Deletion and mutagenesis experiments have revealed that this DNA element is sufficient to confer synapse-specific expression to the mouse AChR δ- and ε-subunit genes, and that it binds a protein complex from muscle nuclear extracts in gel retardation assays (20,21). This DNA element may thus be responsible for the synapse-specific expression conferred by the 1.3-kb utrophin promoter fragment. Ectopic nerve implants have been used successfully to study the development of the neuromuscular junction in vivo. Using this approach, we observed numerous ectopic synapses in "old" extrasynaptic regions of soleus muscle fibers. Immunofluorescence experiments further showed that utrophin appeared at these newly formed synaptic sites within 2 weeks following induction of ectopic synapses. These results are thus in agreement with previous studies which showed the presence of utrophin at agrin-induced AChR clusters in cultured myotubes (32). More importantly, our results indicate that the utrophin gene may be expressed in extrasynaptic regions of muscle fibers upon receiving appropriate neuronal cues. It appears therefore that nerve-derived factors play a crucial role in dictating the local expression of utrophin gene products. Several nerve-derived factors are known to influence the localization and regulation of AChR. For example, ARIA/heregulin has been shown to markedly influence expression of AChR and, in particular, the expression of the ε-subunit gene (33). Since the pattern of expression of the utrophin gene along muscle fibers is similar to that of the ε-subunit gene (Ref. 21 and this study) and since both genes appear largely insensitive to abolition of neuromuscular activity (34,35), ARIA/heregulin may thus be considered as a plausible candidate involved in the local regulation of utrophin at the synapse. Agrin represents another factor that may also contribute to the regulation of the utrophin gene within the postsynaptic sarcoplasm. 
A recent study has in fact shown that substrate-bound agrin induces a 2- to 3-fold increase in the expression of the AChR ε-subunit gene in cultured myotubes (36), thereby providing support to the notion that agrin is also a transcriptional activator. Since utrophin may be involved in the early steps of synaptogenesis, it is thus possible that agrin stimulates expression of utrophin to ensure the presence of a cytoskeletal scaffold necessary for the assembly and stabilization of postsynaptic membrane domains. Preliminary results obtained in our laboratory indicate that, indeed, both Torpedo and recombinant agrin increase the levels of utrophin mRNA in cultured myotubes. The identification of nerve-derived factors involved in modulating expression of the utrophin gene will provide key information essential for the up-regulation of utrophin as a therapeutic strategy for DMD.
2018-04-03T03:29:52.322Z
1997-03-28T00:00:00.000
{ "year": 1997, "sha1": "3707aa4ca6e059ea0ad401762d5849c11511e17d", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/272/13/8117.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "83c0478cd1d2b10a3ee56714028ea1e5f636d7f1", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
246653724
pes2o/s2orc
v3-fos-license
Self-Consciousness of Appearance in Chinese Patients With Cleft Lip: Validation of the Chinese Derriford Appearance Scale 59 (DAS 59) Instrument Objective To develop a reliable and valid Chinese version of the Derriford Appearance Scale 59 (DAS 59) instrument for assessing the self-consciousness of appearance in Chinese patients with cleft lip. Methods The original DAS 59 instrument was translated into Mandarin, back-translated, and culturally adapted among the Chinese population, following the protocol of the original DAS 59. The validation of the Chinese DAS 59 instrument was estimated on 443 adult participants including 213 subjects with a history of cleft lip with/without palate (CL/P, study group) and 230 normal subjects without facial appearance concern (control group). The reliability was estimated by Cronbach's α coefficient and Guttman's split-half coefficient. Content validity was tested using the Spearman correlation coefficient, while discriminant validity was tested by the Mann–Whitney U test. Results The overall internal consistency of Chinese DAS 59 was excellent; Cronbach's α was 0.951 (α = 0.965 and 0.959 in the study and control groups, respectively). Further, Guttman's split-half coefficient was excellent in the study group (0.935) and control group (0.901). The validity of content was good with an acceptable correlation between all the items and domains. The construct validity through the discrimination was good with a statistically significant difference in most domains between the two groups. Patients with CL/P had more concern about the general self-consciousness and social self-consciousness of appearance. They also showed a good self-concept score. Conclusion The Chinese version of DAS 59 demonstrated acceptable reliability and good construct and discriminant validity. It can be used for the research and assessment of the psychological state and quality of life for Chinese patients with cleft lip as well as other appearance problems and concerns. INTRODUCTION Cleft lip with/without palate (CL/P) is a common congenital developmental malformation of the oral and maxillofacial region, accompanied by severe physical defects and various psychological problems (1). The psychological state of CL/P patients can be different from that of normal people (2). Hence, psychological intervention is also a key part of CL/P sequential therapy, which shows a crucial role in the treatment and quality of life (QoL) of patients with CL/P (3). Physical attractiveness is commonly estimated as a prediction for the individual character and is esteemed independent of other traits. Consequently, physical beauty could affect upon how others see people, with the end goal that alluring people get special treatment during youth and adulthood in most social circumstances. Disfigurement and deformity caused by congenital malformations, diseases, trauma, and burns in addition to their treatments lay apparently various individuals at a social drawback with a danger of heavy trouble and social brokenness (4)(5)(6). Therefore, the psychological aspects of deformity should be estimated well, as regular social communication for those with appearance issues is a foundation of unalleviated stress, anxiety, and pain. Moreover, it can be used with surgical interventions to test variations from baseline to post-operation stage, like the benefit inferred from pre-and post-operative photographs (7). The primary objective of CL/P treatment is to improve appearance and function to get better QoL. 
To decide if CL/P therapy achieves its proposed target, investigations more often depend on estimating and depicting the professionally reported results, for example, the clinical marks of surgical competence or aesthetics dependent on expert perception (8,9). Estimating clinician-reported results is critical to set up the clinical adequacy of treatment and guarantee beneficial practice. Nonetheless, relying on clinician-reported results implies significantly less familiarity with the patient-reported results of repair at the end of cleft treatment pathway (10,11). As of late, there has been a more prominent acknowledgment of the demand concerning the patient's perception in deciding the genuine result of surgery after the treatment of patients with oral cleft (8,12). However, there is still a lack of comprehensive, valid, and reliable tools that could help in understanding the psychological status of patients with CL/P. In the past, the investigation regarding this scope was restricted by the lack of a suitable effect measure to estimate individual fulfillment and health-associated QoL. Later on, various psychometric instruments have been developed to evaluate the psychological influence and alteration of manifestation such as the Appearance Schemas Inventory (13), Body Image Avoidance Questionnaire (14), and Body Dysmorphic Disorder Examination (15); however, most of them were criticized for weak content validity, a limited range of applicability, or restricted psychometric development (7). Most of the aforementioned tools did not develop obviously to estimate the range of symptomatology that is applicable to the wide scope of hardships experienced by individuals living with issues of appearance. Thereby, such measures reported lower response to the nature of the dysfunctions and the seriousness of the misery that the involved individuals experience (7). There is a demand for the establishment and validation of an instrument to estimate patient-reported outcomes in terms of satisfaction, psychosocial prosperity, and health-associated QoL in Chinese patients with cleft lip. Among the patientreported outcome instruments, the CLEFT-Q is an instrument that was developed internationally for children and young adults with CLP (16,17). It includes 12 independently functioning scales that evaluate different concerns in terms of appearance, facial function (speech), and health-related QoL. The Derriford Appearance Scale (DAS) was designed by Carr et al. (7) to measure the psychosocial adjustment in patients with appearance issue. The original DAS had shown sustainable reliability and validity and had been utilized in a wide range of populations, including those who had (no) appearance concern in both general and clinical individuals. Two forms of DAS had been developed, one with 24 items, which is mainly composed for daily application in clinical work and another with 59 items, which is intended for research and deep estimation (18,19). The DAS had also exhibited good psychometric features when translated and approved into different cultural nations (20)(21)(22)(23). Prior to applying any psychometric measure to various cultural group settings, it ought to be translated, approved, and adjusted to regional cultural and social requirements. Thus, the present study was sought to establish a valid and reliable Chinese version of the Derriford Appearance Scale 59 (DAS 59) instrument for assessing the self-consciousness of appearance in Chinese patients with cleft lip. 
Translation of DAS 59 Instrument The Chinese translation of the DAS 59 instrument was conducted according to the protocol mentioned in the original DAS (7). A native Chinese-speaking surgeon, who was additionally familiar with English, translated the DAS 59 tool into Chinese. A board of Chinese mother-tongue specialists was gathered, which involved five surgeons who lived in an English-native country for more than 1 year. The committee talked about some disputable words utilized in the translation. A backward translation of the Chinese DAS 59 tool into English was conducted by an individual with experience in English linguistics who was not aware with the health-related quality of life (HRQoL) inventories. A specialist board-checked the backward-translated English edition and reconciled it again with the first English one. The last Chinese DAS 59 tool was pretested with a suitable group of patients with/without appearance concerns (40 patients with CL/P and 40 normal subjects) to confirm the readability. After a minor adjustment by the board, the final Chinese DAS 59 tool was developed. The Chinese DAS 59 Instrument Similar to the original DAS 59 (7), the Chinese DAS 59 is formulated as a sequence of 59 statements and questions with reply categories in a Likert format from 1 to 5 to assess the frequency of symptoms (1, almost never.... 5, almost always) and levels of distress (1, not at all distressed.... 5, extremely distressed). It is intended for use in subjects aged ≥16 years. An introductory part gathers the features of appearance aspect to which the respondent raises most sensitivity. This is meant as the respondent "feature" in the body of the instrument. The instrument also detects any other aspects of appearance to which the person might also have distresses. A total of 57 items assess the range of psychological discomfort and dysfunction, and 2 items assess physical distress and physical dysfunction. The instrument was developed to be utilized by clinical and scientific experts from relevant fields of plastic surgery, dermatology, clinical psychology, and psychiatry. Simple and concise guidelines are given on how to accomplish the DAS 59, which is composed as a self-report instrument to be finished without others' intervention. The DAS 59 creates six aspects of psychological discomfort and dysfunction (total score and five domain scores) in addition to an aspect of physical discomfort and dysfunction. The five domains are (1) General self-consciousness of appearance (GSC); (2) Social selfconsciousness of appearance (SSC); (3) Sexual and body selfconsciousness of appearance (SBSC); (4) Negative self-concept (NSC); and (5) Facial self-consciousness of appearance (FSC). The greater the scale, the higher the severity of discomfort and dysfunction. Total scale and domain scores are achieved by adding the scores of independent items as indicated by the instructions presented in a guide that supplies the original DAS 59 (24). Sample Size Calculation The required sample for internal consistency of the Cronbach's alpha was computed by utilizing Bonnett's formula (25) with an alpha of 0.05 and a power of 90%, 133 participants would be required. Ethical approval was obtained from the ethical committee of West China Hospital of Stomatology, Sichuan University (No. WCHSIRB-D-2016-084R1), and the guidelines from the Declaration of Helsinki were followed. 
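As a rough illustration of the reliability statistics reported in the Results below, the following sketch computes Cronbach's α and a Guttman-style split-half coefficient from an item-response matrix; the variable names, the example data, and the odd/even item split are assumptions for illustration, since the paper does not state how items were split.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

def guttman_split_half(items):
    """Guttman (lambda-4 style) split-half coefficient using an odd/even item split."""
    items = np.asarray(items, dtype=float)
    half_a = items[:, 0::2].sum(axis=1)
    half_b = items[:, 1::2].sum(axis=1)
    total_var = (half_a + half_b).var(ddof=1)
    return 2 * (1 - (half_a.var(ddof=1) + half_b.var(ddof=1)) / total_var)

# Hypothetical 1-5 Likert responses: 6 respondents x 4 items.
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
    [4, 4, 4, 3],
])
print(f"Cronbach's alpha:   {cronbach_alpha(responses):.3f}")
print(f"Guttman split-half: {guttman_split_half(responses):.3f}")
```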
Study Participants A total of 218 adult patients with a history of CL/P who visited the center of cleft lip and palate, West China Hospital of Stomatology, Sichuan University from March 2018 to October 2019 were enrolled as the study group. A control group of 230 normal adults who did not have appearance-related concerns and were free of any previous history of cosmetic-intended surgical interventions was conveniently enrolled from outpatient clinics in the same area of the study. The Chinese DAS 59 instrument was therefore delivered to a total of 448 participants, who were instructed about the objective of the project and signed an informed consent form. The participants completed the Chinese DAS 59 independently within 15 min under the guidance of an assistant who did not interfere with the privacy of the subjects. Finally, 443 participants completed the instrument, accounting for 98.9% of delivered reports: 213 in the study group (129 males and 84 females) aged between 16 and 64 years (mean age = 22.53 ± 7.87 years) and 230 in the control group (96 males and 134 females) aged between 18 and 65 years (mean age = 28.47 ± 10.42 years). Reliability The Chinese DAS 59 instrument showed excellent reliability and internal consistency, as demonstrated by an excellent total Cronbach's alpha coefficient (0.961, Table 1) and Guttman's split-half coefficient (0.931, Table 2). The five main domains of DAS 59 (GSC, SSC, SBSC, NSC, and FSC) showed good reliability, with total Cronbach's alpha values of 0.923, 0.928, 0.774, 0.885, and 0.723, respectively. The Guttman's split-half coefficient also confirmed the good reliability of these five main domains, as shown in Table 2. Validity The face validity of the Chinese DAS 59 was good, as confirmed by the expert committee and further supported by a review of the literature. The content validity was also good, as revealed by the significant correlation between almost all the items and domains (Table 3). Discriminant validity was demonstrated by the significant differences between the study and control groups (Table 4). Tables 5, 6 show the scores of DAS 59 responses for patients with cleft lip. Overall, patients with cleft lip showed high concern regarding their appearance, particularly in terms of the general self-consciousness of appearance and social self-consciousness of appearance. On the other hand, they showed a good degree of positive self-concept. DISCUSSION More recently, QoL has become a valuable parameter in evaluating the effectiveness of clinical interventions. An evaluation of outcomes relevant to QoL is critical in aesthetic surgery, such as CL/P treatment, since patient satisfaction is the predominant element by which success is defined. Until recently, it was difficult to substantiate such claims in an objective fashion. The past period, nonetheless, has seen a surge of interest in QoL assessment instruments as surrogate measures of the overall benefit of health interventions (27). A range of tools relevant to health-associated QoL is now available; most of them are generic instruments, not explicitly designed for subjects undergoing aesthetic surgery, and they may misjudge the specific impacts of the body modifications resulting from these interventions. More recently, investigations have started to consider the construct of self-perception and its connection to corrective clinical treatments (28). 
Experimental proof reflected by some developing studies proposes that aesthetic patients reveal selfperception disappointment at baseline and improvements in self-perception after the operation. Estimating patient-centered outcomes have become progressively critical in aesthetic and reconstructive surgical procedures. The DAS is a tool developed to precisely and reliably assess the variety of the QoL after cosmetic and reconstructive surgical interventions. The DAS was particularly intended to assess the psychosocial change in individuals who have reported cosmetic issues. This instrument has shown sustainable reliability and validity and had been used in a wide range of people, including those with/without appearance concerns. The present study describes the translation and validation of the Chinese form of the DAS 59 instrument and its application to patients with CL/P. The translation procedure was performed, following the protocol mentioned in the original DAS. The Chinese DAS enrolled a clinical population represented by cleft lip patients and normal population without appearance concerns. The total internal consistency was excellent (α = 0.96) and was quite close to the alpha value reported in the original DAS 59 (7). All the items have demonstrated good correlation, revealing a good homogeneity. Further, domains also exhibited sustainable internal consistency and correlation. The construct validity of the Chinese DAS 59 was tested through discrimination between the involved groups. Discriminant validity was confirmed by significant variations among the CL/P subjects and normal population. In this context, the social self-consciousness of appearance, sexual and bodily self-consciousness of appearance, negative selfconcept, and facial self-consciousness of appearance have good discriminating validity between two groups. Meanwhile, it can be reflected in the patients with CL/P that maxillofacial defects have obvious influence on social life and other aspects of clinical patients. However, in terms of the general selfconsciousness of appearance, and physical influence, there was no obvious variation between the two groups. These findings could be referred to some various demographic characteristics between the two groups such as occupation and education level. Patients with cleft lip showed almost high concerns about their appearance and associated psychosocial aspects as revealed by their self-reported scores in DAS 59 responses. Their concerns were more prominent in the aspect of general self-consciousness of appearance (41.8 ± 14.7), particularly in the items of self-consciousness of "feature, " taking a special interest in others' "features, " avoiding photography, being hurt by others' comments, distress when others make remarks, and distress when others ask about the 'feature'. Moreover, they also gave an apparent concern about their social self-consciousness of appearance (41.1 ± 16), particularly in the items of being misjudged, difficulty making friends, avoiding parties/discos, avoid leaving the house, feel isolated, feel rejected, and distress when going to school/college/work. On the other hand, patients with cleft lip still showed an apparent degree of positive self-concept as shown in their responses of items under the negative self-concept domain. 
On the whole, the current study further confirms that the DAS 59 could be a valuable tool for assessing the effectiveness of cosmetic and reconstructive interventions and, beyond that, for analyzing the motivations underlying the demand for appearance-related interventions. Although the present study included an acceptable sample size, it is still not large enough for a more comprehensive analysis of the DAS 59 instrument. Hence, the present cohort did not allow investigation of the factorial structure of the Chinese DAS 59. In addition, the present study only estimated the validation of DAS 59 in a limited range of clinical samples (CL/P). Therefore, multicenter research that includes a larger sample size with a wider diversity of clinical populations will be necessary to obtain more robust results and a thorough exploration of the different aspects underlying the DAS 59. CONCLUSION A reliable and valid Chinese form of DAS 59 was established to assess the psychological impact in individuals with appearance concerns. Patients with cleft lip showed a high level of concern regarding their appearance, especially in the general and social self-consciousness of appearance domains. Thereby, the Chinese DAS 59 instrument could be useful in assessing and understanding the psychological and psychosocial distress of Chinese patients with cleft lip. The present Chinese DAS 59 instrument could be improved further by additional validation and refinement studies. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s. ETHICS STATEMENT The studies involving human participants were reviewed and approved by the Institutional Ethical Review Board of West China Hospital of Stomatology, Sichuan University (approval no. WCHSIRB-D-2016-084R1). Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.
2022-02-09T14:36:52.065Z
2022-02-09T00:00:00.000
{ "year": 2021, "sha1": "15b7741d7ea6c6a784acf8213ebfe7e2b1b3eb12", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "15b7741d7ea6c6a784acf8213ebfe7e2b1b3eb12", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
111815567
pes2o/s2orc
v3-fos-license
TOWARDS GLOBAL RIVER DISCHARGE ASSESSMENT USING A DISTRIBUTED HYDROLOGICAL MODEL AND GLOBAL DATA SETS To improve global water resource assessment, quantification of the global river discharge while considering the effect of slopes is required. Rainfall patterns need to be transformed into discharge by hydrological models like a distributed hydrological model (DHM). This model is expected to represent the spatial variation in aspects of digital global mapping such as land use, land cover, vegetation and elevation. Particularly, a digital elevation model (DEM) is crucial for tracking the flow direction and defining the river network in a basin. In this study, the performance of a 1-km resolution DHM is compared with that of a 90-m resolution DHM in simulating the discharge of the Meghna River in Bangladesh. The input rainfall was obtained from the Tropical Rainfall Measuring Mission (TRMM). The TRMM raw data was improved using available rain gauges over Bangladesh employing correcting factors. These correction factors were then also extended over India where rain gauge data was not available. In summary, the simulation of river discharge using the 1-km resolution model gave reasonable results even though the condition of the slope was limited. Therefore, the procedure here shows the feasibility of modeling global river discharge using global mapping data set. In future development, after setting thresholds at different control points, the potential flood damage to population centres can be evaluated for sound decision making. INTRODUCTION The assessment of global water resources is becoming even more relevant due to growing population, water demands, and the effects of global warming. The Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC) summarized and projected that the frequency of heavy precipitation events has increased over most land areas, consistent with warming and observed increases of atmospheric water vapour. And, it is very likely that heavy precipitation events will continue to become more frequent [1]. Quantitative information on the global water cycle, as a basis for sound and effective water management, is thus required. River discharge is a crucial state variable that needs to be estimated to provide information for the study of potential water use and water disasters. Efforts to determine river discharge have been reported. For example, it was developed a total runoff integrating pathway (TRIP) network at 1° × 1° resolution [2], and later on improved [3], [4]. However, the effect of slope has not yet been considered in these approaches. This study attempts to route water from upstream relying on digital elevation models using hillslope elements to simulate the hydrological processes. To capture rainfall patterns and simulate river discharge within a basin, a distributed hydrological model is recommended. The model requires spatially distributed input data such as land cover and elevation data. Nowadays, a number of initiatives offer global geographical data but few of them gather under common standards for societal benefits. An example would be the efforts of Global Mapping initiative that develops global scale geographic information through international cooperation. One of its interests is the prediction of global water resources for preventing water-related disasters. A hydrological model would be very useful for assessing and preventing disasters at a basin scale while using global data set. 
A key component of the water cycle is precipitation, but not all basins are well gauged at the surface. The TRMM multi-satellite precipitation analysis (TMPA) carries out estimations from satellite measurements. In most cases a ground validation might be required. Only a few works on this topic in Asia have been reported. For example, it was used 1º  1º boxes over Thailand at monthly and daily scales to evaluate TRMM predicted rainfall against rain gauge measurements [5]. It was found TRMM 3B42 V6 performed better than the 3B43 product when evaluated against gauge measurements. Moreover, high correlation was found for 3 days using the spatial average of 50 boxes. This latter study was limited to evaluation without the proposal of a validation method. Similarly, it was compared the performance of TRMM products against ground-based gauge measurements over Bangladesh [6]. The analysis was made on a point-by-point basis at five selected gauges for the period 1998-2002. It was confirmed that 3B42 V6 performed better than other TRMM products. Moreover, they reported that using spatial averages at a country level reduces biases compared with analysing on a point basis. Again, it would be interesting to include a validation procedure on top of the evaluation. The present study, different from the previous two, proposes correction factors for TRMM products using the available ground measurements at basin scale for hydrological purposes. In Asia, cooperation among 18 countries has been established as the Asian Water Cycle Initiative (AWCI). The objective of the initiative is to promote integrated water resources management by making usable information from the Global Earth Observation System of Systems (GEOSS) to address common water-related problems in Asia. The key aspects of the AWCI are convergence of satellite observations, data integration, open data and source policies, and capacity building. Under this initiative, each member country proposed one demonstration river basin. In central Vietnam, a hydrological model of a small basin was set up to predict floods using forecasted and observed rainfall [7]. TRMM products were used successfully without corrections; however, the results should be confirmed in larger basins with few gauging stations. The present study attempts to use satellite measurements over the Meghna River, GEOSS/AWCI demonstration basin, in Bangladesh to assess river discharge. In summary, the goal of this paper is to contribute to river discharge assessment using available global datasets through an application to a large river basin. Since datasets might show constraint at coarse resolution, a ground validation is proposed. STUDY AREA Bangladesh is a land of rivers. This small country with an area of about 144 000 km 2 happens to be the meeting point of three major rivers: the Jamuna-Brahmaputra, the Padma-Ganges and the Meghna. The majority of the country is characterized by flat terrain, and the small areas of non-flat terrain lie to the northeast and southeast. Owing to this terrain structure, even though the rest of the country experiences prolonged flooding, the population living downstream of the Meghna basin experience flash flooding. Damage due to flash flooding in this region has drawn the attention of researchers and modellers for some time. The Meghna River basin extends between Bangladesh and India. About 35 % of the basin area falls within the Bangladesh borders and the remaining 65% lies within India. 
Specifically, the basin encompasses the Assam and Tripura mountain regions of India down to the agricultural area of Bangladesh, at latitudes of 23–26°N and longitudes of 90–95°E, as seen in Figure 1. Downtown Dhaka is located 50 km southwest of the outlet of the simulation, Bhairab Bazar. The total river system basin area is approximately 64 000 km²; about 62% of the area comprises forested mountainous and hilly terrain. On the other hand, 29% comprises irrigated cropland and pastures. The basin altitude varies from 1 m to 2 688 m with a mean of 362 m a.s.l., and the average basin slope is 1.3%. The main river channel runs northeast-southwest towards Bhairab Bazar. The topography of the basin changes rapidly in northern and eastern areas, as shown in Figure 1 with dark shading. The high runoff in the rainy season creates large amounts of discharge. However, the central part of the basin comprises plains and gentle hills and is highly susceptible to flooding. The Meghalaya mountainous area has recorded high rainfall exceeding 5 500 mm per year. The Meghna River is a main source of irrigation for agriculture and aquaculture, which shift with the rainy season. The year can be divided into April-May as the pre-monsoon season, June-September as the monsoon season and October-November as the post-monsoon season [6]. December-March is the dry season, during which there is only a small amount of rainfall. METHODOLOGY To transform rainfall patterns into river discharge while considering the effect of slope, a physically based distributed hydrological model (DHM) was selected. A brief description of the model, the required input data and the methods undertaken to validate the global dataset are described in this section. Distributed Hydrological Model A DHM was used to simulate the spatially distributed hydrological processes in the study area and the routing of water in the river network system. The DHM employed in this study is a grid-based geomorphology-based hydrological model (GBHM) where the computational unit is a geometrically symmetrical hillslope [8]. This element is viewed as a rectangular inclined plane with a defined length and unit width. The inclination angle is given by the surface slope, and bedrock is assumed to be parallel to the surface. The GBHM solves the governing equations using two models. First, a hillslope model evaluates hydrological processes such as canopy interception, evapo-transpiration, infiltration, surface flow, and subsurface flow, as well as exchanges between groundwater and surface water, using governing equations. Second, the water routing of the river network is determined using a kinematic wave approach. Topography The first step in building the described model is to delineate the modelling area from a digital elevation model (DEM) using a geographic information system (GIS). The DEM is crucial in tracking the flow direction and defining the river network in the basin. Two different sources for the DEM were employed. A 90-m resolution DEM product, produced by the Shuttle Radar Topography Mission (SRTM), was employed; it can be accessed at http://seamless.usgs.gov/. The SRTM is a joint project between the National Aeronautics and Space Administration (NASA) and the National Geospatial-Intelligence Agency. Once both datasets (90-m & 1-km DEM) were downloaded, two different 2700-m DEM models were prepared by aggregating their original elevation resolutions using the TOPOGRID command in Arc/Info version 8.2. 
This command allows the generation of hydrologically correct elevation from the river network (in vector format). This layer can be generated from a finer DEM and/or digital streams. The river network obtained using the 90-m DEM did not show relevant differences from the actual river network. There were some grids with no data, which were corrected using a base layer. This layer was created by averaging surrounding grids and then using the FILL command in Arc/Info. On the other hand, the new DEM obtained using the 1-km DEM was far from representing the actual stream network. Therefore, the digital drainage layer of Bangladesh provided by the Global Mapping dataset was used as input in the TOPOGRID generation. The results of the first attempt and the corrected 1-km DEM are displayed in Figure 2. It can be seen that the main constraint was the flow direction at low elevations, where streams did not converge at Bhairab Bazar. Using the surface hydrologic analysis with Arc/Info, two watersheds were delineated. They were then divided into sub-basins, as shown in Figure 3, using the Pfafstetter scheme [9]. The resolution of the source data is reflected by the morphology of the streams, which are straighter using the 1-km DEM. The total drainage area using the 1-km DEM and 90-m DEM reached 63 277 km² and 64 495 km², respectively. The discrepancy for both watersheds was 1.89%. Other Spatial Data Global land cover characteristics at 1-km resolution were prepared and clipped to the delineated watersheds using GIS tools. The dominant land use is forest with 62% coverage, and 29% corresponds to irrigated cropland and pastures. Additionally, the vegetation raster data from Global Mapping was used to define the percentage of tree coverage for each land use type. [Figure 2 panel labels: first attempt using the 1-km DEM; improved version using the 1-km DEM; outlet at Bhairab Bazar. Figure 3 legend: 1-km DEM based basin; 90-m DEM based basin.] The soil type was determined from the Food and Agriculture Organization global soil maps [10], which include derived soil water parameters associated with each soil unit (available at: http://www.fao.org/AG/agL/agll/dsmw.stm). The parameters used were the saturated hydraulic conductivities, saturated and residual water content, and Van Genuchten's constants. As for land use, the soil unit data were clipped to the delineated watersheds using GIS. Sub-grid parameterization was carried out using the original grid size resolution to consider the heterogeneity of land use and the length of hillslope elements in the 2 700-m grid models. In the case of land use, the percentage of coverage was applied. The total length of a hillslope in a 2 700-m grid was extracted from small hillslope lengths of the DEM following the reported literature [11,12]. The distribution and depth of topsoil play key roles in determining the simulated discharge, since subsurface depths and initially saturated zones are defined by topsoil depth. For instance, forested areas with loose material and gentle slopes are most likely to have greater topsoil depths than other combinations [13]. Precipitation Data Once the drainage area of the basin was defined, observed time series data were prepared. The rainfall amounts recorded by the rain gauge network, indicated with triangles in Fig. 1, were used as input data for the model. The values are daily and the coverage is only over the Bangladeshi area. Besides observed values at the surface, observed satellite values from the TRMM were also used. 
The TRMM is a joint mission between NASA and the Japan Aerospace Exploration Agency to monitor and study tropical rainfall and energy exchange [14]. The satellite has been in orbit for more than 10 years. Efforts to improve estimations of the TRMM multi-satellite precipitation analysis (TMPA) for different applications have been reported [15]. The 3B42 version 6 product, which includes calibration with monthly merged rainfall from the Global Precipitation Climatology Project (GPCP), was used in this study. The temporal resolution is 3 h and the spatial resolution is 0.25°, and it can be downloaded free of charge from http://gdata1.sci.gsfc.nasa.gov/daacbin/G3/gui.cgi?instance_id=TRMM_3-Hourly. This webpage also allows interactive visualization of spatial and temporal sub-sets. Improvement of Satellite Precipitation The gauges located within the modelling domain were selected and given a systematic code. First, the 3-hourly 3B42 V6 product was aggregated into daily values. Second, these daily values were averaged over the same month using TRMM and gauge data. Third, at each gauging station, mean values daily_mean_TRMM and daily_mean_Gauge were obtained for each month. According to the overestimation or underestimation, a correction factor was then defined at each gauge (equation (1)). For example, overestimation of TRMM by 20% would be denoted as 1.2; the correction factor would then be 0.8, and vice versa. Note that equation (1) is only valid when the daily rainfall is greater than zero, to avoid infinite magnitudes. Moreover, the lower and upper boundaries of the correction factor are defined as 0.5 and 1.5, respectively. As a matter of fact, the correction of TRMM values cannot exceed 50% of the values at the surface. After applying equation (1), a different factor is obtained at each gauging station according to the month. However, the DHM requires rainfall amounts at each computing grid. The factors are therefore applied to those grids covered by each gauging station, as defined by Thiessen polygons. This procedure is valid within model grids in the Bangladesh area; however, the factors are extrapolated to the Indian area using measurements from the nearest gauges. The new TRMM values at each grid were then obtained by applying these correction factors (equation (2)). MODEL APPLICATION This study targeted the rainy season running from April to November for the years 2001, 2002, 2003 and 2004, owing to the availability of input data. The situation before and during main floods was targeted. The hydraulic conductivities of the topsoil layers, surface storage and anisotropy were calibrated by running the DHM for the rainy season of 2001. April was used for model initialization and May and June for evaluation. July was then considered for validation using rainfall measurements from observed rain gauges. The calibrating method was trial-and-error reduction of the root mean square difference between simulated and observed river discharges. Within the monsoon season of 2004, the highest flood peak events of mid-September and November were targeted at Zhakiganj (intake of the Upper Meghna) and Bhairab Bazar (outlet of the model), as seen in Figure 4. The 1-km model gave reasonable results even though the condition of the slope was limited. Next, the 1-km DEM based model was run using TRMM raw data and the improved TRMM data. This shows the availability of water resources during the rainy season. 
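The monthly gauge-based adjustment of the 3B42 rainfall described in the Methodology can be sketched roughly as follows. Since the text does not reproduce equations (1) and (2), the ratio form of the correction factor, the clipping to [0.5, 1.5], and all variable names here are assumptions for illustration only.

```python
import numpy as np

def monthly_correction_factor(gauge_daily, trmm_daily, lower=0.5, upper=1.5):
    """Correction factor for one gauge and one month.

    Both inputs are arrays of daily rainfall (mm) for the same month; zero-rainfall
    days are excluded, as stated in the paper, to avoid undefined ratios.
    The ratio-of-means form and the [0.5, 1.5] bounds are assumptions.
    """
    gauge = np.asarray(gauge_daily, float)
    trmm = np.asarray(trmm_daily, float)
    mask = (gauge > 0) & (trmm > 0)
    factor = gauge[mask].mean() / trmm[mask].mean()
    return float(np.clip(factor, lower, upper))

def correct_grid_rainfall(trmm_grid, gauge_factor, thiessen_id):
    """Apply each gauge's monthly factor to the model grids it covers.

    trmm_grid:    2-D array of raw TRMM rainfall for one day.
    gauge_factor: dict mapping gauge id -> correction factor for this month.
    thiessen_id:  2-D integer array of the same shape giving the gauge id of the
                  Thiessen polygon covering each grid cell.
    """
    factors = np.vectorize(gauge_factor.get)(thiessen_id)
    return trmm_grid * factors

# Hypothetical example: TRMM overestimates at this gauge, so its factor is < 1.
factor = monthly_correction_factor([10, 0, 6, 12], [14, 1, 8, 15])
print(f"correction factor: {factor:.2f}")
```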
DISCUSSION & OUTLOOK The river network of the Meghna River, a demonstration basin for GEOSS/AWCI, was obtained using global datasets. The typical limitations of using a 1-km DEM in flat areas were overcome using national drainage data from the Global Mapping project. The delineated area obtained using the 1-km DEM was very close to that obtained using a fine DEM. The simulated discharge obtained using the 1-km DEM within the river network was validated by comparison with the finer-DEM results, even though there were slope constraints. The 3-hourly TRMM 3B42 V6 product was then improved using correction factors for each month within the rainy season. These factors were obtained at each gauging station and then transferred to each grid according to Thiessen polygon areas of influence. The performance of the improved TRMM product was evident when plotting the accumulated discharge. The present river routing model should be improved in future development in order to simulate inundations. In addition, after setting a threshold discharge, the potential flood damage to population centres should be evaluated for flood hazard predictions and warnings. The International Steering Committee for Global Mapping is currently entering its third phase, which involves further upgrades of global maps and the expansion of the country-level dataset. At the time of writing, 164 countries share their national data according to digital standards. It would be desirable for Bolivia to also join the group by sharing its national digital maps, enabling Bolivian rivers to be set up fully from the global dataset, as shown in this study. The obtained results are promising for the simulation of other large river basins of the world. River discharge measurements are needed to assess water resources at basin scale. Particularly in areas where no in situ measurements are available, TRMM precipitation data can be very useful in providing rainfall patterns. An example of such use would be the Predictions in Ungauged Basins (PUB) initiative supported by the International Association of Hydrological Sciences [16]. The information derived from the simulations is crucial for planning water usage and reducing the extent of disasters such as floods and droughts. In particular, the hydrological modelling of large trans-boundary rivers in South America, such as the Amazon and La Plata basins, would be feasible using the presented approach. For example, the La Plata basin covers 3.2 million km² with a very localized rain gauge network, mainly over Brazilian territory, where calibration of TRMM satellite data could be done. The resulting correction factors could then be applied over large areas with less observation coverage, such as in Bolivia and Argentina. ACKNOWLEDGMENTS We would like to thank the Bangladesh Meteorological Department and Ministry of Defence of Bangladesh for providing in situ data for the Meghna River basin.
Lagrangian cobordisms and Legendrian invariants in knot Floer homology

We prove that the LOSS and GRID invariants of Legendrian links in knot Floer homology behave in certain functorial ways with respect to decomposable Lagrangian cobordisms in the symplectization of the standard contact structure on $\mathbb{R}^3$. Our results give new, computable, and effective obstructions to the existence of such cobordisms.

Introduction

Let ξ_std be the standard contact structure on R^3, given by the kernel of the 1-form α_std = dz − y dx. A difficult problem in contact and symplectic geometry, which has attracted a great deal of attention in recent years, is to decide, given two Legendrian links Λ− and Λ+, whether there exists an exact Lagrangian cobordism from Λ− to Λ+ in the symplectization (R_t × R^3, d(e^t α_std)). In the smooth category, any two links are cobordant in R × R^3, and the challenge is to determine the minimum genus among such cobordisms. The opposite is true in the Lagrangian setting, where the existence of an exact Lagrangian cobordism is constrained but its genus is completely determined by the classical Thurston-Bennequin and rotation numbers of the Legendrian links at the ends. Indeed, Chantraine showed in [Cha10] that if L is an exact Lagrangian cobordism from Λ− to Λ+, then

(1)  tb(Λ+) − tb(Λ−) = −χ(L)  and  r(Λ+) = r(Λ−).

An important goal, therefore, is to develop obstructions to the existence of exact Lagrangian cobordisms that are effective, meaning that they can obstruct such cobordisms where smooth topology and the classical invariants do not. In this article, we restrict our attention to decomposable Lagrangian cobordisms, which are those that can be obtained as compositions of elementary cobordisms associated to Legendrian isotopies, pinches, and births (these elementary cobordisms are illustrated in Figure 2). Our main result, described in Sections 1.1-1.3, is that knot Floer homology provides effective obstructions to decomposable Lagrangian cobordisms. Symplectic Field Theory also furnishes various obstructions to exact Lagrangian cobordisms; see [EHK16, CNS16, Pan17, CDGG15, ST13]. One advantage of our knot Floer obstructions is that they are generally much easier to compute than those coming from SFT. Moreover, we show that knot Floer homology obstructs decomposable cobordisms in cases where the SFT invariants do not (and vice versa). As discussed in Section 1.4, there is an existing body of work [BS18a, BS18b, GJ19] showing that knot Floer homology effectively obstructs Lagrangian cobordisms of genus zero in various settings. Ours is the first result that shows that knot Floer homology can effectively obstruct Lagrangian cobordisms of positive genus.

1.1. Obstructions. In [OSzT08], Ozsváth, Szabó, and Thurston used the combinatorial grid diagram formulation of knot Floer homology [MOS09] to define invariants of Legendrian links in (R^3, ξ_std). These so-called GRID invariants assign to such a Legendrian link Λ two elements in the hat flavor of the knot Floer homology of Λ ⊂ −S^3,[1]

λ+(Λ), λ−(Λ) ∈ HFK(−S^3, Λ),[2]

which depend only on the Legendrian isotopy class of Λ. These elements are effective invariants in that they can distinguish Legendrian links that are not isotopic but have the same classical invariants (see [NOT08], for example), and are combinatorially computable. We prove that the GRID invariants are well-behaved under decomposable Lagrangian cobordisms. As explained in Section 1.3, this provides effective obstructions to such cobordisms.

Theorem 1.2. Suppose Λ− and Λ+ are Legendrian links in (R^3, ξ_std) such that, for at least one choice of sign, λ±(Λ−) ≠ 0 while λ±(Λ+) = 0. Then there is no decomposable Lagrangian cobordism from Λ− to Λ+.
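Before turning to the corollaries of Theorem 1.2, it may help to make relation (1) concrete with a small worked computation. This example is not taken from the paper; it only combines (1) with the usual Euler characteristic formula for compact oriented surfaces with boundary.

```latex
% A connected genus-g cobordism between two Legendrian knots has two boundary
% circles, hence \chi(L) = 2 - 2g - 2 = -2g.  Relation (1) then gives
\[
  tb(\Lambda_+) - tb(\Lambda_-) = -\chi(L) = 2g,
  \qquad
  r(\Lambda_+) = r(\Lambda_-).
\]
% So a Lagrangian concordance (g = 0) preserves tb and r, while a genus-one
% cobordism raises tb by exactly 2, consistent with the examples in Section 4.2.
```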
Theorem 1.2 has the following corollaries. Then there is no decomposable Lagrangian filling of Λ. As explained in Section 3.3, this follows from the fact that the GRID invariants are nonzero for the tb = −1 Legendrian unknot. 1 The GRID invariant is defined for knots in S 3 , but a Legendrian knot in (R 3 , ξ std ) can be viewed naturally as a Legendrian in the standard contact structure on S 3 . We follow the conventions of [OSzT08] and view these invariants as living in HFK(S 3 , m(Λ)), which we identify with HFK(−S 3 , Λ). 2 There are also versions of the GRID invariants in the more general minus flavor. Then there is no decomposable Lagrangian cobordism from Λ − to Λ + . This follows immediately from the fact (see Proposition 2.2) that the elements λ + and λ − vanish for positively and negatively stabilized Legendrian links, respectively. In [LOSSz09], Lisca, Ozsváth, Stipsicz, and Szabó used open book decompositions to define a knot Floer invariant of Legendrian knots in any closed contact 3-manifold. For a Legendrian knot Λ ⊂ (R 3 , ξ std ), their so-called LOSS invariant also takes the form of an element Although the LOSS invariant is not algorithmically computable, Baldwin, Vela-Vick, and Vértesi proved in [BVV13] that it agrees with the GRID invariants, for Legendrian knots in (R 3 , ξ std ). More precisely, given such a knot Λ, there are isomorphisms This gives corresponding versions of Theorem 1.2 and its corollaries for the LOSS invariant. 1.2. Proof. Theorem 1.2 follows from a similar result for the tilde version of the GRID invariants. We explain this below after providing a bit of additional background on the construction of the GRID invariants (see Section 2.2 for details). A Legendrian link Λ ⊂ (R 3 , ξ std ) can be represented by a grid diagram G. This grid diagram determines a combinatorially computable, bigraded chain complex whose grid homology agrees with the knot Floer homology of Λ ⊂ −S 3 , GH(G) ∼ = HFK(−S 3 , Λ). There are two canonical cycles in this grid chain complex, representing elements The hat version of the GRID invariants discussed previously are defined by A specialization of this chain complex gives rise to the tilde version of grid homology, which agrees with the tilde flavor of knot Floer homology, and is related to the hat flavor by is the two-dimensional vector space supported in the Maslov-Alexander bigradings indicated by the subscripts; |G| is the grid number of G; and |Λ| is the number of components of Λ. There are two canonical elements in this version of grid homology as well, which we refer to as the tilde version of the GRID invariants. Moreover, there is an injection that sends λ ± (G) to λ ± (G) [NOT08]. In particular, Theorem 1.2 therefore follows immediately from our main technical result below, which states that the tilde versions of the GRID invariants satisfy a weak functoriality under decomposable Lagrangian cobordisms. Theorem 1.5. Suppose Λ − , Λ + are Legendrian links in (R 3 , ξ std ) with grid representatives G − , G + , respectively. Suppose there exists a decomposable Lagrangian cobordism L from Λ − to Λ + . Then there is a homomorphism This map has Maslov-Alexander bidegree where |Λ ± | is the number of components of Λ ± . Recall that a decomposable cobordism L as in the theorem can be described as a composition of elementary cobordisms associated with Legendrian isotopies, pinches, and births. 
To prove Theorem 1.5, we define combinatorially computable maps on the tilde version of grid homology for each of these elementary cobordisms (the maps corresponding to Legendrian isotopies were defined in [OSzT08]), and show that these elementary maps preserve the tilde GRID invariant. We then define Φ L to be the appropriate composition of these elementary maps. In [Juh16,Zem19], Juhász and Zemke independently proved that decorated link cobordisms between pointed links induce well-defined maps on knot Floer homology. (They defined these maps differently, but showed in [JZ19] that their definitions agree for the tilde flavor of HFK.) A grid diagram naturally specifies a pointed link, and the sequence of grid moves corresponding to a decomposition of a Lagrangian cobordism L from Λ − to Λ + into elementary pieces specifies a decorated cobordism between pointed copies of Λ ± . We believe that the map Φ L agrees with the functorial map of Juhász-Zemke associated to this decorated cobordism, but do not prove this here. 1.3. Effectiveness. In Section 4, we give several examples that show that Theorem 1.2 can be used to obstruct decomposable Lagrangian cobordisms where the classical invariants and smooth topology do not. In particular, we prove the following in Section 4.3. Theorem 1.6. For each g ∈ Z ≥0 , there are Legendrian knots Λ − , Λ + ⊂ (R 3 , ξ std ) such that • there is a smooth cobordism of genus g in R × R 3 between Λ − and Λ + , The last item implies that there is no decomposable Lagrangian cobordism from Λ − to Λ + . As alluded to above, Symplectic Field Theory [EGH00] also provides effective obstructions to Lagrangian cobordisms. The most-studied such SFT obstruction comes from the Chekanov-Eliashberg DGA [Che02,Eli98], which assigns to a Legendrian knot Λ ⊂ (R 3 , ξ std ) a differential graded algebra (A Λ , ∂ Λ ), which is an invariant of the Legendrian isotopy class of Λ, up to stable tame isomorphism. This DGA is said to be trivial if it is stable tame isomorphic to a DGA in which the unit is a boundary. Ekholm, Honda, and Kálmán proved in [EHK16] that an exact Lagrangian cobordism from Λ − to Λ + induces a DGA morphism Therefore, if the first DGA is trivial and the second is nontrivial then there cannot exist such a cobordism. It can be difficult to determine whether these DGAs are trivial, meaning that this obstruction can be hard to apply in practice. By contrast, there is a simple algorithm to decide whether the GRID invariants vanish and apply Theorem 1.2. Another advantage of the GRID invariants is that the elements λ + and λ − are preserved by negative and positive Legendrian stabilization, respectively. This implies, for example, that for any pair Λ − , Λ + of Legendrian knots as in Theorem 1.6, the GRID invariants also obstruct the existence of a decomposable Lagrangian cobordism from any negative stabilization of Λ − to any negative stabilization of Λ + . By contrast, the Chekanov-Eliashberg DGA is trivial for stabilized knots, and therefore cannot obstruct such cobordisms. We should point out that there are also examples for which the DGA obstructs decomposable Lagrangian cobordisms where the GRID invariants do not (see Section 4.4). 1.4. Antecedents. As mentioned above, there are a few prior works that use knot Floer homology to obstruct genus zero Lagrangian cobordisms; such cobordisms are called Lagrangian concordances, and are automatically exact. 
In [BS18a], for instance, Baldwin and Sivek defined an invariant of Legendrian knots in arbitrary closed contact 3-manifolds using monopole knot homology, and showed that it satisfies functoriality with respect to Lagrangian concordances in symplectizations of such manifolds. They then proved in [BS18b] that there is an isomorphism between monopole knot homology and knot Floer homology that identifies their Legendrian invariant with the LOSS invariant. This implies that the LOSS invariant is well-behaved with respect to Lagrangian concordances, and, in particular, reproduces Theorem 1.2 for concordances between knots in the symplectization of (R 3 , ξ std ), without the assumption of decomposability. The other notable result in this area is due to Golla and Juhász, who proved in [GJ19] that the LOSS invariant satisfies functoriality with respect to regular Lagrangian concordances in Weinstein cobordisms between closed contact 3-manifolds (which include symplectizations). More precisely, they showed that the functorial map (of Juhász-Zemke) associated to a decorated regular Lagrangian concordance L in a Weinstein cobordism W , sends L(Λ + ) to L(Λ − ), for decorations consisting of two parallel arcs that partition the cylinder into disks. We note that regular cobordisms are exact and, in the symplectization of (R 3 , ξ std ), include decomposable cobordisms [CET19]; in brief, and it is open whether any of these inclusions are proper. The Golla-Juhász result therefore recovers Theorem 1.2 for concordances between knots in the symplectization of (R 3 , ξ std ), with the potentially weaker assumption of regularity. As noted previously, what most differentiates the results in this paper from those in previous works is that ours apply to positive genus Lagrangian cobordisms as well as to concordances. The table below summarizes the different settings in which the various knot Floer obstructions to Lagrangian cobordism are known to hold. 1.5. Organization. Section 2 consists of background material. In Section 3, we prove Theorem 1.5, which, as described above, implies Theorem 1.2. In Section 4, we show via examples that our obstructions are effective, proving Theorem 1.6. 1.6. Acknowledgements. The authors thank Lenny Ng for his insights on the Chekanov-Eliashberg DGA and, in particular, for computing this DGA for the knot m(10 145 ). The authors also thank Caitlin Leverson, Marco Marengon, Lenny Ng, Peter Ozsváth, Yu Pan, Josh Sabloff, Steven Sivek, Zoltán Szabó, and Shea Vela-Vick for helpful conversations. The third author thanks North Carolina State University for their hospitality. Background In this section, we provide some background on Legendrian knots, Lagrangian cobordisms, knot Floer homology, and the GRID invariants. 2.1. Legendrian knots and Lagrangian cobordisms. Let ξ std = ker(α std ) be the standard contact structure on R 3 , where We will primarily study Legendrian links up to Legendrian isotopy (and will frequently blur the distinction between Legendrian links and Legendrian link types). Furthermore, our Legendrian links will generally be oriented but we will often suppress the orientation from the notation. We will typically represent a Legendrian link by its front diagram, which is its projection to the xz-plane, as illustrated in Figure 1. 
Note that a Legendrian link is completely determined by its front diagram; in particular, the crossing information is encoded in the slopes of the strands in the diagram (strands with more negative slope pass over strands with less negative slope). Front diagrams for Legendrian isotopic links are related by a sequence of Legendrian planar isotopies and Legendrian Reidemeister moves, shown in the first three diagrams of Figure 2. Recall that the symplectization of (R 3 , ξ std ) is the symplectic 4-manifold and that an embedded surface L in the symplectization is called Lagrangian if Suppose Λ − , Λ + are two Legendrian links in (R 3 , ξ std ). A Lagrangian cobordism from Λ − to Λ + is an embedded Lagrangian surface L in the symplectization such that for some T > 0. This Lagrangian is said to be exact if there exists a function f : L → R that is constant on the cylindrical ends and satisfies A Lagrangian cobordism of genus zero is called a Lagrangian concordance, and is automatically exact. As mentioned in the introduction, Chaintraine proved in [Cha10] that the existence of a Lagrangian cobordism L from Λ − to Λ + implies that In particular, Lagrangian cobordisms (even concordances [Cha15]) are directed. By work of Bourgeois, Sabloff, and Traynor [BST15], Chantraine [Cha10], Dimitroglou Rizell [Dim16], and Ekholm, Honda, and Kálmán [EHK16], there exists an elementary exact Lagrangian cobordism from Λ − to Λ + whenever Λ + is obtained from Λ − via Legendrian isotopy, a pinch, or a birth, as illustrated in Figure 2. Note that for a pinch, it is Λ − that in fact looks as if it has been obtained from pinching Λ + . Topologically, these elementary cobordisms are annuli, saddles, and cups, respectively. Any composition of elementary cobordisms yields an exact Lagrangian cobordism, and an exact Lagrangian cobordism is called decomposable if it is isotopic through exact Lagrangians to such a composition [Cha12]. As mentioned in the introduction, it is open whether every exact Lagrangian cobordism is decomposable. 2.2. Knot Floer homology and the GRID invariants. We begin by reviewing the grid diagram formulation of knot Floer homology, following the conventions in [OSSz15]. See also [MOS09,MOSzT07]. A grid diagram G is an n × n grid of squares together with sets of markings in the squares such that each row and column of G contains exactly one O marking and one X marking (we omit the subscripts indexing these markings when convienient); n is called the grid number of G. We will think of G as a torus by identifying its top and bottom sides and its left and right sides in the standard way, so that the horizontal grid lines become horizontal circles and the vertical grid lines become vertical circles, as indicated in Figure 4. A grid diagram specifies an oriented link in R 3 , obtained as the union of vertical segments from the Xs to the Os in each column with horizontal segments from the Os to the Xs in each row, such that vertical segments pass over horizontal ones, as shown in Figure 4. Conversely, every oriented link in R 3 can be represented by a grid diagram in this way. Figure 4. A grid diagram G for the right-handed trefoil L, and the corresponding front diagram for a Legendrian representative Λ of m(L), obtained by changing all crossings in the link diagram and rotating 45 degrees clockwise. Suppose G is a grid diagram as above, representing an oriented link L. (The use of L for links will only occur in this subsection, and hence should not cause confusion with Lagrangian cobordisms.) 
Let α = {α_1, ..., α_n} and β = {β_1, ..., β_n} denote the vertical and horizontal circles of G, respectively. The minus flavor of the grid chain complex, (GC−(G), ∂−), is generated by one-to-one correspondences between the vertical and horizontal circles. Equivalently, a generator is a set of n intersection points between these circles such that each intersection point in the set belongs to exactly one α circle and one β circle. Letting S(G) denote the set of generators, GC−(G) is defined to be the free F[U_1, ..., U_n]-module generated by the elements of S(G), where each U_i is a formal variable corresponding to the marking O_i and F is the 2-element field.

Given x, y ∈ S(G), let Rect_G(x, y) be the set of rectangles in G with the following properties. Rect_G(x, y) is empty unless x and y coincide in exactly n − 2 intersection points. An element r ∈ Rect_G(x, y) is an embedded rectangle in the toroidal grid whose edges are arcs contained in the vertical and horizontal circles, and whose four corners are points in x ∪ y. Moreover, we require that, with respect to the induced orientation on ∂r, every vertical edge of r is directed from a point in y to a point in x, and vice versa for horizontal edges. (The astute reader may have noticed that this does not seem to line up with the usual convention in Lagrangian Floer homology, but we are in fact computing Heegaard Floer homology for (−T^2, α, β).) If Rect_G(x, y) is non-empty then it contains exactly two rectangles, as illustrated in Figure 5. The differential ∂− counts those rectangles that are empty, in the sense that their interiors contain no points of x, and that avoid the X markings: the coefficient of y in ∂−(x) is the sum over empty rectangles r ∈ Rect°_G(x, y) with r ∩ X = ∅ of the monomial U_1^{O_1(r)} · · · U_n^{O_n(r)}, where O_i(r) denotes the number of times the marking O_i appears in r.

This complex is equipped with two gradings, the Maslov grading and the Alexander grading, defined as follows. Consider the partial ordering on points in R^2 given by (p_1, p_2) < (q_1, q_2) if p_1 < q_1 and p_2 < q_2. Given two sets P and Q consisting of finitely many points in R^2, let I(P, Q) denote the number of pairs (p, q) ∈ P × Q with p < q. We symmetrize this quantity by defining

J(P, Q) = (I(P, Q) + I(Q, P)) / 2.

A generator x ∈ S(G) can be viewed as a finite set of points in R^2, as can the marking sets X and O. It therefore makes sense to define J(x, x), J(x, O), J(x, X), and so on. The Maslov and Alexander gradings of a generator x are then given by

M(x) = J(x, x) − 2 J(x, O) + J(O, O) + 1,
A(x) = (M_O(x) − M_X(x)) / 2 − (n − |L|) / 2,

where M_O(x) = M(x) is the Maslov grading just defined, M_X(x) is the analogous quantity computed with X in place of O, and |L| is the number of components of the link L. It follows that for x, y ∈ S(G) and r ∈ Rect_G(x, y), the relative Maslov and Alexander gradings of these generators can be computed from the local multiplicities of r at the markings. These gradings are extended to gradings on the complex GC−(G) by the rule that multiplication by any of the U_i lowers the Maslov grading by 2 and the Alexander grading by 1. Note that the differential ∂−_G lowers the Maslov grading by 1 and preserves the Alexander grading. Setting all of the U_i to zero yields the tilde version of the grid complex, whose homology agrees with the tilde version of knot Floer homology and is related on homology to the hat flavor by tensoring with copies of a two-dimensional bigraded vector space, as discussed above [NOT08].

Suppose G is a grid diagram representing L. By changing all crossings in the associated link diagram, rotating 45 degrees clockwise, smoothing the top and bottom pointing corners, and turning the left and right pointing corners into cusps, we obtain a front diagram for a Legendrian representative Λ of m(L), as indicated in Figure 4. We say that a grid diagram G represents a Legendrian link Λ if the front diagram obtained from G in the manner above is isotopic to the front diagram for Λ. Every Legendrian link in (R^3, ξ_std) can be represented by a grid diagram in this way. Suppose G represents the smooth link L and a Legendrian representative Λ of m(L) as above.
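The planar counts above are simple enough to compute directly. The following sketch is not from the paper: it implements I, J, and the Maslov and Alexander gradings using the conventions of [OSSz15] as reconstructed above, and the grid-number-two unknot diagram at the end is a hypothetical example chosen only because it is the smallest possible.

```python
from itertools import permutations

def I(P, Q):
    """Number of pairs (p, q) in P x Q with p strictly smaller in both coordinates."""
    return sum(1 for (p1, p2) in P for (q1, q2) in Q if p1 < q1 and p2 < q2)

def J(P, Q):
    return (I(P, Q) + I(Q, P)) / 2

def maslov(x, markings):
    # M(x) = J(x, x) - 2 J(x, markings) + J(markings, markings) + 1
    return J(x, x) - 2 * J(x, markings) + J(markings, markings) + 1

def gradings(x, O, X, num_components=1):
    """x: the n points (i, sigma(i)) of a generator; O, X: marking centers (i + 1/2, j + 1/2)."""
    n = len(x)
    m_O, m_X = maslov(x, O), maslov(x, X)
    alexander = (m_O - m_X) / 2 - (n - num_components) / 2
    return m_O, alexander

# Grid-number-two unknot: O markings on one diagonal, X markings on the other.
O = [(0.5, 0.5), (1.5, 1.5)]
X = [(0.5, 1.5), (1.5, 0.5)]
for sigma in permutations(range(2)):
    x = [(i, sigma[i]) for i in range(2)]
    print(sigma, gradings(x, O, X))
# The two generators land in bidegrees (0, 0) and (-1, -1), consistent with the
# relation between the tilde and hat flavors for the unknot.
```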
As shown in [OSzT08], there are two canonical cycles consisting of the intersection points to the immediate upper right and lower left, respectively, of the markings in X. These two generators give rise to cycles in the hat and tilde complexes as well, which we denote in the same way. The Maslov and Alexander gradings of these cycles are given by where |Λ| is the number of components of Λ. In particular, the gradings of the two generators recover tb(Λ) and r (Λ). The hat and tilde versions of the GRID invariants of the Legendrian link Λ are then defined [OSzT08] by and λ ± (G) := [x ± (G)] ∈ GH(G). In particular, λ ± (G) = j * ( λ ± (G)), which implies that Ozsváth, Szabó, and Thurston proved that λ ± (G) are invariants of the Legendrian isotopy class of Λ. Specifically, if G 0 and G 1 are grid diagrams representing Legendrian isotopic links then there is an isomorphism [OSzT08, Theorem 1.1] GH(G 0 ) → GH(G 1 ) of Maslov-Alexander bidegree (0, 0) that sends λ ± (G 0 ) to λ ± (G 1 ). This map is defined combinatorially, in terms of chain maps on the grid complex associated to grid diagram versions of the Legendrian Reidemeister moves. Their argument also gives rise to the following statement for the tilde flavor of the GRID invariants. Note that the homomorphism above may not be an isomorphism. The GRID invariants also behave as follows under stabilization [OSzT08, Theorem 1.3]. Proofs of main results To define the map Φ L in Theorem 1.5 associated to a decomposable Lagrangian cobordism L, we first define maps associated to Legendrian isotopies, pinches, and births, as discussed in the introduction. The maps associated to Legendrian isotopies were defined previously by Ozsváth, Szabó, and Thurston in [OSzT08], and are described in Proposition 2.1, so we will restrict our attention below to the maps associated to pinches and births. Proposition 3.1. Suppose Λ + is obtained from Λ − via a pinch move. For any grid diagrams G + and G − representing Λ + and Λ − , respectively, there is a homomorphism that sends λ ± (G + ) to λ ± (G − ). This map has Maslov-Alexander bidegree where |Λ ± | is the number of components of Λ ± . Proof. By Proposition 2.1, it suffices to show that there exist some grid diagrams G + and G − representing links Legendrian isotopic to Λ + and Λ − , respectively, for which the conclusions of Proposition 3.1 hold. For this, note that there are grid diagrams G ± representing Λ ± that are identical except for the positions of two markings in adjacent rows, as shown in Figure 6. Since L is an oriented cobordism, these two special markings must either both be Xs, which we refer to as Case I, or both be Os, which we refer to as Case II. Figure 6. The grid diagrams for G ± corresponding to a pinch move; Case I on the top, Case II on the bottom. We may combine the grid diagrams G − and G + into a single toroidal diagram, as shown in Figure 7, which we will refer to as the combined diagram. From this perspective, the markings in X, O are fixed and G − and G + differ in a single horizontal circle. We denote these differing horizontal circles by β and γ, as shown in Figure 7. Let a and b be the intersection points of β with γ shown in the figure. Below, we define the map Φ for each of Cases I and II. Figure 7. The grid diagrams G ± combined; Case I on the left, Case II on the right. 3.1.1. Case I. For x ∈ S(G + ) and y ∈ S(G − ), let Pent(x, y) be the space of pentagons in the combined diagram with the following properties. 
Pent(x, y) is empty unless x and y coincide in exactly n−2 intersection points, where n is the grid number of G ± . An element p ∈ Pent(x, y) is an embedded pentagon in the toroidal diagram whose edges are arcs contained in the vertical and horizontal circles, and whose five corners are points in x ∪ y ∪ {a}. We require that, with respect to the induced orientation on ∂p, the boundary of this pentagon may be traversed as follows: start at the point in x on β and proceed along an arc of β until arriving at a; next, proceed along an arc of γ until arriving at a point in y; next, follow an arc of a vertical circle until arriving at a point in x; next, proceed along an arc of a horizontal circle until arriving at a point in y; finally, follow an arc of a vertical circle back to the initial point in x. See Figure 8 for such pentagons. Let Pent o (x, y) be the subset consisting of p ∈ Pent(x, y) with Proof. To show that φ is a chain map, we must prove the equality of coefficients, for every pair of generators x ∈ S(G + ) and y ∈ S(G − ). The coefficients on the left and right count concatenations of rectangles and pentagons from x to y of the forms p * r and r * p, respectively, where p is a pentagon of the sort used to define φ, and r is a rectangle of the sort used to define the differentials. Every domain in the combined diagram that decomposes as the juxtaposition of a rectangle and pentagon in this way admits exactly one other such decomposition, exactly as in the proof of commutation invariance for grid homology [MOSzT07, Lemma 3.1]. In particular, the concatenations of pentagons and rectangles contributing to the coefficients above cancel in pairs, proving the lemma. Proof. There is a unique pentagon contributing to each of φ(x + (G + )) and φ(x − (G + )), shown in Figure 8, that certifies that Figure 8. Left, the pentagon certifying that the map φ sends x + (G + ) in black to x + (G − ) in white. Right, the pentagon from Lemma 3.4. φ is homogeneous of Maslov-Alexander bidegree Proof. This is a straightforward calculation from the definitions of the Maslov and Alexander gradings in (4) and (5), and the map φ; see the proofs of [OSSz15, Lemma 5.3.1] and [Won17, Lemma 6.6], for example. 3.1.2. Case II. For x ∈ S(G + ) and y ∈ S(G − ), let Tri(x, y) be the space of triangles in the combined diagram with the following properties. Tri(x, y) is empty unless x and y coincide in exactly n − 1 intersection points. An element p ∈ Tri(x, y) is an embedded triangle in the torus whose edges are arcs contained in the vertical and horizontal circles, and whose three corners are points in x ∪ y ∪ {b}. We require that, with respect to the induced orientation on ∂p, the boundary of this triangle may be traversed as follows: start at the point in x on β and proceed along an arc of β until arriving at b; next, proceed along an arc of γ until arriving at a point in y; finally, follow an arc of a vertical circle back to the initial point in x. See Figure 9 for such triangles. Note that all such triangles automatically satisfy be the linear map defined on generators by counting such triangles, Lemma 3.6. φ is a chain map. Proof. This follows from an argument identical to that in the proof of Lemma 3.2, except that here we consider canceling concatenations of rectangles with triangles rather than pentagons. See the proof of [Won17, Lemma 3.4] for details in this case. Proof. There is a unique triangle contributing to each of φ(x + (G + )) and φ(x − (G + )), shown in Figure 9, that certifies that Figure 9. 
Left, the triangle certifying that the map φ sends Lemma 3.8. φ is homogeneous of Maslov-Alexander bidegree Proof. As with Lemma 3.4, this is a straightforward calculation from the definitions of these gradings in (4) and (5), and the map φ; see the proof of [Won17, Lemma 6.6] for details. The map Φ induced by φ therefore satisfies the conclusions of Proposition 3.1. Proof. By Proposition 2.1, it suffices to show that there exist some grid diagrams G + and G − representing links Legendrian isotopic to Λ + and Λ − , respectively, for which the conclusions of Proposition 3.9 hold. For this, let G − be any grid diagram representing Λ − , with marking sets X, O. Fix a marking X 1 ∈ X. Let G + be the grid diagram obtained from G − by inserting two rows and two columns to the immediate bottom right of X 1 , with four new markings X 2 , X 3 , O 2 , O 3 , as shown in Figure 10. Let a and b be the intersection points between the new vertical and horizontal circles indicated in the figure. Note that G + represents the disjoint union of Λ − with the tb = −1 Legendrian unknot, which is Legendrian isotopic to Λ + . Figure 10. Left, part of a grid diagram G − for Λ − . Right, the corresponding part of the grid diagram G + for Λ + . The generating set S(G + ) can be expressed as a disjoint union, This induces a decomposition of the vector space GC(G + ) as a direct sum, where these summands are the vector spaces generated by the corresponding subsets of S(G + ). Note that we have a sequence of subcomplexes, This follows immediately from the observation that any rectangle either starting at b or terminating at a must pass through one of the new markings or X 1 (and therefore does not contribute to the differential). Let ( AB, ∂ AB ) be the quotient complex of AB ⊕ NB by NB. In other words, for x, y ∈ AB, the coefficient counts the number of rectangles in Rect o G + (x, y), as usual. Note that there is a bijection between generators in AB and generators in S(G − ), given by This bijection extends linearly to an isomorphism of chain complexes, since for x, y ∈ AB there is also a natural bijection which identifies rectangles avoiding the O and X markings in G + with rectangles avoiding the O and X markings in G − . Moreover, it follows readily from (6) and (7), together with this bijection of rectangles, that e is homogeneous with respect to the Maslov-Alexander bigrading. For x ∈ NB and y ∈ AB, let be the subset consisting of rectangles p satisfying Let ψ : NB → AB be the linear map defined on generators by counting such rectangles, y. Let Π : GC(G + ) → NB be projection onto the summand NB, and define to be the linear map given as the composition Lemma 3.10. φ is a chain map. Proof. Since e is a chain map, it suffices to prove that ψ • Π is a chain map. Note that both vanish for generators x / ∈ AB ∪ NB. The first vanishes on such generators x because Π(x) = 0. The second vanishes on such generators x because Indeed, for every y ∈ NB, the coefficient ∂ G + (x), y = 0 since every rectangle from a generator not containing b to a generator containing b must pass through a marking. To prove that φ is a chain map, it therefore suffices to prove the equality of coefficients , y for every pair of generators x ∈ AB ∪ NB and y ∈ AB. These coefficients on the left and right count concatenations of rectangles from x to y of the forms p * r and r * p, respectively, where p is a rectangle of the sort used to define ψ, and r is a rectangle of the sort used to define the differentials. 
For x ∈ NB, every domain in G+ that decomposes as a juxtaposition of the form p * r admits exactly one other decomposition into rectangles p′ and r′, of the form r′ * p′, exactly as in the proof that the grid differential squares to zero [MOSzT07, Proposition 2.10]. There are additional domains that decompose as a juxtaposition of the form r * p, but these cancel in pairs as well. In particular, the concatenations of rectangles contributing to the coefficients above cancel in pairs, proving the lemma in this case. For x ∈ AB, the first coefficient is zero since Π(x) = 0, and there are exactly two concatenations of the form r * p contributing to the second coefficient. These two cancelling concatenations correspond to vertical and horizontal annular domains of width 2, as shown in Figure 11.

Figure 11. The vertical and horizontal annular domains corresponding to the two cancelling concatenations of the form r * p, for x ∈ AB shown in black and y ∈ AB shown in white.

There is a unique rectangle contributing to each of ψ(x+(G+)) and ψ(x−(G+)), as shown in Figure 12. It is then clear from the figure that e(ψ(x±(G+))) = x±(G−). Proof. It is clear from the definition of ψ, together with (6) and (7), that ψ is homogeneous. Since e and Π are also homogeneous, the same is true of φ. The bidegree of φ is then determined by the fact that this map sends x+(G+) to x+(G−), and the lemma follows immediately from (9) and (10), together with the facts that tb(Λ−) = tb(Λ+) + 1 and r(Λ−) = r(Λ+). The map Φ induced by φ therefore satisfies the conclusions of Proposition 3.9.

Proof of Theorem 1.2. As explained in the introduction, this theorem follows from Theorem 1.5 and the fact that the hat invariant λ±(G) vanishes if and only if the tilde invariant does, for any grid diagram G, as in (11).

Proof of Corollary 1.3. A decomposable Lagrangian filling of Λ is a decomposable Lagrangian cobordism from the empty link to Λ, and can thus be described as a composition of elementary cobordisms starting with a birth. The rest of the filling is therefore a decomposable Lagrangian cobordism from the tb = −1 Legendrian unknot Λ_U to Λ. The corollary then follows from Theorem 1.2 combined with the fact that λ±(Λ_U) ≠ 0.

Proof of Corollary 1.4. This follows immediately from Theorem 1.2 and the fact that λ+ and λ− vanish for positive and negative Legendrian stabilizations, respectively, as in Proposition 2.2.

4. Examples

In this section, we illustrate the effectiveness of Theorem 1.2 via examples, proving Theorem 1.6 along the way. In the first examples, Λ0 and Λ1 denote the Legendrian representatives, studied in [NOT08], of one of the knots m(10_132) or m(12n_200); these representatives have the same classical invariants, but their GRID invariants behave differently. It follows that Λ0 and Λ1 are not Legendrian isotopic despite having the same classical invariants, by [OSzT08]. These are among the smallest crossing examples known that demonstrate the effectiveness of the GRID invariants in obstructing Legendrian isotopy. Ng, Ozsváth, and Thurston further observed, using an argument by Ng and Traynor from the proof of [NT04, Proposition 5.9], that these Legendrians are orientation reversals of one another. Combining (13) and (14) with Theorem 1.2, we obtain the following.

Proposition 4.1. There is no decomposable Lagrangian concordance from Λ0 to Λ1 or from Λ1 to Λ0.

Proof. The first is obstructed by λ−, the second by λ+.

Note that the Thurston-Bennequin and rotation numbers do not obstruct the existence of decomposable Lagrangian concordances between Λ0 and Λ1 via (3).

Remark 4.2. One of the two directions in Proposition 4.1 is proven by Baldwin and Sivek [BS18b] and independently by Golla and Juhász in [GJ19, Proposition 1.7]. In particular, they use the equivalence between λ+ and L to obstruct a decomposable Lagrangian concordance from Λ1 to Λ0.

4.2. Examples of genus one.
Let K denote one of m(10 132 ) or m(12n 200 ), and let Λ 0 and Λ 1 be the Legendrian representatives of K discussed above. Front diagrams for these Legendrians are given in [NOT08, Figures 2 and 3]. By modifying these front diagrams for Λ 0 and Λ 1 first by a Legendrian Reidemeister I move, and then by adding a positive clasp, as in Figure 13, we obtain new Legendrian knots Λ 0 and Λ 1 , respectively, whose front diagrams are shown in Figure 14. These two Legendrian knots belong to the smooth knot type (The knot types were found using the program Knotscape by Hoste and Thistlethwaite [HT99].) Figure 13. A local modification of the front diagram of Λ i . The first move is a Legendrian Reidemeister I move; the second introduces a positive clasp. Observe that there is a smooth cobordism of genus one between K and K since the latter is obtained from the former via the addition of a positive clasp. Moreover, it is easy to see that the local modification in Figure 13 increases the Thurston-Bennequin number by 2 and preserves the rotation number. Combined with (12), this implies that for any i, j ∈ {0, 1}, tb(Λ i ) = tb(Λ j ) + 2 and r (Λ i ) = r (Λ j ). In particular, the classical invariants and smooth topology do not obstruct the existence of a decomposable genus one Lagrangian cobordism from Λ 0 to Λ 1 or from Λ 1 to Λ 0 . These calculations, combined with Theorem 1.2, lead immediately to the following. Proposition 4.3. There is no decomposable Lagrangian cobordism from Λ 0 to Λ 1 or from Λ 1 to Λ 0 . Proof. The first is obstructed by λ − , the second by λ + . Fix any positive crossing in the front diagram for Λ 0 . Let Λ 0 be the Legendrian knot representing the smooth knot type K obtained from Λ 0 by first performing a Legendrian Reidemeister Figure 14. Top, two Legendrian representatives of m(12n 199 ). Bottom, two Legendrian representatives of m(14n 5047 ). In each case, Λ 0 is shown on the left and Λ 1 on the right. Note that Λ 0 and Λ 1 differ only in the dashed boxes. We have also included the XO coordinates of the corresponding grid diagrams. With this, we may now prove Theorem 1.6. ... Figure 15. A modification of the front diagram of Λ 0 near a positive crossing. The first move is a Legendrian Reidemeister I move; the second introduces m positive clasps. The first is a Legendrian representative of K, the second of K , and there is a smooth genus g cobordism from K to K since K is obtained from K by adding g clasps, fulfilling the first bullet point of the theorem. The second bullet point is fulfilled by (18). 4.4. DGA versus GRID. We assume below that the reader is familiar with the Chekanov-Eliashberg DGA; for a survey, see [EN18]. As mentioned in the introduction, an exact Lagrangian cobordism from Λ − to Λ + induces a DGA morphism [EHK16] (A Λ + , ∂ Λ + ) → (A Λ − , ∂ Λ − ), so if the first DGA is trivial while the second is not then there cannot be such a cobordism. The DGA is trivial for stabilized Legendrians [Che02], so, as noted in Section 1.3, this functoriality cannot obstruct decomposable Lagrangian cobordisms between stabilizations of the examples in Sections 4.1-4.3, while the GRID invariants can. We mentioned in Section 1.3 that there are also cases in which the DGA obstruction applies where the GRID obstruction does not. For example, there is a Legendrian representative Λ − of the figure eight with tb(Λ − ) = −3 and r (Λ − ) = 0, such that (A Λ − , ∂ Λ − ) admits an augmentation, and is therefore nontrivial (see e.g. [CN13]). 
This example was pointed out to the authors by Steven Sivek. Now, let Λ + be the Legendrian representative of the right-handed trefoil with tb(Λ + ) = −1 and r (Λ + ) = 0, obtained by stabilizing the tb = 1 representative twice, once with each sign, so that (A Λ + , ∂ Λ + ) is trivial. These DGAs therefore obstruct an exact Lagrangian cobordism from Λ − to Λ + . There is a smooth genus one cobordism from the figure eight to the right-handed trefoil, as indicated in Figure 16, so the classical invariants and smooth topology do not obstruct such a cobordism. On the other hand, the figure eight has trivial Heegaard Floer tau-invariant, so that tb(Λ − ) + |r (Λ − )| < 2τ (Λ − ) − 1. Since the figure eight is also thin, this implies that the GRID invariants of Λ − vanish [NOT08, Proposition 3.4], and therefore do not obstruct a decomposable Lagrangian cobordism from Λ − to Λ + via Theorem 1.2. = Figure 16. Two oriented band moves certifying the existence of a genus one cobordism between the figure eight knot and the right-handed trefoil. Finally, in the interest of completeness, we remark that the DGAs of the Legendrian representatives of m(10 132 ), m(12n 200 ), m(10 145 ), m(10 161 ), and 12n 591 , which served as examples of Λ − in Sections 4.1-4.3 admit no augmentations and therefore no linearized Legendrian contact homologies. Indeed, Rutherford showed in [Rut06] that the Kauffman polynomial bound on tb(Λ) is sharp if and only if (A Λ , ∂ Λ ) admits an augmentation, and one can check on Knot-Info [CL] that the Kauffman bounds are not sharp for the Legendrians above. This does not completely rule out the possibility that their DGAs are nontrivial (and could therefore perhaps obstruct the Lagrangian cobordisms we are considering), but it eliminates the most tractable approach to proving nontriviality. Pan proved in [Pan17, Theorem 1.6] that if there exists an exact Lagrangian cobordism (with Maslov number 0) from Λ − to Λ + , then the number of graded augmentations of Λ − up to a certain equivalence is less than or equal to the number of graded augmentations of Λ + up to equivalence. The fact that the Legendrians above admit no augmentations at all also rules out the possibility of applying Pan's more refined obstruction in these examples.
Diagnosis and Management of Graves' Disease in Thailand: A Survey of Current Practice

Background: The data on clinical practice patterns in the evaluation and management of Graves' disease (GD) are limited in Asia. The aims of this survey were to report the current practices in the management of GD in Thailand and to examine any international differences in the management of GD. Methods: Members of the Endocrine Society of Thailand who were board certified in endocrinology (N = 392) were invited to participate in an electronic survey on the management of GD using the same index case and questionnaire as in previous North American and European surveys. Results: One hundred and twenty responses (30.6%) from members were included. TSH receptor antibody measurement (29.2%), thyroid ultrasound (6.7%), and isotopic studies (5.9%) were used less frequently to confirm the etiology compared with those in North American and European surveys. Treatment with an antithyroid drug (ATD) was the preferred first choice of therapy (90.8%). Methimazole at 10–15 mg/day with a beta-blocker was the initial treatment of choice. The preferred ATD in pregnancy was propylthiouracil in the first trimester and methimazole in the second and third trimesters, which was similar to the North American and European surveys. Conclusion: Ultrasound and isotopic studies will be requested only by a small proportion of Thai endocrinologists. Higher physician preference for ATD is similar to Europe, Latin America, and other Asian countries. Geographical differences in the use of ATD, radioactive iodine, and thyroidectomy exist.

Background

Graves' disease (GD) is the most common cause of hyperthyroidism in iodine-replete areas [1]. The development of GD is thought to be due to complex interactions between genetic and environmental factors. Its autoimmune origin is well known, and the stimulation of autoantibodies to the TSH receptor (TRAb) on thyroid follicular cells is responsible for hyperthyroidism and development of a goiter. The clinical features of GD are shared by other etiologies of thyrotoxicosis. However, GD is associated with distinct extrathyroidal manifestations, including Graves' orbitopathy (GO), thyroid dermopathy, and acropachy. The diagnosis of GD can often be established on the basis of the clinical presentation, raised levels of thyroxine (T4), and suppressed levels of TSH. If the diagnosis is not straightforward, supplementary testing may include TRAb measurement, a radioactive iodine (RAI) uptake test, or color-flow Doppler ultrasonography of the thyroid gland [2,3]. The three therapeutic approaches for treating patients with GD are antithyroid drugs (ATDs), RAI therapy, and surgical thyroidectomy. All three treatment options are effective, but each treatment approach has advantages and drawbacks. Patient-centered communication and shared decision making are becoming increasingly important in determining the most suitable treatment option. The treating physician and patients should discuss the logistics, cost of care, expected recovery time, benefits, disadvantages, and possible side effects for each of the treatment options. The decision may also be influenced by the severity of thyrotoxicosis. Persistent marked variations in the diagnosis and management of GD exist throughout the world [4].
Burch and colleagues conducted a 2011 questionnaire-based survey of actual clinical practice in the management of GD among international members of the Endocrine Society, the American Association of Clinical Endocrinologists, and the American Thyroid Association (ATA) [5]. In addition, a similar survey was performed in 2013 among members of the European Thyroid Association (ETA) [6]. In Asia, the results of surveys on clinical practice patterns in the management of GD are available only from Japan, Korea, China, and India [7,8]. To this purpose, we used the same questionnaire developed by Burch et al. [5] and distributed it among members of the Endocrine Society of Thailand (EST) to investigate the clinical practice patterns in the management of GD in Thailand.

Survey. A survey administration application (Google Forms, Mountain View, CA, USA) was used to administer the survey. The survey included the index case (a 42-year-old woman with uncomplicated GD) with two variants, including a patient with GO and a patient anticipating pregnancy in the next 6-12 months, and the same questions as in the earlier surveys [5]. Description of the index case was "a 42-year-old woman presents with moderate hyperthyroid symptoms of 2 months duration. She is otherwise healthy, takes no medications, and does not smoke cigarettes. She has two children, the youngest of whom is 10 years old, and does not plan on being pregnant again. This is her first episode of hyperthyroidism. She has a diffuse goiter, approximately two to three times normal size, pulse rate of 105 beats per minute, and has a normal eye examination. Thyroid hormone levels are found to be twice the upper limit of normal (free T4 3.6 ng/dL, normal range 1.01-1.79 ng/dL), with an undetectable thyrotropin level (TSH <0.01 mIU/L)." Most questions required a single best response to be selected from multiple choices. Diagnostic preference questions allowed multiple items to be simultaneously selected. To limit bias, questions were carefully constructed to exclude phrasing that could influence the respondents' answers. The study was approved by the Committee on Human Rights Related to Research Involving Human Subjects of the Faculty of Medicine Ramathibodi Hospital, Mahidol University. The study was conducted in accordance with the principles of the Declaration of Helsinki and Good Clinical Practice guidelines. Members of the EST who are board certified in endocrinology (N = 392) received e-mails from the EST administrators that included an electronic link to the questionnaire. The authors did not contact potential respondents directly. Survey responses were anonymously collected and stored electronically by the survey application service, accessible in a password-protected manner. Repeat submissions from the same Internet protocol address were automatically blocked by the survey service. Responses were then compared with those of 457 North American specialists extracted from the 2011 American survey [5] and 147 ETA members extracted from the 2013 survey [6]. The responses from respondents were collected from 26 June 2019 to 17 August 2019.

Statistical Analysis. Summary statistics were prepared for responses to each question. Data were analysed using STATA software version 14 (StataCorp LP, TX, USA).

Response Rate and Respondent Demographics. One hundred and twenty respondents (about 30.6% of the members of the EST who were board certified in endocrinology) participated in the survey, and 100% completed all sections.
In the absence of local guidelines, respondents mostly followed the ATA guidelines. Most respondents graduated from medical school in the 2000s (57%), with 22% graduating in the 2010s and 15% in or before the 1990s. Forty-seven percent were currently working in a medical school, 28% were working in a private hospital, and 25% were working in a secondary or tertiary healthcare center. Eighty-eight percent of the respondents reported treatment of >10 new cases of GD yearly.

Figure 1(a) shows the proportion of respondents requesting the listed laboratory investigations for the index case. Serum TSH and free T4 assays were the most frequently ordered measurements (95% and 81.7%, respectively), whereas serum free triiodothyronine (T3) or total T3 were less frequently requested (73.3% and 20.8%, respectively). In the initial evaluation of GD, serum TRAb measurements were requested by a minority of respondents (29.2%), whereas thyroperoxidase antibody (TPO Ab) and thyroglobulin antibody (Tg Ab) tests were ordered less frequently (10.8% and 9.2%, respectively). Figure 1(b) shows the proportion of respondents who ordered the listed anatomical or functional investigations for the index case. Thyroid ultrasound and RAI uptake were requested by 6.7% and 5.9%, respectively. Baseline assessments of the complete blood count (CBC) and liver function tests were acquired by 41.7% and 36.7% of the respondents, respectively.

Preferred First-Line Treatment in the Index Case. A beta-blocker would initially be used definitely or possibly by the vast majority of respondents (90.8% and 7.5%, respectively). Propranolol was the preferred drug for 65% of the respondents, followed by atenolol for 32.5%. The target heart rate was 90-100 beats per minute for 40% of the respondents, 80-90 beats per minute for 34.2%, and 70-80 beats per minute for 23.3% of the respondents. ATD therapy was the preferred first-line approach (90.8%), RAI treatment was selected as the initial treatment by only 9.2%, and thyroidectomy was not selected by any respondent (Figure 2). There was no difference in the preferred therapy according to practice setting or graduation year.

ATD Treatment. Methimazole (MMI) was the preferred ATD for 100% of the respondents. It should be noted that carbimazole is not available in Thailand. The preferred starting dose of MMI was 10-15 mg once daily for 89.2% of the respondents, followed by 20 mg once daily (6.7%) and 30 mg once daily (3.3%). The most frequent starting doses of propylthiouracil (PTU) were 50 mg three times daily (30% of the respondents), 100 mg three times daily (27.5%), and 150 mg three times daily (19.2%). The titration regimen was selected by 80.8% of the respondents, whereas the block-and-replace regimen was always used by 0.8% of respondents and used in selected cases by 18.4%.

Figure 1: Percentage of participants who would obtain the listed laboratory test (a) or functional and anatomic study (b) in a patient with uncomplicated Graves' disease. International differences in the selection of laboratory tests or imaging studies are also shown. USA and EU data are from references [5] and [6], respectively. CBC, complete blood count; EU, Europe; LFT, liver function test; T3, triiodothyronine; T4, thyroxine; Tg Ab, thyroglobulin antibody; TPO Ab, thyroperoxidase antibody; TRAb, TSH receptor antibody; USA, United States of America.

Figure 2: USA and EU data are from references [5] and [6], respectively. ATD, antithyroid drug; EU, Europe; RAI, radioactive iodine; USA, United States of America.
After initiating ATD therapy, the next measurement of serum thyroid hormone levels was performed after 4 weeks by 50.8% of respondents and after 6 weeks by 19.2%; after attaining euthyroidism, thyroid function tests would most frequently be performed every 2 (38.3%) or 3 (53.5%) months. Routine monitoring of CBCs and liver function tests during ATD treatment was performed by 19.1% and 5.8% of the respondents, respectively, whereas 80.9% of the respondents did not perform routine monitoring of either of these laboratory parameters. In the case of a pruritic macular rash not responding to antihistamine therapy, 77.5% of the respondents would switch to an alternate ATD, 12.5% would continue with the same ATD with additional antihistamine therapy, and 10% would select an alternative treatment option for GD, including RAI or thyroid surgery. ATD therapy was continued for 18 months by the largest proportion of respondents (45%), while 27.5% continued ATD therapy for 24 months and 12.5% for 12 months.

Adjunctive ATD Treatment in Patients Receiving RAI. In patients receiving RAI therapy, premedication with ATDs was used routinely by 66.7% of the respondents, used selectively (commonly in patients >65 years old, with underlying heart disease, or with multiple comorbidities) by 30.8%, and not used by 2.5%. When using premedication with ATDs before RAI, 64.2% withdrew ATDs 7 days before RAI treatment, and 30.8% withdrew ATDs 3-5 days before RAI treatment. In the early posttreatment phase, ATDs were routinely used by 72.5% of the respondents, used only selectively by 26.7%, and never used by 0.8%.

Perioperative Management of Patients Undergoing Thyroidectomy. When thyroid surgery was selected, 95% of the respondents would render patients biochemically euthyroid with ATDs prior to surgery, whereas 5% would not. Preoperative iodine drops, either Lugol's solution or saturated solution of potassium iodide (SSKI), were used by 40% of the respondents. After surgery, prophylactic doses of calcium and/or vitamin D therapy at the time of discharge were not used by 69.2% if the postoperative calcium level was normal.

Variant 1: Hyperthyroidism with Concurrent GO or Risk Factors for GO. The index case was revised to include current cigarette smoking and the presence of moderately severe and active GO (Clinical Activity Score: 3 of 7 points; pain with eye movement, eyelid swelling, moderate conjunctival injection, and proptosis of 23 mm bilaterally). In this case, the majority of respondents (97.5%) would request an ophthalmological consultation, and imaging evaluation of the orbit was requested by about 30% of the respondents (noncontrast computed tomography, 17.5%; magnetic resonance imaging, 12.5%; and ultrasound, 0.8%). The preferred primary treatment method for hyperthyroidism in the presence of moderately severe and active GO was ATD treatment (62.5%). Thyroidectomy (after attaining euthyroidism with ATDs) was selected by 14.2% of the respondents. RAI treatment without steroids was not used by any respondent, whereas 10.2% selected RAI plus low-dose glucocorticoids and 12.5% used RAI with high-dose glucocorticoids (Figure 3 and Table 1). In the presence of mild and active GO, ATDs were selected by 76.7% of the respondents, RAI alone by 5% of the respondents, RAI with low-dose glucocorticoids by 15%, and RAI with high-dose glucocorticoids by 2.5% of the respondents.
If the patient had no signs of GO but risk factors for the development of GO (smoking, high TRAb titers, and high serum T3 levels), responses did not change dramatically, except that no respondent would administer high-dose glucocorticoids if RAI treatment was the selected modality of treatment for hyperthyroidism (Table 1). Interestingly, in patients with sight-threatening GO, a plurality of respondents (43.3%) recommended thyroid surgery after attaining euthyroidism with ATDs. In the great majority of cases (70.8%), high-dose glucocorticoid treatment for active GO was administered by an ophthalmologist; in 26.7%, it was administered by an endocrinologist.

Variant 2: Hyperthyroidism Management in a Patient Planning a Pregnancy. The index case was then changed to a young woman planning a pregnancy over the next 6-12 months. ATDs were the preferred treatment option for 53% of the respondents, followed by RAI (30%) and thyroidectomy (17%) (Figure 3). In this situation, PTU was preferred by 63% of the respondents, and the remaining 37% preferred MMI. In addition, if the patient had a positive pregnancy test while on MMI treatment, the vast majority of the respondents (97.5%) would shift to PTU, but during the second and third trimesters, 67.5% of respondents would switch back to MMI.

Discussion

The current study represents the perspectives of Thai endocrinologists in the management of GD. To the best of our knowledge, this is the first such survey conducted in Southeast Asia. Previous data on Asia were mostly obtained from Japan [9]. However, the nations of the Asian continent are highly heterogeneous in geography, ethnicity, and economic profile. This highlights the importance of country-specific information. Measurement of TRAbs is a reliable and cost-effective laboratory investigation in the diagnosis of GD hyperthyroidism. Thyroid RAI uptake still offers definitive diagnostic imaging for determining the underlying cause of thyrotoxicosis. If a thyroid nodule is present, a thyroid scan should be added to determine the functional status of the nodule. Compared with North America and Europe, diagnostic tests for GD such as TRAb measurement and isotopic studies were ordered less frequently in Thailand. TRAb measurement was used as a diagnostic tool by 94.5% of the Korean respondents, 93.9% of the Italian respondents, 85.6% of the European respondents, 54.3% of the North American respondents, and only 29.2% of the Thai respondents [5,6,10,11]. Moreover, thyroid ultrasound and isotopic studies were requested by only a small proportion of respondents in Thailand. Practicing medicine in resource-limited settings, such as Thailand, is challenging. Where laboratory access is limited and there are cost constraints in healthcare systems, most physicians use the clinical recognition of findings to direct decision making. Universal healthcare coverage has improved access to care, but inequality exists between different health plans [12]. Treatment selection for hyperthyroidism should take into account the balance of risk of harm and potential benefits for each available treatment option, in addition to patient preferences, health status, and access to treatment options. In our study, ATD therapy was the preferred treatment option (90.8% of respondents) for a first episode of hyperthyroidism.
Similarly, ATD therapy was the preferred treatment option for respondents from Korea (97%), Japan (88%), Europe (77%), Australia (81%), the United Kingdom (60%), New Zealand (59%), and the Middle East and North Africa (53%), although the proportions varied [5,6,11,13-16]. RAI has traditionally been the preferred first-line treatment of North American clinicians; however, in recent decades, preference for RAI treatment has declined in favor of ATDs [5]. Fear of radiation is a main reason for the low preference for RAI treatment in Asia [7]. Additional concerns are the risk of GO development or deterioration after RAI therapy and the possibility of radiation-induced cancer. The risk of RAI-induced GO can be prevented by administration of oral or intravenous glucocorticoids [3,17]. Recent data from a large, longitudinal cohort study showed that RAI for hyperthyroidism could be associated, in the long term, with an I-131 dose-dependent increase in mortality from solid cancers [18]. However, there was widespread criticism of that study because of the lack of appropriate controls and the use of a novel, nonvalidated analysis [19-22], and several studies have reported no correlation between the development of cancer and RAI [23-26]. Based on the current evidence, RAI treatment for GD is considered a safe procedure, as recommended by the ATA and ETA guidelines [2,3]. Thyroidectomy was never selected in Thailand for the initial treatment of uncomplicated GD, and preference for initial thyroid surgery has remained low in many regions. Surgery might be selected because the inevitable postoperative hypothyroidism requires less monitoring, regarding both follow-up visits and laboratory tests, than ATD therapy [27-29]. Moreover, thyroidectomy might be chosen because of the insufficiency of endocrinology and nuclear medicine centers in remote areas. As most Thai endocrinologists follow the ATA guidelines, MMI was the only ATD recommended by the respondents. After the 2011 guidelines were issued, MMI should be used in virtually every patient, except during the treatment of thyroid storm, in the first trimester of pregnancy, and in patients with minor allergic reactions to MMI [2]. This change in clinical practice results from the fact that PTU can induce fulminant hepatic necrosis, which may be lethal or require hepatic transplantation [30]. The results of this study were similar to those of other surveys [5,6,10,11]. The preferred initial daily dose of MMI (15 mg/day) was lower than that reported in Caucasians [5,6,10]. A 15 mg dose of MMI not only produced an inhibitory effect on thyroid function comparable to that of a high dose (30 mg) of MMI in patients with GD but also caused fewer adverse effects [31]. However, the dose of MMI should be adjusted to disease severity, because a dose that is too small is insufficient to restore euthyroidism in patients with severe hyperthyroidism [32]. Most respondents did not order CBC tests during ATD therapy, in keeping with the ATA and ETA recommendations [2,3]. In Japan, routine monitoring of CBCs is recommended during the first 2 months of ATD therapy [32]. According to the ATA and ETA guidelines, preoperative Lugol's solution or SSKI should be given prior to thyroidectomy in most patients with GD. This treatment is useful because it reduces thyroid vascularity and intraoperative bleeding during thyroid surgery [33]. However, this protocol is used by only approximately one-third of endocrinologists [5,6,10].
Approximately 30% of the respondents considered prophylactic treatment with oral calcium with or without oral calcitriol. As mentioned in the ATA statement, this approach is cost-effective and can hasten hospital discharge [34,35]. GO is the main extrathyroidal manifestation of GD, although fortunately severe forms are rare. When GD is complicated by moderately severe and active orbitopathy, the majority of Thai endocrinologists first consult with an ophthalmologist. This is similar to colleagues in other countries [5,6,10]. However, high-dose steroids were administered by Thai endocrinologists in only 26.7% of cases. This study revealed that the majority of respondents would treat patients who have associated GO with ATDs. There was a more than 10-fold increase in the use of thyroidectomy when the index case was modified to a patient with moderate GO. Patients with moderate-to-severe active GO should receive prompt treatment with high-dose systemic glucocorticoids [36,37]. Almost one-third of the respondents proceeded to an ablative approach, by either RAI or thyroidectomy. In a patient with mild active GO, most respondents recognized the opportunity for concurrent steroid prophylaxis with low-dose oral prednisone when RAI treatment was selected, as recommended by the European Group on GO [37]. If a woman with GD under ATD treatment wished to become pregnant in the next 6-12 months, most respondents treated with an ATD, with a preference for PTU over MMI. This approach may minimize prenatal MMI exposure during the sensitive period of organogenesis. Conversely, definitive treatment by surgery was the treatment of choice for a woman planning pregnancy for half of the Italian respondents [10]. The advantage of thyroidectomy is the gradual remission of circulating TRAbs occurring after surgery [38]. Despite the fact that RAI transiently raises TRAb titers for months to years, which may contribute to worsening GO or fetal risk [39,40], RAI was the second choice of treatment in the present study and in the North American survey [5]. There was a similar pattern across regions in the preference for PTU during the first trimester of pregnancy, as well as in the decision to switch the treatment to MMI in the second and third trimesters. The majority of our respondents followed this approach, which is recommended by the ATA guidelines [38].

In conclusion, geographical differences exist in the diagnosis and management of GD. These differences in treatment options may be caused by the availability of nuclear medicine facilities and experienced thyroid surgeons. Given the substantial practice variations in the diagnosis and management of GD in Thailand compared with other countries, additional detailed studies investigating the cost- and risk-effective management of GD are needed.

Data Availability. The datasets generated and/or analysed during the current study are available from the corresponding author upon reasonable request.

Disclosure. The opinions expressed in this study are solely those of the authors and do not express the opinions of the EST.

Conflicts of Interest. The authors declare that they have no conflicts of interest.
2020-05-21T00:12:14.289Z
2020-05-11T00:00:00.000
{ "year": 2020, "sha1": "87577f98802f0e625e5bfaf4d67770c14032103b", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/jtr/2020/8175712.pdf", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "5353667efa4516c1d7097c26c14554ca47d5bb15", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
152282353
pes2o/s2orc
v3-fos-license
Quasibound states of the Dirac field in Schwarzschild and Reissner-Nordström black hole backgrounds

Ciprian A. Sporea ∗
West University of Timişoara, V. Pârvan Ave. 4, RO-300223 Timişoara, Romania

Abstract. In this paper we study the existence of (quasi)bound states in two spacetime geometries describing Schwarzschild and Reissner-Nordström black holes. For obtaining this type of states we search for discrete quantum modes of the massive Dirac equation in the two geometries. After imposing the quantization condition, an analytical expression for the energy of the ground states is derived. The energy of higher states is then obtained numerically. For very small values of the black hole mass M we compare the energy of the Reissner-Nordström black hole quasibound state with the Dirac-Coulomb energy, and we have found the two to be in good agreement.

In this paper we will study the quasibound states of the massive Dirac field in Schwarzschild and Reissner-Nordström black holes, with a focus on the latter. Using the discrete spectrum of the Dirac equation in these black holes we derive analytically the energy of the quasibound states. The formula obtained for the energy is only an approximate one, so that we are restricted to a qualitative analysis of the quasibound state spectrum. The organization of the paper is as follows. In Sec. II we discuss the Dirac equation in a Reissner-Nordström background and the derivation of the discrete quantum modes, while in Sec. III an approximate formula for the energy of the quasibound states in this geometry is obtained by imposing a quantization condition. In Sec. IV we discuss the results obtained, and the final Sec. V is kept for conclusions.

II. SOLUTIONS OF THE DIRAC EQUATION: DISCRETE QUANTUM MODES

The first solution describing an electrically charged black hole was the Reissner-Nordström (RN) solution, whose line element is the standard static, spherically symmetric form with metric function h(r) = 1 − 2M/r + Q²/r² (in geometric units G = c = 1), where M stands for the mass of the black hole and Q for its total electric charge. By setting h(r) = 0 we find that the RN black hole has two horizons, an inner Cauchy horizon (r−) and an outer event horizon (r+), given by r± = M ± √(M² − Q²). Being electrically charged, the black hole produces in its exterior an electrostatic potential of Coulomb type, Q/r, and a fermion with elementary electric charge e = ±√α [82] has the corresponding potential energy eQ/r. Using the Cartesian gauge [76,78] it was shown that in the curved spacetime of a spherically symmetric black hole [20,21,76] the Dirac equation iγ^a D_a ψ − mψ = 0 can be separated into an angular and a radial part. The angular part turns out to be the same as in relativistic flat-spacetime theory, so that the fundamental solutions for the particle-like energy eigenspinors of energy E are built from the unknown radial wave functions f^±_{E,κ}(r) and the usual 4-component angular spinors Φ^±_{m,κ}(θ, φ) [79,81]; for κ we use the convention κ = ±(j + 1/2). The remaining radial part of the Dirac equation can be brought to a Hamiltonian form. This exact radial problem has no known analytical solutions, so in order to obtain analytical results we are forced to resort to a method of approximation. We focus here on finding the discrete quantum modes that allow for the existence of quasibound states in a region far away from the Reissner-Nordström black hole. The continuous modes and the scattering of fermions by Reissner-Nordström black holes were discussed in our previous paper [21].
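As a small numerical illustration of this geometry (added here as a sketch, not part of the paper), the snippet below evaluates the metric function h(r) = 1 − 2M/r + Q²/r² and the two horizon radii in geometric units G = c = 1; the function names are arbitrary.

```python
import numpy as np

def rn_metric_function(r, M, Q):
    """Metric function h(r) = 1 - 2M/r + Q^2/r^2 of the Reissner-Nordstrom geometry
    (geometric units G = c = 1)."""
    return 1.0 - 2.0 * M / r + Q**2 / r**2

def rn_horizons(M, Q):
    """Inner (Cauchy) and outer (event) horizon radii r_- and r_+,
    obtained from h(r) = 0; they are real only for M >= |Q|."""
    disc = M**2 - Q**2
    if disc < 0:
        raise ValueError("naked singularity: |Q| > M, no horizons")
    return M - np.sqrt(disc), M + np.sqrt(disc)

# Example: a black hole with Q = 0.6 M
M, Q = 1.0, 0.6
r_minus, r_plus = rn_horizons(M, Q)
print(r_minus, r_plus)                    # approximately 0.2 and 1.8
print(rn_metric_function(r_plus, M, Q))   # ~0 by construction, since h(r_+) = 0
```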
Returning to the radial problem: after introducing the Novikov variable, one multiplies the resulting equation by x⁻¹(1 + x²) and then, for large values of x, takes a Taylor expansion of the resulting equation with respect to 1/x. Keeping only the dominant terms and neglecting terms of order O(1/x²) and higher, the radial Dirac equation reduces to an approximate form, in which we introduce the notations ε and µ for the suitably rescaled energy and mass parameters. By introducing transformed radial wave functions, defined as in [21,76], one can solve the resulting equations analytically. A bound/quasibound state is a normalisable (or at least square-integrable) solution of the massive Dirac equation that behaves as an ingoing wave at the horizon, while in the far-field region it falls off exponentially. The discrete quantum modes that give rise to this kind of states with discrete energy levels are obtained by solving the above radial Dirac equation in the case µ > ε. As in Ref. [76], where the Schwarzschild discrete modes were found, by applying the same method and after some computations one finds particular solutions (11) of the Dirac equation in a Reissner-Nordström background [77]; these depend on the parameters defined in (12), while the two normalisation constants must satisfy a compatibility relation. In the next section we will use these solutions to find bound/quasibound states in the geometry of a Reissner-Nordström black hole.

III. ENERGY OF THE QUASIBOUND STATES

The solutions (11) obtained in the previous section are similar to the discrete wave functions of the hydrogen atom [79]. Using this observation, the bound state energies of fermions in the geometry of a Reissner-Nordström black hole can be found by imposing the standard quantization condition (14) that the first argument of the hypergeometric function be a negative integer, with n_r = 0, 1, 2, 3, ... the radial quantum number, related to the principal quantum number by n_r = n − |κ| = n − j − 1/2. Substituting the parameters defined in eq. (12), the quantization condition (14) becomes eq. (15). This is a messy equation for the energy E (contained in ε) of the bound states. The existence of a nonvanishing inward current at the black hole horizon [2] implies that the bound state must decay, so it is more appropriate to call these states quasibound states. For that reason the energy of a quasibound state has a real and an imaginary part [2,80]. However, eq. (15) can also have purely real solutions for particular sets of parameters, keeping in mind that eq. (15) is only approximate. In Section IV we focus on finding and discussing only the real solutions of eq. (15). Let us now briefly discuss analytically the limiting cases of eq. (15). After some manipulations, relation (15) can be brought into the form (16) (see also [77]). Furthermore, if we assume that the energy of the quasibound state is close to the rest energy of the fermion, mc², then by taking the limit ε → µ on the right-hand side of eq. (16), an approximate expression (17) for the energy of the quasibound state of a fermion in Reissner-Nordström geometry is obtained. The Schwarzschild result for the energy of a quasibound state, first derived in Ref. [76], is recovered by canceling the black hole's electric charge, i.e., setting Q = 0 in eq. (17). Eq. (17) reduces in the limit M → 0 to the formula for the discrete energy levels of the relativistic Dirac-Coulomb problem (see Ref. [81]). Moreover, if M = 0 the approximate solutions (11) also reduce to the exact solutions of the Dirac-Coulomb problem [81].

IV. DISCUSSION OF THE RESULTS
In Figs. 1-5 we present the real solutions for the energy E/mc² as a function of the gravitational coupling α_g (labeled on the plots as mM) for several quasibound states. The plots are obtained by solving eq. (15) analytically (for n_r = 0) or numerically (for n_r ≠ 0), and the states are labeled with the standard spectroscopic notation nL_j. Let us start by analysing what happens with the energy of the ground state, which has n_r = 0. In this case eq. (15) has two solutions, given by the expression (20). By imposing the condition −√(κ² + (mM)²) < eQ < √(κ² + (mM)²) (21), the ground state always has a real energy; otherwise the energy of the state becomes complex. For certain values of the parameters in eq. (20), and after imposing the condition 0 < E₀/mc² < 1, only one of the two solutions is physical. In Fig. 1 we compare the energy of the Reissner-Nordström ground state with the relativistic Dirac-Coulomb energy for the 1S_{1/2} and 2P_{3/2} states. We observe that the two energies agree quite well for small values of the gravitational coupling mM. However, as the mass of the black hole is increased, the ground state energies start to depart from each other, and a maximum appears, whose width increases for higher states. The maximum is present only for the case eQ > 0 (see also Fig. 5). For states with n_r ≠ 0, equation (15) cannot be solved analytically. Searching for numerical solutions using Maple, we have found that for values of mM ≤ α₀, eq. (15) has only one positive real solution; for values outside this interval, we were unable to find numerical solutions with Maple. We found that for the states with n > |κ| the value of α₀ < n. In Fig. 4 we compare the energies of Schwarzschild and Reissner-Nordström quasibound states for the 1S_{1/2}, 2P_{3/2} and 3D_{5/2} states. We observe that, for a fixed value of the gravitational coupling (i.e., of the parameter mM), the effect of adding only a few negative charges to the black hole (eQ > 0), assuming a negatively charged fermion, is a Reissner-Nordström quasibound state with a higher energy than that of the same Schwarzschild quasibound state. Furthermore, adding positive charges (eQ < 0) to the black hole lowers the energy of the Reissner-Nordström quasibound state. If more charge is added to the black hole, the difference in energy increases further, as can be observed in the right panel of Fig. 4. Figure 5 shows the energies for two different values of the gravitational coupling α_g. This effect can be observed only for black holes that have the same type of charge as the fermion (i.e., eQ > 0). This is the case corresponding to electromagnetic repulsion, when the fermion scattering intensity takes higher values [21]. Furthermore, as one varies the parameter mM, the energy of an RN quasibound state increases for small values of mM up to a maximum and then starts to decrease again. In the case of eQ < 0 these effects are not observed, as can be seen from the figure in the right panel.

V. CONCLUSIONS

In summary, we have studied the (quasi)bound states of the Dirac field in Schwarzschild and Reissner-Nordström black hole geometries, with a focus on the latter. These states were obtained by applying a quantization condition to the discrete quantum modes of the Dirac equation in the two geometries. For the ground state (having the radial quantum number n_r = 0) we were able to find an analytical expression for the energy of the state. For the states with n_r ≠ 0 we used Maple to compute the energy of the states numerically.
We have found that the Reissner-Nordström quasibound states have higher energies than the Schwarzschild quasibound states if the black hole and the fermion have charges with the same sign; otherwise the energy of the state is lower.
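For reference, the Dirac-Coulomb comparison invoked above (the M → 0 limit) can be reproduced from the standard Sommerfeld formula E_{n_r,κ}/mc² = [1 + (Zα/(n_r + √(κ² − (Zα)²)))²]^{−1/2}, with the product eQ playing the role of the Coulomb coupling Zα. The sketch below is an illustration added here, not part of the paper, and the function name is arbitrary.

```python
import numpy as np

def dirac_coulomb_energy(n_r, kappa, Zalpha):
    """Relativistic Dirac-Coulomb (Sommerfeld) energy in units of the rest energy mc^2:
    E/mc^2 = [1 + (Zalpha / (n_r + sqrt(kappa^2 - Zalpha^2)))^2]^(-1/2)."""
    gamma = np.sqrt(kappa**2 - Zalpha**2)   # requires |Zalpha| < |kappa|
    return 1.0 / np.sqrt(1.0 + (Zalpha / (n_r + gamma))**2)

# Ground states 1S_1/2 (n_r = 0, kappa = -1) and 2P_3/2 (n_r = 0, kappa = -2)
# for an illustrative Coulomb coupling of 0.3
for kappa in (-1, -2):
    print(kappa, dirac_coulomb_energy(0, kappa, Zalpha=0.3))
```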
2019-05-13T15:27:59.000Z
2019-05-13T00:00:00.000
{ "year": 2019, "sha1": "d955f071c1b629c9bc9790a83dce560ea979b8f4", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1905.05086", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "d955f071c1b629c9bc9790a83dce560ea979b8f4", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
239016625
pes2o/s2orc
v3-fos-license
Renormalization Scheme Dependence of $\beta$-Functions In Lorentz-Violating Quantum Field Theory Effective quantum field theories that allow for the possibility of Lorentz symmetry violation can sometimes also include redundancies of description in their Lagrangians. Explicit calculations in a Lorentz-violating generalization of Yukawa theory show that when this kind of redundancy exists, different renormalization schemes may lead to different expressions for the renormalization group $\beta$-functions, even at only one-loop order. However, the renormalization group scaling of physically observable quantities appears not to share this kind of scheme dependence. Introduction The special theory of relativity is one of the cornerstones of modern physics, but ever since Einstein introduced it in 1905, there has always been interest in asking whether the Lorentz symmetry underlying special relativity is truly exact, or whether it is just a highly useful approximation. Apparent symmetries that eventually turn out to be merely approximate have played extremely important roles in the development of the standard model of particle physics, and it is natural to wonder whether Lorentz symmetry may represent a similar case. Over the last quarter century, owing to developments in effective field theory (EFT) techniques, it has become possible to describe possible violations of rotation invariance and Lorentz boost invariance in a systematic way. Along with this theoretical development, there came a burst of experimental interest in Lorentz symmetry tests-because it became apparent that there were a great many possible forms of Lorentz violation that had hardly been previously constrained at all. The renewed experimental searches have, thus far, not identified any particularly compelling evidence in favor of Lorentz violation. However, it remains a significant area of experimental research, because we know that if Lorentz violation is ever really demonstrated to exist, that will be a colossally important discovery, opening up whole new avenues for the study of physics at the most fundamental levels. The most general local effective field theory for describing Lorentz-violating modifications to the physics of the well-known fields that are part of the standard model of particle physics has already been laid out in detail. This theory, which is called the standard model extension (SME), is also capable of describing stable, unitary, local forms of CPT violation involving standard model fields, since there can be connections between CPT violation and Lorentz violation [1]-although the two occur separately in certain theories [2,3,4]. The SME is a quantum field theory (QFT), whose action contains operators build out of the fermion and boson fields of the standard model [5,6]. The minimal SME is the sector of the SME that is expected to be renormalizable; it contains only the finite number of local, Hermitian, gauge-invariant operators that are of mass dimension four or less. The terms in the minimal SME Lagrangian are actually similar in structure to those found in the usual standard model-the key difference being that the SME operators may have uncontracted Lorentz indices. The minimal SME is very often the most useful test theory framework for evaluating the results of experimental Lorentz tests. Really understanding a QFT entails understanding the role that quantum corrections play, and this, in turn, means understanding renormalization. 
Quite a bit of progress has been made toward describing the renormalization of the minimal SME, particularly at one loop order [7,8,9,10,11,12,13,14,15]. And yet, some questions related to the renormalization of the minimal SME are still outstanding, and this is particularly true in relation to some of the less commonly discussed sectors of the theory. For example, the explicit calculations needed for the one-loop renormalization of a Lorentz-violating gauge theory coupled to charged scalar fields have not been carried through, and so the renormalization group (RG) scalings of several Lorentz-violating couplings in the SU(2) L weak gauge sector remain undetermined. There are also questions related to higher-loop calculations-such as whether perturbative renormalizability may be proven to all orders using the same techniques as in conventional relativistic QFT. We do not expect that attempts to resolve these kinds of outstanding issues will reveal any fundamental problems with the minimal SME. For example, power counting arguments that would be very hard to evade suggest that all sectors of the minimal SME are fully perturbatively renormalizable. However, these unsolved problems nonetheless present interesting avenues for ongoing and future research, and the detailed solutions to the problems may reveal new insights into the general structure of QFT. For instance, symmetries (internal and external) occupy central roles in our understanding of the renormalization of the usual Lorentz-invariant, CPT-invariant standard model. However, these roles will necessarily be subject to some changes in the context of the SME, since the SME obviously does not have the same symmetry structure as the unmodified standard model. Studies of the interplay between renormalization and symmetry in the SME have already yielded fundamental insights into the nature of finite radiative corrections in QFT [16]. Among the terms in the quantum electrodynamics (QED) sector of the minimal SME Lagrange density, the Lorentz-violating Chern-Simons term is extremely peculiar; the term depends directly on the vector potential A (not just on the electromagnetic field strength F ), so that it is not gauge invariant as a density-although the integrated action is completely gauge invariant. The peculiar structure of the Chern-Simons term means that many of the usual symmetry arguments that may normally be applied to the evaluation of radiative corrections are inoperative in the Lorentz-violating setting. This provoked quite a bit of controversy over whether there could ever be a purely radiatively generated Chern-Simons term in the theory and, if so, what its size should be. Without Lorentz and gauge symmetries to constrain the structure of loop corrections, it was discovered that with different high-momentum regulators applied to superficially divergent loop integrals, the theory could produce different finite radiative corrections [17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32]. It was subsequently thought that a nonperturbative approach might be capable of resolving this confusing situation; however, any nonperturbative regulation procedure that could lead to a nonvanishing radiative correction term at leading order had to produce an unphysical Lorentz-violating mass-like term in the photon sector at higher orders. In this paper, we shall be concerned with another, entirely separate puzzle that appears in the course of calculating the radiative corrections to the minimal SME. 
Different arguments, which appear to be equally valid-a direct one based on the evaluation of specific Feynman diagrams, and an indirect argument based on known facts about transformation properties of the SME-seem to give different results for the RG β-functions for certain Lorentz-violating operators in the action. Our treatment is organized as follows. In section 2, we introduce the relevant portion of the SME and lay out an indirect argument for why the RG β-functions for certain couplings should be related. In section 3, we demonstrate, however, that direct calculations of β-functions do not seem to bear out the expected relationship. We extend the Feynman diagram calculations to higher orders in the small SME parameters in section 4, and in section 5, we demonstrate how to resolve the conflict. The key result is that the βfunctions may depend on the renormalization scheme; this kind of dependence is common in higher-loop quantum corrections, but here it exists already at one-loop order. Some of these results are extended to even higher order in the Lorentz violation in section 6, and finally, section 7 presents an outline of our conclusions. 2 The β-Function Puzzle for the SME f In this paper, we are trying to address a puzzling observation that has been made about the behavior of quantum corrections in the SME. The puzzle concerns radiative corrections that arise in the presence of two different kinds of terms in the SME fermion sector. The Lagrange density for a fermion species in the minimal SME is where Γ and M can include terms with all possible Dirac matrix structures. However, the puzzle we are interested in concerns only a few of the possible terms, and we may limit attention to theories with M = m and The quantities c and f represent fixed tensor and axial-vector backgrounds that distinguish physically between different spacetime directions. In most cases, whether a local Lorentz-violating operator (in the fermion sector or elsewhere) violates CPT symmetry is determined by whether it has an odd or even number of outstanding Lorentz indices. A CPT-odd term will ordinarily have an odd number of indices, and conversely. However, we shall see below that this correspondence does not hold in quite the way we might expect for the f term that lies at the center of our puzzle. The fermionic action must be supplemented with a bosonic propagation action and a fermion-boson vertex in order to have nontrivial interactions, including radiative corrections. The puzzle arises whether the quanta in the boson sector are vector or scalar. In the former case, the QED sector of the minimal SME has a Lagrange density The presence of the full Lorentz-violating Γ (as opposed to the usual γ) in the interaction term is required by gauge invariance. However, (3) is not otherwise the most general Lorentz-violating Lagrange density for the photon sector, even in just the minimal SME. The extremely interesting Chern-Simons term mentioned previously has been omitted, since its discrete symmetries preclude its being involved in the resolution of the puzzle with f and c. Moreover, only a k F taking the form would need to be considered, for similar reasons. Note that, not entirely coincidentally, the terms that are included in our L A are exactly those minimal SME photon terms that do not lead to any birefringence for electromagnetic waves propagating in vacuum. 
The QED sector of the SME has gotten more attention in the past; however, for the present work, it probably makes more sense to look at a Yukawa theory, with bosonic Lagrange density The renormalization of a fully general SME Yukawa theory is more complicated than the analogous renormalization problem for the SME QED sector. The reason is that the most general theory also includes additional Yukawa-like interactions, with nontrivial Dirac matrices sandwiched between the fermion operators in theψ · ψφ term. (Beyond one-loop order, the presence of the four-φ coupling λ also adds another complication.) However, once again, none of those nonstandard fermion-boson coupling terms can play a role in our puzzle, and they have been omitted from L φ . Then Feynman diagram calculations in the Yukawa theory-when we specifically limit consideration to diagrams with only the normal Yukawa coupling g-are actually simpler than the analogous calculations in the QED sector. The reason now is that the Lorentzviolating c and f in the Yukawa theory only appear as insertions into the fermion propagator in the scalar theory. In contrast, in the gauge theory c and f appear in both the propagator and the vertex, and including diagrams with Lorentz-violating vertices in matrix element calculations will significantly increase their complexity. For this reason-and because there does not appear to be any reason to expect any conceptual differences in how the β-function puzzle plays out in the two theories-we shall perform all our explicit one-loop diagram computations in the Yukawa theory. Among the results obtained in the course of studying the renormalization of the minimal SME were the RG β-functions for the Lorentz-violating operators in the two specific theories outlined above. In both of these theories, it was found that the β-function for the fermion f vanished at one loop and linear order in the Lorentz violation itself. At the same time, the β-functions for the c terms were generically nonzero. On one hand, there is nothing seemingly surprising about the β f result. It was observed fairly early on that there were very frequently no physical consequences to having a f term in the fermion sector (at least at first order in f ). The reason seemed straightforward; there were generally no operators representing physical observables in the theory that had the right structure to be sensitive to the f coefficient. Note that the operator corresponding to f 0 (that is, iψγ 5 ∂ 0 ψ) is not merely odd under C, P, and T separately, but it is actually odd under any reflection, regardless of orientation. This is quite different from the behavior of most P-odd operators-such asψγ j ψ, which is odd under a reflection R j of the x j -axis but even under reflections along the the two other perpendicular directions. The spacelike operators iψγ 5 ∂ j ψ likewise have discrete symmetries (under the full set of inversions C, T, R 1 , R 2 , and R 3 ) that do not match those of any other normally-available observable. In fact, the observation that if µψ γ 5 ∂ µ ψ has no observable effects at linear order is actually related to another remarkable property of the theory. With a redefinition of the fermion field, it is actually possible to rewrite the Lagrange density with the f term as one with a c term instead [33]. In other words, the f term is really a c term, combined with a change in the representation of the Dirac matrices! 
A transformation where G(ξ) = 1 √ ξ tan −1 √ ξ is an analytic function of ξ = −f 2 , (along with a corresponding transformation forψ ′ ) transforms a Lagrange density for the field ψ with a f term into one for ψ ′ with no f term but instead a c term taking the form The approximate form on the right-hand side of (7) is valid when all the components of f are small, but the full expression with the radical is an exact result, and the transformation is permitted for all f 2 < 1 (that is, all f that do not change the signature of the bilinear form that couples the two factors of the four-momentum in the fermion dispersion relation). Thus, there should be no phenomena that are specific to the presence of a f term. In fact, this could actually be inferred just from the energy-momentum relations for fermions in the presence of either a c term or a f term. The two dispersion relations are for c; and for f , From these dispersion relations, the equivalence for small f is once again clearly evident. Note another interesting feature of the f theory that is illuminated by this equivalence. While the (single-index) f term in the Lagrangian superficially appears to be odd under CPT, it does not actually give rise to any CPT-odd effects, because it is equivalent to a two-index, CPT-even c term at O(f 2 ). Note also that both dispersion relations, with c and f , are quadratic in the energymomentum p. Of course, the ordinary dispersion relation derived from the Lorentzinvariant Dirac equation, p µ p µ − m 2 = 0, is quadratic. However, in the presence of more a general Lorentz-violating kinetic operator iψΓ µ 1 ∂ µ ψ in the fermion action, the dispersion relation-since it is derived from the determinant of a 4 × 4 matrix-can be quartic. That the quartic dispersion relation can have four roots (two positive and two negative for small Γ 1 ) indicates that the energy can depend on both the fermion-antifermion identity and spin of an excitation. However, for (8) and (9) this is not the case; the energies are both independent of the spin orientation and the same for particles and antiparticles (thus C even). It is also evident from (8) that at O(c 1 ) the energy-momentum relation depends only on the symmetric part c (νµ) = c νµ + c µν of the Lorentz-violating background tensor c. In fact, it can be shown, just as with f , that there are no physical manifestations of c [νµ] = c νµ − c µν at first order. The reasons are actually quite similar. As noted, at linear order f is equivalent to a change in the representation of the Dirac matrices. Since {γ µ , γ 5 } = 0, the matrices Γ µ = γ µ + if µ γ 5 in the presence of just a f (no c) obey the same Clifford algebra relations as the γ µ , up to corrections that are O(f 2 ). In 3+1 dimensions, there are five mutually anticommuting Dirac matrices, and at first order, f µ is just an infinitesimal rotation of the effective Dirac matrix Γ µ away from the γ µ direction and toward the γ 5 direction. The field redefinition (6) just absorbs this rotation of the Dirac matrices back into the field ψ ′ . An infinitesimal rotation of Γ µ toward a different γ ν (with ν = µ) direction is similarly represented by the inclusion of c [νµ] . With only a c [νµ] present, so that Γ µ = γ µ + 1 2 c [ν µ] γ ν , we again have (11), up to corrections that are second order in the Lorentz violation coefficients. We shall thus henceforth explicitly assume that c νµ = c µν is symmetric in its Lorentz indices. 
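The quadratic character of the f-modified dispersion relation can be checked numerically from the operator Γ^µ p_µ − m with Γ^µ = γ^µ + if^µγ_5 quoted above: the mass-shell condition works out to p_µp^µ − (f_µp^µ)² = m². The following sketch is an illustration added here (it is not from the paper); it assumes the Dirac representation of the γ matrices and the (+,−,−,−) signature, and verifies that det(γ·p + i(f·p)γ_5 − m) = (p² − (f·p)² − m²)² for randomly chosen p and f.

```python
import numpy as np

# Dirac matrices in the Dirac representation, metric signature (+,-,-,-)
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def block(a, b, c, d):
    return np.block([[a, b], [c, d]])

g0 = block(I2, 0 * I2, 0 * I2, -I2)
g1 = block(0 * I2, sx, -sx, 0 * I2)
g2 = block(0 * I2, sy, -sy, 0 * I2)
g3 = block(0 * I2, sz, -sz, 0 * I2)
g5 = block(0 * I2, I2, I2, 0 * I2)      # gamma_5 anticommutes with every gamma^mu
gammas = [g0, g1, g2, g3]
eta = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric

rng = np.random.default_rng(0)
m = 1.0
p = rng.normal(size=4)        # contravariant momentum p^mu
f = 0.1 * rng.normal(size=4)  # small Lorentz-violating background f^mu

p_low = eta @ p                                        # covariant p_mu
slash_p = sum(g * pl for g, pl in zip(gammas, p_low))  # gamma^mu p_mu
f_dot_p = f @ eta @ p                                  # f^mu p_mu

# Momentum-space Dirac operator with the f term: gamma.p + i (f.p) gamma_5 - m
D = slash_p + 1j * f_dot_p * g5 - m * np.eye(4)

lhs = np.linalg.det(D)
rhs = (p @ eta @ p - f_dot_p**2 - m**2) ** 2
print(np.allclose(lhs, rhs))  # True: dispersion relation p^2 - (f.p)^2 = m^2 is quadratic
```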
A full understanding of quadratic Lorentz-violating dispersion relations, including the contributions in (8) that are second order in c, is further enabled by making a comparison with the dispersion relation for the free scalar sector of the L φ from (5), and this connection can be efficiently explored using supersymmetry [34]. If the c and f terms are actually physically equivalent, then it seems like this ought to show up in their RG scalings. If the RG β-function for a quantity x is expressed as β x = xΨ(x), then it appears that we should have, at leading order, Ψ(f µ ) = 1 2 Ψ(c νρ ) so that the RG evolution of c νµ and f µ f ν will be equivalent. However, this has not been borne out by the explicit calculations of the β-functions; in particular, β f has been found to be vanishing with either a QED or Yukawa coupling, while β c was not. Obviously, the problem has something to do with the fact that the calculation of β f only considered terms of first order in f , but it is extremely puzzling that there is this discrepancy between two different ways of finding the function β f . The resolution must lie with a fact about QFT that has long been known, but which has not previously been applied to a situation quite like this one. In actuality, the β-functions (and other RG functions) for a theory are not observable objects unto themselves. A β-function can depend on the renormalization conditions used to define the theory, and it may also potentially depend on the gauge in a gauge theory. Ordinarily, this issue does not crop up at one-loop order (and often not even at two-loop order) [35,36,37,38,39,40]. To get a one-loop β-function, it is typically sufficient just to take a linear combination of the coefficients of the logarithmically divergent diagrams; this results in an expression that is independent of quantities like the renormalization scale. While that same approach does not appear to be wrong here, neither is the β-function the approach produces uniquely correct and unambiguous. The exact equivalence between fermion theories with f or c terms marks an ambiguity in how the theory may be represented, and that ambiguity evidently carries over to the β-functions, even at just one-loop order. The fact that there is a degree of reparameterization invariance makes this situation qualitatively similar to that seen in certain generalized gauge theories. There is a family of physically equivalent of actions, the parameter in question being the relative sizes of the equivalent f and c terms. The parameter may be changed, without affecting the physical observables of the theory, by a rotation of the basis vectors in the Dirac Grassmann algebra. When a reparameterization symmetry of the action exists at the classical level, there are often interesting and nontrivial complications in the lifting of that symmetry to the classical level, and it becomes a question whether the relations implied by a classical symmetry are stable under radiative corrections. In the canonical formulation, these complications arise from operator ordering issues, while in the path integral formalism, the new terms can appear out of the transformed integral measure [41,42]. It could be interesting to study the consequences of the f -c duality in this context, but the details are beyond the scope of this paper. The main relevant question would be whether or not the relation (10) is radiatively stable, provided we begin with a theory which only had a f term. 
Approaching this through the path integral, it appears that there should be no additional terms appearing in the action when we make the transformation (6), and there appear to be no anomaly-like terms from the Jacobian. The full technical details of this analysis are potentially interesting, but they would take us too far afield from the main thrust of our analysis in this paper. Here we are focusing specifically on the renormalization group β-functions, because of the particularly prominent part that the β-functions play in the modern understanding of quantum corrections, and because our results are actually contrary to some general expectations about how β-functions ought to behave.

We now turn to the direct one-loop calculation in the Lorentz-violating Yukawa theory. The first ingredient is the conventional fermion self-energy Σ_0, with no Lorentz-violating insertions; once the propagator denominators are combined with a Feynman parameter and the loop momentum is shifted, the denominator takes the usual (l² − ∆)ⁿ form. [Conveniently, the insertion of additional perturbative Lorentz-violating vertices will not change the general (l² − ∆)ⁿ structure of the higher-order denominators.] The evaluation of the remaining expression is standard, yielding a logarithmically divergent result whose divergence is encapsulated within the quantity η. Along with the conventional self-energy diagram, it is also necessary to evaluate the amputated one-loop diagram with the same Lorentz-violating structure as f itself. This is a diagram with a single insertion of the CPT-odd f-dependent vertex −γ_5 f_µ p^µ. (Full Feynman rules for the Lorentz-violating Yukawa theory, including both f and c vertices, are given in [14].) The diagram, with the Lorentz-violating insertion represented by a dot, is shown in figure 2, and its value is given in (16). The integrand of (16) was simplified by using the propagator numerator factors (k̸ + p̸ + m) on either side of the vertex. Then, as already noted, the use of Feynman parameters to simplify the denominator is unchanged from Σ_0, since the fermion propagators situated on either side of the f vertex carry exactly the same momenta. This is responsible for the simplification of the magnitude of the infinite part of Σ_f to a form very similar to that previously seen in Σ_0. Extracting the divergence in the self-energy, we arrive at (18); the LV∼ notation in (18) indicates that the two expressions have the same Lorentz-violating divergent parts. To isolate the divergence we evaluate η at a renormalization scale ∆ = M², as usual. Then the fact that the infinite coefficients of iγ_5 f_µ p^µ and p̸ in the two expressions are the same is what produces the vanishing β-function in this renormalization scheme. Following the usual procedure for renormalization, there must be a fermion field strength renormalization counterterm δ_ψ = Z_ψ − 1, given by the infinite part of the p̸ coefficient in Σ_0, renormalized at M²; there is also a counterterm δ_f^µ = (Z_f − 1)f^µ. Following the usual procedure of just reading off the coefficients of Γ(ε/2)/(M²)^{ε/2} ≡ log(Λ²/M²) (reexpressing the dimensional regularization parameter ε in terms of an effective energy-momentum cutoff scale Λ), one finds that the f counterterm exactly tracks the field strength counterterm, so that β_f vanishes at this order. This seems naively like an unambiguous result, but the true situation is actually more subtle, and it is possible to set physically motivated renormalization conditions in a different way, so as to obtain an entirely different answer!

4 Self-Energy at O(f²) and O(c¹)

The renormalization conditions used in section 3 were so standard that we did not even spell them out explicitly. In essence, the renormalization condition that set δ_f was one that forced the tree-level plus one-loop contributions to the f term in the fermion propagator to take a certain value.
Normally, in the discussion of some other parameter (such as a coupling constant like g, or another SME parameter like c), we might say that those particular renormalization conditions forced the parameter in question to take its "physical" value. However, that is not possible with f , because there is no "physical" value of f at the order we have so far considered. Remember, there is no physical observable in our theory that differs from its value in the Lorentz-symmetric theory at O(f 1 ); nothing we can measure in the theory depends linearly on f . The next natural question is how the renormalization conditions may be set in order to guarantee that we instead have Ψ(f µ ) = 1 2 Ψ(c νρ ). In fact, there is a continuum of possible renormalization frameworks for the theory. The source of the ambiguity is precisely the fact that nothing physically observable depends on the value of δ f at O(f 1 ). The presence of a δ f counterterm in the Feynman rules does not lead to any physically meaningful changes in a theory, unless it appears in a diagram in conjunction with at least one more factor of f . This is just a consequence of the form taken by the counterterm; although the counterterm contains a formal infinity, it has the same Lorentz structure as a bare f term in the Lagrange density-which we know is not observable on its own. What this ultimately means is that the value δ f is not actually uniquely determined (or even at all restricted) by the structure of any O(f 1 ) loop diagrams. The one-loop counterterm may instead be chosen so as to absorb the physical infinities that arise from a diagram with not one, but two, f vertex insertions. So in this section we shall evaluate a fermion self-energy diagram in which there are two f insertions along the internal fermion line. We shall also look at the diagram with a single c insertion, since-under generic renormalization conditions-the f and c parameters will mix under radiative corrections, with the lowest-order c that may be generated purely from f being of O(f 2 ). (The full evaluation of the fermion self-energy at this order should really also include the evaluation of a diagram with a K insertion on the boson propagator. However, while c and K do mix under the action of the RG, the K-dependent radiative corrections do not play any essential role in the resolution of the β-function puzzle. We shall therefore not consider them any further.) It is not, of course, unexpected that radiative corrections at O(f 2 ) can give rise to a c-type term in the fermion self-energy. The product f ν f µ has exactly the right discrete symmetries to generate an effective c νµ . In general, beyond first order, the SME coefficients may mix in increasingly complicated ways [15]. What is novel to this discussion is the observation that there is actually a freedom to assign certain radiative corrections to be renormalizations of either c or f . The equivalence between the insertion of two f vertices and a single c vertex on an on-shell fermion line is easy to see. A fermion line carrying momentum p with three propagators and two f insertions takes the form moving a γ 5 past the middle propagator in order to cancel it out in (23). Alternatively, since ( / p − m) and ( / p + m) commute, we may write the overall numerator of (23) as if ν f µ p µ ( / p + m)p ν ( / p + m)( / p − m). When p is on shell, by further invoking the closure identity for Dirac spinors, we may sandwich p ν between momentum eigenspinors. 
Then, using the Gordon identity at zero momentum transfer, ū(p) p^ν u(p) = m ū(p) γ^ν u(p) (26), and the Dirac eigenvalue condition for the spacetime-independent spinor u(p), which is (p̸ − m)u(p) = 0, we may rewrite part of the numerator from (23) accordingly. Returning to the full expression (23), including the denominators, we have, for on-shell p, the expression (28). This has exactly the form of a c insertion into a fermion line, and the preceding calculation is actually another way of demonstrating an exact equivalence between a theory with a fermion propagation Lagrangian containing a c term and one containing a f term. The tree-level fermion two-point function for a fermion field with a f term will involve a sum of diagrams with all possible numbers of f insertions along the propagator line. According to (28), the resummation of all the diagrams with even numbers of f vertices will proceed in exactly the same way as the resummation of terms with various numbers of c insertions in a fermion theory with a c coefficient. (The terms with odd numbers of f insertions are, on the other hand, never directly observable.) The coefficient of the middle term in parentheses in (28) also matches (10), although one might conceivably wonder why then (10) is not exact, rather than an O(f²) approximation. The reason for this last apparent discrepancy is actually that the presence of a c or f term in the action affects the canonical normalization of the fermion field at higher orders; the higher-order corrections in (7) are correspondingly only needed to correct for these normalization differences. The preceding equivalence was handled entirely at the classical level, but it will provide some useful illumination for our evaluation of the O(f²) and higher loop corrections. In particular, we shall apply the results (23)-(24) to help us evaluate counterterm diagrams that include f^µ δ_f^ν, which possess a Lorentz structure identical to (22). The O(f²) diagram with a single one-particle-irreducible loop is shown in figure 3, and its value is given in (31). To extract the divergent part of the self-energy (which is what determines the RG behavior), we can restrict attention to the terms in the numerator that are logarithmically divergent by power counting, that is, the terms quadratic in the shifted integration momentum l. This lets us reduce the numerator in the integrand of (31) to the form (32). Inside a symmetric integration, we may make the usual replacement of l^α l^β with l² g^{αβ}/4. (Dimensional regularization corrections to this expression vanish as ε → 0, and so can only contribute to unimportant finite terms.) Moreover, the second and third terms on the right-hand side of (32) are equal when contracted with f^µ f^ν. This leaves a reduced numerator. The term with g^{µν} contributes, after contraction with f^µ f^ν, only to O(f²) modifications of the fermion mass and field strength renormalization. The second term, in contrast, has a structure corresponding to a radiatively generated c_νµ term. So the surviving Lorentz-violating contributions to N can be inserted back into (31) to give (34). The ε → 0 infinity in this radiative correction needs to be canceled through the use of a counterterm, although there are actually several ways that the cancellation may be achieved, combining δ_f and δ_c counterterms in a potentially intricate way. However, since it looks as if the O(f²) contribution to the fermion self-energy may include a radiatively generated c term, we should also look at the renormalization of the theory with a c in the fermion sector.
If the renormalized theory is to contain an effective c term generated by a logarithmic divergence, then the action for the bare theory must already include a c, so that the infinite correction can be absorbed. We shall therefore consider the theory with c (in addition to f), up to O(c¹). The diagram we need to compute is again the one depicted in figure 2, except that we now interpret the dot on the fermion line as an i c_νµ γ^ν (k + p)^µ insertion. Like the O(f¹) loop diagram, this is equivalent to a calculation already outlined in [14]. Structurally, the diagram with the single c vertex on the internal fermion line is quite similar to the Σ_ff diagram. [In fact, we almost could have used (28) directly to convert the propagator with two f insertions into one with a single c-like insertion. However, the derivation of (28) made assumptions about the propagation being on the mass shell, which actually make things a bit more subtle than they might initially appear.] Evaluating the c diagram directly, we find a logarithmically divergent result of a similar structure.

Figure 4: The two diagrams, incorporating both f (dot) and δ_f (circled cross) vertices, that have the correct structure to cancel the Lorentz-violating divergence in Σ_ff.

A δ_f counterterm insertion by itself has the wrong spacetime structure to cancel a term like (34); the presence of both a divergent δ_f and an additional tree-level f is actually necessary to effect the cancellation. The diagrams involved are shown in figure 4. According to (28), a c-type counterterm provides an equally plausible way of canceling the divergence from figure 3. To verify the coupling constant flow, whether for c or f, we ultimately need to look at the Callan-Symanzik equation (CSE). For the theory with only a c, the CSE for the fermion two-point correlation function takes the usual form. Using the value of δ_c read off from (44) for the case of the theory with c-type Lorentz violation only, one obtains the corresponding β_c; this agrees with the result found in [14], in spite of the rather different bookkeeping for divergences used in that paper. If we posit a theory in which the Yukawa coupling g and the Lorentz-violating f are the only couplings, then the same general form of the CSE (46) still applies. However, when we include the f-dependent diagrams from figures 3 and 4, we find that solving for β_f actually gives the results (49) and (50). The key element is the factor of two on the left-hand side of (49). The factor comes about because there are two diagrams containing δ_f, shown in figure 4, that both contribute to the two-point correlation function. It is really as if we were calculating a β-function for the power f², rather than for f itself. The mathematical effect is seemingly to spread the RG flow across the two powers of f, so that the rate of RG flow for the physically observable tensor quantity f^ν f^µ is the same as the flow for the c^νµ in an equivalent theory. Because there are two δ_f diagrams that contribute, the scaling coefficient of the δ_f needed to cancel the O(f²) divergence is half the δ_c required for the cancellation. This is what we argued for above on physical grounds. [It is actually possible to include the diagram from figure 2 as a piece of a pair of (one-particle reducible) diagrams at O(f²) with additional f insertions along the external legs, without changing the β-function result (50). Including these diagrams just adds and subtracts extra terms looking like (20) in various places, without changing the β-functions.]
However, with the general structure of all the (one-loop) O(f 2 ) and O(c 1 ) radiative corrections worked out, we are actually in a position to make a more general statement. In a theory with (the possibility of) both c and f , we actually have a continuous family of choices for how to handle the counterterms. The divergences will be adequately canceled by any combination of counterterms The parameter X can take any real value, with X = 0 representing the physically motivated choice we have now discussed extensively. Correspondingly, the RG β-functions are In a theory with c and f simultaneously present, those two Lagrangian parameters cannot be measured independently. To the order we have considered so far, the energymomentum relation and other physical quantities only depend on the combination 2c νµ − f ν f µ . According to (53-54), the RG flow for the physically meaningful combination is described by which is independent of the renormalization scheme parameter X and structurally the same as the RG flow in a theory with just a c tensor and no f at all. What we have uncovered is that the RG functions for the SME with a fermion f term are not unique; they depend on the particular renormalization scheme. In particular, there are multiple ways to select the counterterm diagrams that will cancel the divergences that appear in the O(f 2 ) fermion self-energy. From one viewpoint, this is actually rather unsurprising. Even at tree level, the description of the theory contains redundancies; it is possible to exchange a f coefficient for an equivalent c in any classical perturbative calculation. What these results show is that the ambiguity extends to the quantum level. However, on the other hand, explicit scheme dependence is not something that is usually seen in the one-loop RG structure of perturbatively coupled theories. Explicit scheme dependence typically enters at two-or three-loop orders. We have therefore identified another way in which the SME can provide new insights into the general structure of QFTs. There is also a degree of commonality with previous results, in that explicit scheme dependence is frequently associated with situations in which the physical phenomena are distinctly nonlinear functions of the scale-dependent coupling paramaters; this is also exactly what happens in the SME with the effects of the f term, which can only contribute to physically observable effects nonlinearly. Higher-Order Radiative Corrections In approaching this problem, we initially thought that finding the alternative renormalization conditions that would ensure Ψ(f µ ) = 1 2 Ψ(c νρ ) might be facilitated by extending the one-loop radiative correction calculations to O(f 3 ) and O(c 1 f 1 ). However, after some further consideration, it became clear that the resolution to the puzzle could not really involve anything beyond O(f 2 ) and O(c 1 ) in a fundamental way. The reason is that the O(f 3 ) loop corrections can only produce potentially divergent corrections to the fermion propagator that involve the structure iγ 5 f 2 f µ p µ . The corrections will have this structure even in the lightlike, f 2 = 0, case, but in that case, those corrections are manifestly vanishing. In this special case the solution cannot involve imposing conditions on the O(f 3 ) terms. However, if the method of solution involves a power-series expansion in the components of f , then it should apply just as well at f 2 = 0 as for other values of f 2 . 
(In fact, one might actually expect the theory with f² = 0 to have the most straightforward behavior. The lightlike case has the simplest and best-behaved correspondence between the f and an effective c, because the c_νµ equivalent of a lightlike f is exactly −½ f_ν f_µ; higher-order corrections are impossible simply by virtue of f² being zero.)

The next diagram to be evaluated at this order has three f insertions along the internal fermion line. The calculation for this diagram goes in a very similar way to the ones we have already done so far. Performing the l = k + xp substitution in the self-energy again, with the usual algebra, gives (58). Again taking only the quadratic part of the numerator then simplifies the necessary numerator to N = (1 − x)(l^µ l^ν p^ρ + l^µ l^ρ p^ν + l^ν l^ρ p^µ). Since the numerator N is contracted with f_µ f_ν f_ρ, the three terms in the numerator contribute equally. Hence we obtain an equivalent numerator, and completing the calculation gives the infinite part of this contribution. We see that this indeed has the same Lorentz structure as the term (18) at O(f¹). However, we will get additional divergences from cross terms at O(c¹f¹), which are of the same natural order. There are two diagrams with one c and one f insertion each on the internal line (corresponding to the two orders in which the insertions may appear). The first such diagram (with c then f along the direction of fermion number flow) yields another logarithmically divergent contribution of the same general structure.

7 Conclusions

We have seen that, when the action contains redundant Lorentz-violating operators, there can be explicit renormalization scheme dependence in the RG β-functions. Moreover, this scheme dependence already occurs at one-loop order. Depending on what is more calculationally convenient, it may be preferable to use a prescription that yields a vanishing β_f, or one in which c_νµ and f_ν f_µ have the same RG evolution. However, as seen in (55), which governs the scale dependence of the physically observable quantity 2c_νµ − f_ν f_µ that appears in the fermion kinetic energy, physical predictions should not depend on the choice of scheme; we are talking about differences in accounting, not physics. The reason for the scheme dependence in the RG functions is ultimately that the underlying action for the theory contains redundancies in its parameterization. The minimal SME Lagrange density includes all the superficially renormalizable terms that it is possible to write down involving Dirac matrices and derivatives acting on standard model fields. However, not all parameters in the Lagrange density are physically distinguishable. In particular, the same physics can be described with either a c term or a f term, or with an intermediate combination of both. When quantum corrections are included, this ambiguity naturally persists. A physical effect that occurs due to virtual particle interactions may be similar to the effect of a tree-level f term, so it may make sense to treat the quantum modification as a radiative correction to f itself. However, since the effects of a field operator with the Lorentz structure of a f term (whether at tree level or radiatively generated) cannot be distinguished from the effects of a c, any radiatively induced contribution to f could alternatively be interpreted as a radiative correction to c. The explicit one-loop β-functions (53)-(54) show how the radiative corrections to a Lorentz-violating fermion kinetic term can be parceled out as contributions to either c or f. A more cumbersome alternative way of demonstrating the existence of the ambiguity also exists.
It is possible, beginning with a bare theory containing both c and f, to use a transformation like (6) to rotate all the Lorentz violation into the bare c term. The quantum corrections can then be calculated and any radiative corrections to c determined, before performing another rotation in the Dirac space to convert the renormalized c operator into an appropriate combination of renormalized c and f terms. As previously noted, we chose in this paper to work with a Yukawa theory (with no explicit Lorentz-violating terms appearing in the fermion-boson vertex) purely for reasons of simplicity. There does not seem to be any reason to expect that the resolution we found for the puzzle concerning β_f will not apply more generally, including to the gauge sectors of the SME. Obviously, the accounting of Feynman diagrams in a gauge theory, in which the c and f terms appear in the vertex as well as in the fermion propagator, will be quite a bit more intricate than in the Yukawa theory considered here. However, we anticipate that the interplay between c and f should remain qualitatively the same. It would nonetheless be interesting to understand the details of this interplay in more general Lorentz-violating theories. Extension of these results to the gauge sector is one obvious area where further research is possible, but there are also questions still to be answered in the Yukawa sector of the SME. When other SME terms are present in the Yukawa action (in either the fermion propagation sector or in the interaction vertex), the f may mix nonlinearly with these additional coefficients. The general pattern of scheme dependence in the RG structure should persist in these more general SME Yukawa theories, but the precise details of which terms are involved remain to be worked out. Answering these various questions can provide further insights into the structure of the SME, as well as into nonlinear regimes in QFT more generally.
2021-10-19T01:15:44.459Z
2021-10-16T00:00:00.000
{ "year": 2021, "sha1": "d9e85638ecfef44907c58af92429c5e63a32d4f7", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "d9e85638ecfef44907c58af92429c5e63a32d4f7", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
251364728
pes2o/s2orc
v3-fos-license
Deterministic and Entanglement-Efficient Preparation of Amplitude-Encoded Quantum Registers Quantum computing promises to provide exponential speed-ups to certain classes of problems. In many such algorithms, a classical vector $\mathbf{b}$ is encoded in the amplitudes of a quantum state $\left |b \right>$. However, efficiently preparing $\left |b \right>$ is known to be a difficult problem because an arbitrary state of $Q$ qubits generally requires approximately $2^Q$ entangling gates, which results in significant decoherence on today's Noisy Intermediate Scale Quantum (NISQ) computers. We present a deterministic (nonvariational) algorithm that allows one to flexibly reduce the quantum resources required for state preparation in an entanglement efficient manner. Although this comes at the expense of reduced theoretical fidelity, actual fidelities on current NISQ computers might actually be higher due to reduced decoherence. We show this to be true for various cases of interest such as the normal and log-normal distributions. For low entanglement states, our algorithm can prepare states with more than an order of magnitude fewer entangling gates as compared to isometric decomposition. I. INTRODUCTION The inefficient preparation of quantum states is a bottleneck that currently prevents many quantum algorithms from achieving quantum supremacy over their classical counterparts. For example, the exponential speed-up of the Harrow-Hassidim-Lloyd (HHL) [1] algorithm presumes the existence of an efficient way to encode the components of a classical vector b in the amplitudes of a quantum state |b . However, such an oracle remains elusive. Beyond HHL, amplitude encoding is used in many other diverse areas including singular value estimation [2], least-squares fitting [3], and machine learning such as support vector machines [4] and autoencoders [5]. All these algorithms presume accurate state preparation but circuits that are overly deep threaten to introduce unacceptable decoherence in today's NISQ computers [6], resulting in an unusable set of states that hinders correct execution of the main algorithm. In this work, we demonstrate a state preparation algorithm that can allow states to be prepared with higher fidelities on current NISQ computers by allowing the user to, if necessary, optimally prepare the state with fewer quantum resources, which can result in improved fidelities on actual NISQ computers due to reduced experimental decoherence. Over the years, numerous state preparation schemes have been proposed, including preparation via factorization of the Hilbert space [7], decomposition of isometries [8], quantum generative adversarial networks (qGAN) [9,10], variational quantum circuits [11,12], and matrix product states (MPS) inspired approaches [13,14]. However, these methods all suffer from various problems and are less than ideal. Variational methods, like qGAN and most variational quantum circuits, use a fixed cir- * lee_jun_yi@imre.a-star.edu.sg cuit ansatz with parametrized gates that are progressively tuned through iterative optimization steps to prepare the target state. Although these approaches have enjoyed some success in preparing relatively small quantum registers (∼ 3 qubits), their general applicability is still doubtful for several reasons. 
Firstly, optimization of these circuits requires the use of a loss function, but it is known that the gradient of many loss functions vanish exponentially for large system sizes (the "barren plateau" problem), making it extremely difficult for the optimizer to make any progress [15]. Secondly, besides vanishing gradients, numerical experiments also suggest that the loss function's landscape is pockmarked with a multitude of local minima, making the optimizer's task even more challenging [16]. Thirdly, there is no compelling reason to choose one circuit topology over another and since there is no guarantee that the chosen ansatz will eventually yield the desired state, it is difficult to know when to stop optimizing and to try a different topology instead. While attempts have been made to also optimize the circuit's topology [12,17], it is likely that this will only aggravate the severeness of the optimization problem when scaled to larger system sizes. Finally, we note that in all these variational approaches, there is an unspecified cost of classical training time, which could well negate the theorized speed-up of quantum algorithms. In contrast to these variational methods, there are other analytical approaches that do not require intense optimizations and whose results are mathematically guaranteed (in an ideal noiseless quantum computer). They are, in that sense, "deterministic" as compared to variational methods where there is no guarantee of a solution's existence given a specified circuit topology. The chief disadvantage of these methods is that exact preparation of an arbitrary Q-qubit state generally requires ∼ 2 Q entangling gates [7,8,18], which is prohibitively expensive in the NISQ era. More recently, there have been proposals to shorten the depth of these circuits by utilizing additional ancillary qubits [19]. Nevertheless, it should be noted that the final state of such routines is entangled with the ancillary qubits. Consequently, it cannot be accepted by all algorithms expecting a usual amplitude encoding without additional modifications or measurements. A common deficiency of all the state preparation routines discussed thus far is their insensitivity to the entanglement of the target state. By this we mean that although a generic Q-qubit state may require an exponential number of entangling gates to prepare, it is clear that not every state requires such extensive resources to prepare. Indeed, a product state with no entanglement can be prepared with a single layer of non-entangling single qubit gates, but current deterministic approaches do not explicitly account for this difference between states and can, as we show in sections IV and V, potentially use an order of magnitude more entangling gates than necessary. Furthermore, although a highly entangled state may require a substantial number of entangling gates to prepare, it is worth asking if one might find an optimally approximate state (in the amplitudes of the state) that has lower entanglement, and is therefore easier to prepare with fewer entangling gates. In this paper, we utilize the properties of matrix product states to prepare arbitrary states with varying resources depending on the desired fidelity. 
Although our algorithm can prepare a target state exactly, its main utility in the NISQ era is to prepare an approximate state with fewer resources because one might ironically achieve higher fidelity to the target state by preparing an approximate state than by preparing an exact state that requires significantly more entangling gates (see, for example, our tests on actual NISQ computers in section IV and the results in Table I). Compared to other matrix product state based circuits that use Q − 1 sequential fixed d qubit gates [20], or multiple layers of fixed 2-qubit gates [13], our approach uses one layer of Q − 1 sequential d q qubit gates, where d q is a variable that depends on the entanglement of the target state and q = 1, . . . , Q − 1 for a Q qubit system. In the case of a separable state, our algorithm automatically reduces to a single layer of single qubit gates i.e. d q = 1 for all q. As we show in section V, this flexibility of using variable sized gates can allow for more efficient state preparation compared to having multiple layers of 2-qubit gates, and can frequently prepare states with more than an order of magnitude fewer gates compared to isometric decomposition, which is to our knowledge one of the most efficient deterministic state preparation algorithm available requiring to leading order 2 Q CNOT gates for preparing an arbitrary Q qubit state. This performance is comparable to 23 /24 2 Q CNOT gates for even Q [7], 2 Q CNOT gates [18], and better than 2 Q+1 CNOT gates [21]. During preparation of this manuscript, we were made aware of a similar circuit utilizing variable sized gates in the appendix of a recently published work, although the authors there did not study the use of such a circuit for state preparation [22]. The rest of the paper is structured as follows: in section II, we review relevant properties of matrix product states, and in section III, we describe the details of our state preparation algorithm. We then demonstrate how our algorithm can outperform isometric state preparation on current NISQ computers in section IV and systematically compare our method against other approaches in numerical simulations of 8, 12, and 16 qubit registers in section V. Finally, we conclude in section VI. II. MATRIX PRODUCT STATES Matrix product states, or tensor-trains as they are known in computational mathematics literature [23,24], are used widely in many areas including generative modeling in machine learning [25], estimation of singular values [26], studying of phase transitions in field theories [27], and perhaps most famously, in the density matrix renormalization group technique for studying 1-dimensional quantum many-body systems [28]. At the heart of a MPS is the re-writing of the expansion coefficients A(j 1 , . . . , j Q ) of the Q-body state j1,...,jQ A(j 1 , . . . , j Q ) |j Q , . . . , j 1 into a sum-product of Q, 3-dimensional tensors A n α n−1 ,jn,α n (also sometimes referred to as a MPS core) where the α n are summation indices (also known as virtual or bond indices) with dimensions dim(α n ) ∈ N for 1 ≤ n < Q, n ∈ N, and dim(α 0 ) = dim(α Q ) = 1. j n here refers to the computational basis state of the n th qubit, and dim(j n ) is therefore always 2. More generally, we shall use lowercase latin alphabets to represent physical indices and greek letters to label bond indices. Numerical superscripts are used to enumerate the Q + 1 different bond indices while numerical subscripts shall identify a particular qubit. 
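The successive-SVD construction described in the remainder of this section can be sketched in a few lines of NumPy. The sketch below is our illustration, not the authors' code; the function names, the tolerance, and the optional bond cap are ours. For brevity it sweeps from the first qubit and therefore yields a left-canonical MPS, whereas the paper works with the right-canonical form, which is obtained analogously by sweeping from the last qubit instead.

import numpy as np

def state_to_mps(amplitudes, num_qubits, max_bond=None, rel_tol=1e-12):
    """Decompose a length-2**Q amplitude vector into Q MPS cores via successive SVDs.

    Each core has shape (left_bond, 2, right_bond). If max_bond is given, the
    smallest singular values are dropped at every bond, giving a locally optimal
    lower-entanglement approximation (the truncated state should be renormalized).
    """
    cores = []
    remainder = np.asarray(amplitudes, dtype=complex).reshape(1, -1)  # (bond, rest)
    for n in range(num_qubits - 1):
        left_bond = remainder.shape[0]
        # Group the current qubit with the left bond and SVD off the remaining qubits.
        matrix = remainder.reshape(left_bond * 2, -1)
        u, s, vh = np.linalg.svd(matrix, full_matrices=False)
        keep = max(int(np.sum(s > rel_tol * s[0])), 1)
        if max_bond is not None:
            keep = min(keep, max_bond)
        u, s, vh = u[:, :keep], s[:keep], vh[:keep, :]
        cores.append(u.reshape(left_bond, 2, keep))
        remainder = np.diag(s) @ vh  # carry the singular values to the right
    cores.append(remainder.reshape(remainder.shape[0], 2, 1))
    return cores

def mps_to_state(cores):
    """Contract the cores back into a full amplitude vector (e.g. to check fidelity)."""
    state = cores[0]
    for core in cores[1:]:
        state = np.tensordot(state, core, axes=([-1], [0]))
    return state.reshape(-1)

# Example: a separable uniform-superposition state yields bond dimension 1 everywhere.
psi = np.full(2 ** 4, 1 / 4)
print([c.shape for c in state_to_mps(psi, 4)])  # -> [(1, 2, 1), (1, 2, 1), (1, 2, 1), (1, 2, 1)]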
We shall also refer to the α n−1 and α n bond indices of A n α n−1 ,jn,α n as the left and right bond indices of A n respectively. Analytical decomposition of the expansion coefficients A(j 1 , . . . , j Q ) into a sum-product of MPS cores are known for only a few special cases [29], but decomposition of A can always be accomplished numerically via successive singular-value decompositions (SVD) [24,30]. Since we wish to emphasize that our approach works for arbitrary quantum states, we briefly illustrate how this is accomplished. Moreover, it highlights a simple way in which locally optimal approximations to the target state can be systematically achieved. We begin by noting that the expansion coefficients A(j 1 , . . . , j Q ) can always be reshaped into a matrix. We shall use the notation A(j 1 , . . . , j Q ) to denote A as a Q-dimensional tensor, and A(j 1 , . . . , j n ; j n+1 , . . . , j Q ) as a matrix with n i=1 dim(j i ) rows and Q i=n+1 dim(j i ) columns. Given the expansion coefficients A(j 1 , . . . , j Q ), we then perform the following series of SVD and reshaping operations . . . In the first line, we have reshaped A(j 1 , . . . , j Q ) into a matrix B(j 1 ; j 2 , . . . , j Q ) and performed a SVD so that B(j 1 ; j 2 , . . . , j Q ) = α 1 U (j 1 ; α 1 )B(α 1 ; j 2 , j 3 , . . . , j Q ). Notice that dim(α 1 ) is the number of columns of U , and is also the number of singular values. Next, we reshape U (j 1 ; α 1 ) into the first of our desired MPS core We then perform a SVD on B(α 1 , j 2 ; j 3 , . . . , j Q ) in the third line, and an analogous reshaping operation on the fourth line. This process is then iterated until A(j 1 , . . . , j Q ) is fully decomposed into Q MPS cores as desired. This explicit decomposition suggests a natural and easy way to make locally optimal approximations: at each n th SVD step, keep only the k n largest singular values out of all dim(α n ) = r n singular values From the Eckart-Young theorem [31], this is locally optimal in the sense that B kn is the best k n rank approximation to B n (as defined in Eq. (3)) under both the Frobenius and spectral norm, with the error in the Frobenius norm simply given by the quadrature sum of the truncated singular values. Physically, we should understand these truncations as effectively finding optimally approximate states (in the Frobenius norm of the expansion coeffcients A(j 1 , . . . , j Q )) that have lower entanglement than the original state. Lastly, we note that the decomposition of A(j 1 , . . . , j Q ) into the MPS form is not unique. One easy way to see this is to recognize that at each SVD in Eq. (2), we may insert the identity I = XX −1 (where X is any unitary matrix) in between U and B and re-define U X → U , X −1 B → B such that each A n α n−1 ,jn,α n in the decomposition can be changed without altering the quantum state. This gauge freedom allows a MPS state to be transformed such that the cores A n for 1 < n ≤ Q obey the orthonormal relations Moreover, if the MPS state is normalized, A 1 also obeys the relation A MPS state that obeys these relations is said to be a right-canonical MPS [32] and we always shall assume that our MPS state is in the right-canonical form. Our requirement that the MPS state is right-normalizable places constraints on the order of SVD truncations when approximating a state. In particular, the dimension of the left bond index of A n in any right-normalizable MPS must obey the inequality To see why this is true, we note that the right normalization relations (Eq. 
(4) and (5)) can be interpreted to mean that each A n core has dim(α n−1 ) orthogonal C dim(jn)×dim(α n ) vectors. Since a C dim(jn)×dim(α n ) vector space cannot have more than dim(j n ) × dim(α n ) orthogonal vectors, the inequality Eq. (6) naturally follows. Given a Q qubit system, there are Q − 1 non-trivial SVD to perform. To obtain the next best approximation, the smallest singular value, relative to the largest singular value in the same SVD, should be dropped. However, inequality Eq. (6) implies that this should only be done when the resulting MPS obeys Eq. (6). We note that this only affects the order of the truncations; ultimately, any MPS state can be truncated all the way down such that dim(α n ) = 1 for all n. III. MATRIX PRODUCT INITIALIZER We now show how to prepare an arbitrary quantum state that has been expressed as a right-canonical MPS. Previously, Ran showed how a MPS consisting of cores with bond dimensions of 2 may be prepared exactly with sequential 2-qubit gates [13]. However, this sequence was unable to exactly prepare MPSs with cores of arbitrary bond dimensions, which limits the usefulness of the technique. Here, we show how MPSs with cores of arbitrary bond dimensions may be prepared, which implies that any arbitrary state may be prepared using our technique. More importantly, it also allows for optimally approximate states with lower entanglement than the original state to be prepared, which can be very useful in the current NISQ era. The problem of state preparation may be described as finding a unitary matrix U such that U |0, . . . , 0 = |ψ , where |ψ is the desired target state to prepare. There is however no unique solution to U . On a gate-based quantum computer, U must be formed from the product of several d-qubit gates, where d ∈ N, d ≤ Q. However, multi-qubit gates are experimentally difficult to implement and error-prone in the NISQ era. We therefore seek to find a U that only uses d > 1 qubit gates when "absolutely necessary". By "absolutely necessary", we mean that in general, d > 1 qubit gates are only needed when there is significant entanglement in the target quantum state. After all, a separable product state can be prepared with just one layer of 1 qubit gates. We thus want a U that is made out of variable d qubit gates with d > 1 only when the entanglement of the state calls for it. Moreover, it is plain that every qubit needs at least one gate to operate on it. Consequently, we write down as an ansatz that U can be given by the quantum circuit that is shown in Fig. 1. Essentially, the ansatz consists of Q sequential d n -qubit gates operating on the n th to n + d n − 1 th qubit for n = 1, . . . , Q. We denote the unitary matrix represented by these gates as G [n,dn] . Notice that the ansatz is linear in the number of qubits. Moreover, although it is sequential as shown in Fig. 1, it can potentially be parallelized for appropriate values of d n . For example, if d n = 1 for all n, then the ansatz is simply one layer of 1-qubit gates. Lastly, we point out that although the circuit in Fig. 1 seems to suggest some restriction on d n (for example, G [Q−1,dQ−1] cannot have d n > 2), this is, as we shall show later, not a limitation. Armed with this ansatz, we may write down the unitary matrix of each gate in the initializer. For the first gate, we have Consistent with our notation above, j ′ n above denotes the initial quantum number of the n th qubit (typically 0). 
On the other hand, j n is the final quantum number of the n th qubit after state preparation. This is distinct from b m n , which is an intermediate quantum number of the n th qubit after operation of the m th gate (see Fig. 1). Likewise, for n = 2, . . . , Q − 1, G [n,dn] has the form where Notice that we have not in Eq. (10) defined j for the case where d n−1 − 1 > d n . As we will show later, this particular case is unnecessary, but for now we have for where The unitary matrix U is then In the summation above, we have used the notation {b m } to mean the set of all b indices with superscript equal to m. In other words, all the intermediate quantum numbers are summed over in Eq. (13). Also, since G [n,dn] , as defined in Eq. (7), (8), and (11) are 2 dn -dimensional matrices, but U in Eq. (13) is 2 Q -dimensional, we have implicitly assumed in Eq. (13) that each G [n,dn] has been appropriately expanded via the correct Kronecker products in accordance to the qubits that it operates on. For state preparation, we want U |0, . . . , 0 = |ψ , where |ψ is a right-canonical MPS as in Eq. (1). More precisely, we want U such that when j ′ n = 0 for all n. Here, we are viewing A as a tensor and the G [n,dn] in their "native" space. Comparing Eq. (14) with Eq. (2), we observe that in both cases, we have Q free indices (j 1 , . . . , j Q ), Q MPS cores and gates, as well as Q − 1 non-trivial summation indices, which suggests that Eq. (14) can be accomplished through a mapping between each n th MPS core and gate. For this mapping to occur, we need to ensure that the space spanned by {b n } is at least as large as dim(α n ). This can be enforced by choosing This choice of d n also ensures that d n−1 − 1 ≤ d n ∀ n so that our omission of the case d n−1 − 1 > d n in Eq. (10) is justified. To see this, we note that for any right-canonical MPS, the inequality Eq. (6) must hold. Taking log 2 and adding one to both sides of the inequality, we have after rounding up We remark here that the algorithm presented in [13] can, for the case of just a single layer, be considered to be a special case of our algorithm with d n = 2 for all n. If the exact MPS representation of the target state has bond dimensions larger than 2, the algorithm in [13], for the case of a single layer, approximates the target state to a matrix product state with uniform bond dimensions of 2. One consequence of this approximation in [13] is that it is unable to exactly prepare any arbitrary state with bond dimensions larger than 2, and to better approximate states with more entanglement, the algorithm in [13] needs to use multiple layers of 2-qubit gates. Conceptually, this is distinct from our approach which is able to exactly prepare any arbitrary state of Q qubits with one layer of Q − 1 sequential d q qubit gates where d q is a variable that depends on the entanglement of the target state. On the other hand, the circuit presented in [20] is equivalent to the case of d n = d for all n where d is some constant that is sufficiently large to allow a faithful representation of the target state with a MPS. However, compared to our approach, the circuit in [20] is inefficient in that it does not take into account variations in the bond dimensions of individual bonds. Obviously, for the case of a single layer, the algorithm in [13] approaches the circuit of [20] as d n increases uniformly for all n and becomes increasingly capable of faithfully reproducing states with higher entanglement. 
Nevertheless, such a procedure is also inefficient for the same reason why [20] is not ideal. Physically, the choice of d n in our Eq. (15) means that the number of qubits the n th gate operates on is directly dependent on the entanglement between the subsystems {j 1 , . . . , j n } and {j n+1 , . . . , j Q }. For example, if dim(α n ) = 1 ∀n, i.e. the target state is a product state, then d n = 1 ∀n by Eq. (15), and the circuit reduces to simply one layer of 1-qubit gates. Thus our circuit by construction only uses d > 1 gates when the entanglement of the target state calls for it. It remains to prescribe the mapping from a MPS core A n to a gate G [n,dn] . We begin by observing that each MPS core can be reshaped into a matrix A n (α n−1 ; α n , j n ) = A n (i; j). In converting a multiindex like α n , j n into a single index j, we shall adopt a C-indexing convention, that is, indices on the right (j n in this case) are incremented first before those on the left (α n in this case). Each A n core can thus be viewed as a matrix with dim(α n−1 ) rows and 2 dim(α n ) columns. Now by construction, G [n,dn] has at least dim({b n−1 }) ≥ dim(α n−1 ) columns [33]. Furthermore, it has 2 dim({b n }) ≥ 2 dim(α n ) rows [34]. We can therefore map the dim(α n−1 ) rows of A n (α n−1 ; α n , j n ) into the first dim(α n−1 ) columns of G [n,dn] , zero-padding any remaining rows. With this accomplished, any remaining columns of G [n,dn] can be filled by requiring that G [n,dn] is unitary, as is necessary for a quantum gate. To do this, we note that since the MPS is right-canonical, each of the dim(α n−1 ), 2 dim(α n )-dimensional row vector in A n are orthonormal to each other. Consequently, a trivial embedding of them in a potentially larger 2 dim({b n })dimensional space by zero-padding the extra dimensions will mean that they are still orthonormal in the larger space. We thus have dim(α n−1 ) filled columns that are orthonormal vectors in a C 2 dim({b n }) vector space, and 2 dim({b n }) − dim(α n−1 ) remaining columns to fill such that G [n,dn] is unitary. Clearly, this can be accomplished by filling the still empty columns with the remaining 2 dim({b n }) − dim(α n−1 ) orthonormal vectors in C 2 dim({b n }) . We conclude this section with a remark on the time complexity required to obtain the MPS cores required for our algorithm to prepare a state with N basis states, equivalent in this context to loading a classical vector of length N into a quantum computer. In analyzing the time complexity, we are mainly interested in the asymptotic complexity for large N , and we therefore (like the HHL algorithm) assume that the classical input data is sparse since dense inputs would require exponentially increasing classical memory requirements. As discussed in section II, the MPS cores are obtained through a series of SVDs. It is therefore not difficult to show that to leading order, the complexity required to obtain the MPS cores is the sparse SVD of a √ N × √ N matrix, which has a time complexity ofÕ(s + k 2 κ 4 k √ N /ǫ 2 ), wheres is the number of non-zero elements in the √ N × √ N matrix, k is the number of singular values requested, κ k is the ratio of the largest singular value to the k th largest singular values of the √ N × √ N matrix, and ǫ is the error [35]. As a comparison, the time complexity of HHL isÕ(log(N )s 2 κ 2 /ǫ), where s is in this case the maximum number of non-zero entries per row of A, the N × N matrix representing the linear system of equations that HHL is attempting to "solve" [1]. 
Similarly, κ is here the condition number of A. Evidently, the time complexity of obtaining our MPS cores has a worse scaling in N than HHL. However, we note that it is still better than the complexity of inverting a sparse matrix classically using the conjugate gradient method, which has a time complexity of O(N sκ log(1/ǫ)) [1]. Consequently, using our state preparation algorithm together with HHL can still potentially provide a speedup over classical matrix inversion although an important caveat here is that the output of HHL does not give the full solution that classical matrix inversion provides. Finally, we note that the time complexity of computing the MPS cores scales quadratically with the number of singular values requested of the SVD. Physically, this implies that states with less entanglement have lower time complexity so that the classical cost of our algorithm is least for states with low entanglement. IV. PERFORMANCE ON NISQ COMPUTERS Although our algorithm is capable of preparing arbitrary states, it is generally most efficient in terms of quantum gate operations when preparing states with low entanglement. This degree of entanglement for a Q qubit system with density matrix ρ and qubits {j 1 , . . . , j Q } can be quantified with the mean normalized bipartite entropy, which we define as where ρ n ≡ Tr n (ρ) is the reduced density matrix of the sub-partition {j 1 , . . . , j n }, and the average is over all sub-partitions {j 1 }, {j 1 , j 2 }, . . ., {j 1 , . . . , j Q−1 }. Note that S(ρ n )/ min(n, Q − n) is just the normalized bipartite von Neumann entropy, which is a measure of the entanglement between the bipartitions {j 1 , . . . , j n } and {j n+1 , . . . , j Q }, with zero indicating a separable state and one indicating a maximally entangled state.S, which ranges between zero and one, is therefore the average normalized bipartite von Neumann entropy over all bipartitions of the system. Generally speaking, our algorithm yields the most savings in terms of gate count and circuit depth compared to other deterministic methods whenS is small. An example of a target state with a relatively low mean normalized bipartite entropy is the sinusoidal probability distribution shown in Fig. 2d, withS = 0.076. In this case, we were able to, without any approximation, prepare the target distribution with an order of magnitude less CNOT gates and with a circuit that is shallower by about three times compared to isometric decomposition [8] (details in Table I). Not surprisingly, our algorithm was also able to achieve a higher fidelity, which can be visualized in Fig. 2d that shows the target distribution in a solid blue line, as well as the measured probability distribution obtained via our algorithm (green crosses) and isometric decomposition (red pluses). Error bars shown in this and other plots give the expected 1-sigma deviation of the measured probabilities based on the number of shots and theoretical probability of obtaining each state. In all our tests, we simply chose the least busy quantum hardware at execution time without special consideration for the error rate and qubit connectivity and we used the standard transpilation routine on Qiskit [36] with the highest level of optimization. Although the results summarized in Table I were all obtained from quantum devices with a one-dimensional qubit layout, we have also obtained similar results using backends like ibmq_lima that have two-dimensional layouts. 
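Eq. (17) is straightforward to evaluate classically for a given amplitude vector. The following short NumPy sketch (ours; the names are illustrative) computes the mean normalized bipartite entropy by taking the Schmidt coefficients of each bipartition from an SVD; base-2 logarithms are used so that every normalized term lies between zero and one.

import numpy as np

def mean_normalized_bipartite_entropy(amplitudes, num_qubits):
    """Average of S(rho_n)/min(n, Q-n) over the bipartitions {j_1..j_n} | {j_n+1..j_Q}."""
    psi = np.asarray(amplitudes, dtype=complex)
    total = 0.0
    for n in range(1, num_qubits):
        # Schmidt coefficients of the cut after qubit n come from an SVD of the
        # amplitude vector reshaped into a 2**n x 2**(Q-n) matrix.
        s = np.linalg.svd(psi.reshape(2 ** n, -1), compute_uv=False)
        p = s ** 2
        p = p[p > 1e-15]
        entropy_bits = -np.sum(p * np.log2(p))
        total += entropy_bits / min(n, num_qubits - n)
    return total / (num_qubits - 1)

# A product state gives 0; a 2-qubit Bell state gives 1.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(mean_normalized_bipartite_entropy(bell, 2))  # -> 1.0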
For states with larger entanglement, optimally approximate states can be found using the procedure outlined in appendix II, and using this approximate state, our method can sometimes still achieve better fidelities than state preparation via existing deterministic methods. In the first row of Fig. 2, we compare our preparation of a normal (a) and log-normal (b) distribution, which are both of interest to the quantum finance community [37][38][39][40]. In both cases, the target probability distributions are plotted as blue solid lines while their matrix product state approximations are given by orange dots. As discussed in appendix II, these are states that are optimally (in the Frobenius norm) approximate to the target state but with lower entanglement. To ensure a fair comparison, the isometric state preparation shown in Fig. 2a and Fig. 2b was also used to prepare the same approximate MPS target state as our algorithm. However, as Table I shows, there is actually little difference in the experimental fidelity of the isometric method when it prepares the exact target state instead. This is not surprising since isometric decomposition does not specifically optimize based on the entanglement of the target state. We emphasize that although our algorithm prepared an approximate state, it nevertheless performs substantially better than an exact state preparation using isometric decomposition, as the results in Table I show. In particular, for the case of the normal distribution in Fig. 2a, our method used more than an order of magnitude fewer CNOT gates, with about 2.5 times shallower depth, and achieved a significantly higher fidelity to the target state. (Table I caption: Benchmark of our state preparation algorithm compared to isometric decomposition. Gate (CNOT, Rz, and SX) counts and circuit depth for each method, as well as their experimentally measured fidelity to the target state, are tabulated. When an approximate state is prepared, the fidelity of the approximate state to the exact state is listed under "Fidelity (theory)"; an entry of 1 means the exact state was used. The mean normalized bipartite entropy of the exact target state, defined in Eq. (17), is also given.) As the number of qubits grows, any classical input data to a quantum algorithm like HHL must also be sparse, since dense classical inputs would require exponentially increasing classical memory. It is therefore typically assumed that classical inputs to quantum algorithms like HHL are sparse, and it is hence interesting to see if our method can generate substantial savings in such cases. To test this, we randomly generated a vector with a sparsity of 0.9 where both the location and magnitude of the non-zero elements were randomly chosen. Although not every sparse vector will necessarily have low S̄, some sparse vectors may have very low entropy, and in this case, our algorithm can enable significantly better results over isometric state preparation. For example, the sparse vector state visualized in Fig. 2c has a very low S̄ of 0.014, and in this case, we were able to exactly prepare the state with just 2 CNOT gates, compared to 70 with an isometric decomposition. This is a striking demonstration of the utility of entanglement-sensitive routines like ours. With a circuit depth that is more than six times shallower than isometric decomposition, we were able to achieve a significantly higher fidelity of 0.959 over 0.717. As we demonstrated on NISQ computers in Fig.
2 and Table I, having the ability to prepare optimally approximate states is important in the NISQ era since one may obtain higher fidelities to the target state by preparing approximate states instead of the exact state. In our algorithm, this is because such approximate states have lower entanglement and require fewer entangling gates to prepare, which results in reduced experimental decoherence on NISQ computers. One feature of our algorithm is its ability to successively approximate the target state so that successive approximate states can, at the expense of their fidelity to the target state, be prepared with fewer gates. From a practical point of view, one can therefore run our algorithm till the (theoretical) fidelity to the target state falls below some acceptable threshold, and then choose the circuit produced by the preceding iteration as the one to use. We emphasize that while this theoretical fidelity is an upper limit to the achievable fidelity, a circuit produced by our algorithm may on NISQ computers nevertheless experimentally outperform another algorithm that gives a higher theoretical fidelity but suffers from significantly more decoherence experimentally due to its deeper circuit and use of considerably more gates. V. PERFORMANCE IN NUMERICAL SIMULATIONS In Fig. 3, we plot the minimum number of basis gates and circuit depth (color of markers) required to prepare a randomly generated state with fidelities above 0.9, 0.95, and 0.99, using our algorithm (circular markers), Ran's algorithm [13], and isometric decomposition [8] ('+' markers). The entropies (S from Eq. (17)) of these states is plotted on the horizontal axis. To test the scaling of the algorithms, we performed the study for 8, 12, and 16 qubits. Clearly, isometric decomposition consistently performs worse in requiring more gates and deeper circuits as compared to both Ran's algorithm and our method. To ensure a fair comparison, the isometric decomposition method here was used to prepare the same approximate state as that produced by our algorithm. We checked that preparing the exact state will generally require more resources even for isometric decomposition, and we have therefore for clarity not shown this poorer result in Fig. 3. It is interesting to contrast Ran's circuit, which consists of successive layers of sequential 2-qubit gates, with our algorithm. As Fig 3 demonstrates, Ran's method generally requires a consistent number of gates and depth with little dependence on the entropy of the target state. On the other hand, our approach shows markedly better performance at low entropies, and similar to worse performance at higher entropies, with increasing frequency of better performance at decreasing fidelity requirements. We highlight that these are theoretical fidelity requirements, and that on NISQ computers, our algorithm that prepares a state with theoretical fidelity > 0.9 can experimentally outperform another algorithm with theoretical fidelity > 0.99 by using significantly fewer gates and by having a much shallower circuit (see, for example, the tests in section IV and the results in Table I). Physically, the difference between our algorithm and Ran's method is that whereas Ran uses successive layers of sequential 2-qubit gates to increasingly approximate the target state, we use a single layer of sequential d q -qubit gates, where d q is an entanglement-dependent variable and q = 1, . . . , Q − 1 for a Q qubit system. 
Since each gate operates on a variable number of qubits, and d q is sensitive to the entanglement of the state, we can prepare low entanglement states with very few gates. On the other hand, because Ran's method can only use discrete layers of 2-qubit gates (this can be observed in Fig. 3 as discrete bands in the number of gates for the Ran method), it is unable to prepare low entanglement states as efficiently as our algorithm. Finally, we note that our method can, in principle, prepare any state exactly, whereas Ran's method can only asymptotically become exact with an increasing number of layers. Nevertheless, we acknowledge that this ability to prepare states exactly is largely moot in the NISQ era due to environmental decoherence. VI. CONCLUSION Arbitrary quantum states are difficult to prepare with high fidelities on today's NISQ computers. Due to noise from multi-qubit entangling gates, deep quantum circuits with a large number of such gates rapidly become incoherent and useless. Although analytical methods to exactly prepare arbitrary states have been known for a while, these approaches suffer from deep circuits and high entangling gate counts that increase exponentially with the number of qubits. This translates in practice to significant decoherence of the qubits that makes them unusable for further computation. In response to this, variational circuits with parametrized gates have since been proposed for state preparation. Nevertheless, these circuits are not a panacea and are instead plagued with their own problems. For example, finding the loss function's global minimum in these methods is highly challenging due to the curse of dimensionality, gradients that vanish exponentially with the number of qubits, and a proliferation of local minima in the loss function's landscape. More importantly, there is no obvious guiding principle for the selection of the circuit's topology, and there is no guarantee that any given parametrized circuit will actually be able to prepare the state exactly. Furthermore, none of these existing approaches take into account the fact that states with less entanglement should in theory be easier to prepare with fewer entangling gates. In this work, we use a state preparation circuit that uses only the same number of nearest-neighbor gates as the number of qubits, which minimizes the number of expensive SWAP operations due to limited qubit connectivity. Although each of these nearest-neighbor gates may be an expensive multi-qubit gate that further decomposes into multiple one- and two-qubit gates, our routine only uses such gates sparingly when the entanglement of the target state calls for them. In cases where a target state (or a sub-system of it) is separable, our circuit automatically takes advantage of this by utilizing single qubit gates and parallelizing gate operations where possible. In short, our state preparation routine allows states with less entanglement to be prepared more efficiently compared to other deterministic but entanglement-insensitive methods such as isometric decomposition.
For states with low entanglement, our tests on actual quantum computers hosted on IBM's cloud service show that our circuit is capable of achieving higher state fidelities compared to standard isometric decomposition by utilizing an order of magnitude less CNOT gates and having a significantly shallower circuit. For states with higher entanglement, we can still sometimes achieve significantly better performance by preparing an optimally approximate state with lower entanglement, including states that represent a normal and log-normal distribution. Finally, we emphasize that unlike variational approaches, our method does not require challenging minimizations in a high dimensional space where the existence of a solution is uncertain, and exponentially vanishing gradients and a multitude of local minima make progress towards a global minimum difficult. Our state preparation algorithm is therefore a valuable tool in the NISQ era for preparing arbitrary states with acceptable fidelities without having to incur significant classical optimization costs.
2022-08-06T15:14:56.629Z
2021-10-26T00:00:00.000
{ "year": 2021, "sha1": "79e51e5df7908fbb27e72a9423ebe02fd82d4b29", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "67ab01786edcd1acfb08154d9126b24a4aec9979", "s2fieldsofstudy": [ "Computer Science", "Physics" ], "extfieldsofstudy": [ "Physics" ] }
233966778
pes2o/s2orc
v3-fos-license
Seismic Performance of LRS-FRP–Concrete–Steel Tubular Double Coupling Beam : To improve the ductility and seismic performance of a double coupling beam, the authors applied a polyethylene terephthalate (PET) sheet and steel tube to form fiber-reinforced polymer (FRP)–concrete–steel double-skin tubular (DST) composite coupling beams. A low-cyclic reversed experimental program was carried out which factored in the member form, steel tube diameter, and construction methods. The results indicate that the ductility and energy dissipation performance of double coupling beams—whether wrapped with a PET-FRP sheet or surrounded by an FRP–concrete– steel DST composite system—is a substantial improvement over the traditional reinforced-concrete double coupling beam (RC-DCB). The ductility coefficient and accumulated energy dissipation of the DST-DCB members improved above 170% and 2300%, respectively. These percentages compare to the RC-DCB and are based on the rupture of a PET-FRP sheet. The results are similar to those of the large rupture strain double coupling beam (LRS-DCB). Meanwhile, the external wrapped PET-FRP sheet does not affect the initial stiffness and peak strength of the RC-DCB. Relatively, the inner steel tube will improve the initial stiffness, yielding strength, and peak strength. DST-DCB members still have considerable deformability after 85% of peak strength since the external PET-FRP sheet provided an effective constraint effect on the core concrete and the inner steel tube could bear excellent shear deformation. sheet or the inner steel tube. Since the limited constraint effect is provided by the PET-FRP material, the normalized stiffness of the LRS-DCB specimen dropped faster than those with embedded steel tubes in the initial stage. With the deformation increase, the increasing confined stress limited the stiffness degradation, which caused the slope of the curves to flatten out. The initial stiffness of the DST-CB specimens was mainly controlled by the inner steel tube, which dropped at a slower rate than that of the LRS-DCB specimen. With the increase of deformation, the inner steel tube yielded, and the PET-FRP material constraint effect controlled the stiffness degradation of the specimens. Introduction Shear wall structures and frame-shear wall structures are widely used in high-rise buildings. As an important structural member of these structures, the coupling beam is usually designed as the energy-dissipating member that produces a plastic hinge after yielding, making it the first line of defense for anti-seismic energy dissipation. Under the action of seismic load, the coupling beam will yield and fail, and act as energy dissipation, to avoid the premature failure of the wall limb. Due to the characteristics of the small span-height ratio and large stiffness of the connected shear wall, the coupling beam is prone to brittle shear failure. To improve the mechanical behavior of the coupling beam, scholars have developed many strategies since the 1970s. The research plans generally focus on improving the following three aspects: new reinforcement forms [1,2], steel-concrete composite members [3], and using high-performance concrete [4]. However, these strategies are not ideal for improving the ductility or energy consumption of the coupling beam [4], and some of them even limit the thickness of the beam. With the development and application of fiber-reinforced polymer (FRP) in civil engineering, Teng et al. 
developed a new type of hybrid FRP-concrete-steel double-skin tubular (DST) composite member in 2004 [5,6], which consists of an external FRP tube, internal steel tube, and reinforced concrete (RC) in the sandwich. Different components in the system work well together. The outer FRP tube has a good restraint effect on the sandwich concrete, which improves the strength, plasticity, and toughness of concrete. At the same time, the confined concrete can also prevent the buckling of the internal steel tube. This type of composite member presents excellent anti-seismic performance, ductility, and corrosion resistance. In 2006, Yu et al. [7] researched the flexural mechanical behavior of hybrid glass FRP (GFRP)-concrete-steel beams. They found this DST-GFRP composite beam to have excellent flexural capability and ductility, especially when the inner steel tube is reasonably offset toward the tensile zone. However, the bonding behavior between concrete and its steel tube interface needs to be improved. The test results indicate that interface slippage would result in obvious load fluctuations-even sudden drops, which is a threat to structural safety. The research carried out by Idris and Ozbakaloglu [8,9] shows that the bond behavior could be improved significantly by using mechanical connectors. Meanwhile, the novel composite system presented in this article also presents good ductility under cyclic loading, and the ductility performance of the composite beam would not be significantly weakened by the inner steel tube filled with concrete. Moreover, the compressive behaviors of FRP-concrete steel double-skin tubular columns under both monotonic and cyclic loads have also been investigated by some scholars [10][11][12][13][14][15][16][17][18]. Except for the good ductility and anti-seismic performance, [13,14] also indicated that the ductility could be improved by filling concrete at both ends of the inner steel tube, while [19,20] recommended using a stiffener-reinforced steel tube. For the compressive behavior of large rupture strain FRP (LRS-FRP), the rupture strain was usually larger than 5% (as in the case of a polyethylene terephthalate, PET) wrapped concrete-steel column, some researchers [21][22][23] have also reported that the corresponding composite system has a good bearing capacity and ductility, with an ultimate compressive strain in some cases reaching up to 15%. Notably, Wang et al.'s [24] test results show that the failure mode of the composite column tends to be a brittle failure when using ultra-high-performance fiber-reinforced concrete instead of traditional concrete. As just mentioned, the application of an FRP-concrete-steel DST composite member is mainly concentrated in the RC column and beam. For this type of coupling beam with a small span-height RC member, the corresponding research is rarely reported. Based on the excellent performance of the DST system on the flexural behavior of composite beams and the great ductility performance of DST composite columns, the authors applied LRS-FRP to the DST double coupling beam to form an LRS-FRP-concrete-steel DST composite coupling beam. In which, the double coupling beam not only facilitates the arrangement of pipeline lines but also significantly reduces the internal force of the coupling beam and improves the seismic performance of the coupling beam. However, it has an unavoidable disadvantage of rapid degradation of stiffness and bearing capacity at a later stage [25]. 
This paper features the authors' quasi-static low-cycle reversed loading experimental program carried out to research the bearing capacity, ductility, and energy dissipation of LRS-FRP-concrete-steel DST double coupling beams. Specimen Design A total of five coupling beams were manufactured and tested by considering the cross-section form, diameter of the inner steel tube, and construction technology. As shown in Figure 1, the geometry of the coupling beam is 200 mm in width, 900 mm in length, and 600 mm in height. Figure 2 illustrates the brief manufacturing processes of specimens. The control specimen is one double RC double coupling beam (RC-DCB). The specimen, LRS-DCB, is one double RC coupling beam wrapped with one PET sheet. The difference between RC-DCB and LRS-DCB is the extra wrapped PET sheet. The details of these two specimens are illustrated in Figure 1a. The other three coupling beams are LRS-FRP-concrete-steel DST double coupling beams, as shown in Figure 1b, the differences between them are the diameter of the inner steel tube and the manufacturing method. Thus, DST-DCB-89 is one double coupling beam with an inner steel tube of 89 mm diameter and wrapped with one PET-FRP sheet; DST-DCB-76 is one double coupling beam with an inner steel tube of 76 mm diameter and wrapped with one PET-FRP sheet; DST-DCB-89P is one precast double coupling beam, as shown in Figure 2, with an inner steel tube of 89 mm diameter and wrapped with one PET-FRP sheet. The details of all specimens are listed in Table 1. Material Property The materials used in this test program include concrete, polyethylene terephthalate (PET) FRP sheets, steel bars, and steel tubes. The average compressive strength of concrete was 20.2 MPa, determined by testing three cylinders with a diameter of 150 mm and a height of 300 mm based on the test code of China GB50152-2012T [26]. According to the Chinese code GB50010-2016 [27], the modulus of elasticity of concrete is taken as 30GPa. The material properties of PET-FRP are listed in Table 2 and Figure 3, in which the coupon tested data were obtained by the axial tensile test of five specimens with a width of 25 mm based on the test code of American society for testing and materials (ASTM) D3039-2015 [28]. The material properties of reinforcement and steel tube are tested based on China's code GB/T 228-2010 [29] and illustrated in Table 3 and Figure 3. Figure 4 illustrates the test setup for the quasi-static low-cycle reversed loading experiment.
To make sure the loading scheme replicates the real situation of the coupling beam, as shown in Figure 4a,b, the equipment consists of four parts that were designed, fabricated, and applied in the test. Notably, the coupling beam was rotated 90 • and erected along the longitudinal axis of the specimen. From this point, the coupling beam was connected with the bottom foundation and top L-shaped loading girder through a four-bar mechanism. The top L-shaped girder transmitted the load from the 500 kN hydraulic servosystem to the coupling beam. In addition, the four-bar mechanism can ensure that the horizontal force applied in the loading process is always in the loading plane and remains horizontal. Meanwhile, the four-bar mechanism can prevent torsion from occurring in the top steel beam due to the failure of the ends of the coupling beam under large deformation. The displacement control was adopted during the whole testing process illustrated in Figure 4c according to the following actions: 1. One relatively small cyclic horizontal force was applied to the coupling beam to check whether the loading equipment, instrumentation, and acquisition equipment were working properly. 2. The horizontal displacement of each cycle was set to increase by 1 mm in each turn until loading to 5 mm. 3. The amplitude was then set to 5 mm and cycled three times under each displacement level. 4. The specimen was examined and considered to be invalid until its ultimate bearing capacity dropped below 85% of the maximum bearing capacity or the wrapped PET-FRP sheet ruptured. The displacement control was adopted during the whole testing process illustrated in Figure 4c according to the following actions: 1. One relatively small cyclic horizontal force was applied to the coupling beam to check whether the loading equipment, instrumentation, and acquisition equipment were working properly. 2. The horizontal displacement of each cycle was set to increase by 1 mm in each turn until loading to 5 mm. 3. The amplitude was then set to 5 mm and cycled three times under each displacement level. 4. The specimen was examined and considered to be invalid until its ultimate bearing capacity dropped below 85% of the maximum bearing capacity or the wrapped PET-FRP sheet ruptured. Data Measurement The recorded data are mainly used for analysis of the hysteretic curve, shear and longitudinal deformation of the double coupling beam, and the anti-seismic performance of the DST coupling beam. The horizontal force V and displacement Δ were recorded by the dynamic acquisition box produced by DEWEsoft DAQ software ver. X2 SP. This system can present the real-time force-displacement curve, which helps to accurately determine changes in force and displacement. Strain gauges 5 mm in length were used to measure the strains of steel bars and PET sheet. Figure 5 illustrates the strain gauge locations. Figure 6 shows the horizontal deformation of the composite system, which was recorded by two linear variable differential Data Measurement The recorded data are mainly used for analysis of the hysteretic curve, shear and longitudinal deformation of the double coupling beam, and the anti-seismic performance of the DST coupling beam. The horizontal force V and displacement ∆ were recorded by the dynamic acquisition box produced by DEWEsoft DAQ software ver. X2 SP. This system can present the real-time force-displacement curve, which helps to accurately determine changes in force and displacement. 
Strain gauges 5 mm in length were used to measure the strains of steel bars and PET sheet. Figure 5 illustrates the strain gauge locations. Figure 6 shows the horizontal deformation of the composite system, which was recorded by two linear variable differential transformer displacement sensors (LVDT, LVDT-1, and LVDT-2 in Figure 6a). The longitudinal deformation was recorded by LVDT-3. The shear deformation of the coupling beam, as illustrated in Figure 6b, was obtained by transforming the test data from LVDT4-7 (as illustrated in Figure 6a) by calculating Equation. (1). where ∆ v and l are the shear deformation and initial length of the coupling beam, respectively; γ is the deflection angle between the initial and real-time state of the coupling beam, which can be calculated as γ = d (α + β)/(2l b ). As illustrated in Figure 6, d is the initial length of LVDT, while l and b are the initial projection lengths of the LVDTs in the corresponding direction. Finally, α and β are the length change of two LVDTs. where Δv and l are the shear deformation and initial length of the coupling beam, respectively; γ is the deflection angle between the initial and real-time state of the coupling beam, which can be calculated as γ = d'(α + β)/(2l'b'). As illustrated in Figure 6, d′ is the initial length of LVDT, while l′ and b′ are the initial projection lengths of the LVDTs in the corresponding direction. Finally, α and β are the length change of two LVDTs. Failure Modes The failure modes of the specimens are shown in Figure 7. Notably, only the failure behavior of specimen DST-DCB-89 appears in Figure 7 due to the similarity of the failure modes in each of the three coupled beams with inner steel tubes. Three different kinds of failure modes are illustrated in Figure 7. Compared to cross diagonal cracks observed in specimen RC-DCB (Figure 7a), specimen LRS-DCB was noticeably strengthened with the PET-FRP wrap as evidenced by the appearance of more flexural cracks and fewer shear cracks at both ends. Relatively speaking, specimen DST-DCB-89 presented one main bending crack in the plane perpendicular to the force direction and yet the concrete was crushed. where Δv and l are the shear deformation and initial length of the coupling beam, respectively; γ is the deflection angle between the initial and real-time state of the coupling beam, which can be calculated as γ = d'(α + β)/(2l'b'). As illustrated in Figure 6, d′ is the initial length of LVDT, while l′ and b′ are the initial projection lengths of the LVDTs in the corresponding direction. Finally, α and β are the length change of two LVDTs. Failure Modes The failure modes of the specimens are shown in Figure 7. Notably, only the failure behavior of specimen DST-DCB-89 appears in Figure 7 due to the similarity of the failure modes in each of the three coupled beams with inner steel tubes. Three different kinds of failure modes are illustrated in Figure 7. Compared to cross diagonal cracks observed in specimen RC-DCB (Figure 7a), specimen LRS-DCB was noticeably strengthened with the PET-FRP wrap as evidenced by the appearance of more flexural cracks and fewer shear cracks at both ends. Relatively speaking, specimen DST-DCB-89 presented one main bending crack in the plane perpendicular to the force direction and yet the concrete was crushed. Failure Modes The failure modes of the specimens are shown in Figure 7. 
Failure Modes
The failure modes of the specimens are shown in Figure 7. Notably, only the failure behavior of specimen DST-DCB-89 appears in Figure 7 because the failure modes of the three coupled beams with inner steel tubes were similar. Three different kinds of failure modes are illustrated in Figure 7. Compared with the cross diagonal cracks observed in specimen RC-DCB (Figure 7a), specimen LRS-DCB was noticeably strengthened by the PET-FRP wrap, as evidenced by more flexural cracks and fewer shear cracks at both ends. Specimen DST-DCB-89, in contrast, presented one main bending crack in the plane perpendicular to the force direction, although the concrete was crushed. As for the control specimen RC-DCB, shear cracks were generated one by one before the longitudinal reinforcement yielded. One small bending crack then appeared after the longitudinal bars yielded. Finally, as the deformation increased, spalling of the concrete resulted in a sudden drop in bearing capacity. The damage processes of the other specimens were similar to one another. With the wrapped PET-FRP material, no obvious shear crack appeared during testing. As shown in Figure 7b,c, the first observed crack was located at the junction of the coupling beam and the end block. All of these specimens then reached their peak strength, and as the deformation increased, horizontal cracks formed near the ends of the coupling beam. After artificially removing the PET-FRP jacket and concrete, the specimens LRS-DCB and DST-DCB-89 presented two different failure modes in Figure 7.
In Figure 7b, many cracks can be seen at the plastic hinge regions at both ends of specimen LRS-DCB. Furthermore, one main crack appeared at the junction of the beam and the end block. Figure 7c shows serious concrete crushing and local buckling of the steel tube in the specimens with inner steel tubes. The preliminary analysis suggests that local buckling of the inner tube occurred before the confining effect provided by the PET-FRP material to the concrete could be mobilised; the concrete then started crushing as the deformation increased. Once the crushed concrete filled the gap and the deformation increased further, the confining effect of the PET-FRP sheet controlled the attenuation of the bearing capacity.

Hysteretic Curve
The load-displacement curves of all coupling beams under low-cyclic reversed loading, namely the hysteretic curves, reflect the seismic performance of the members, such as ductility, energy dissipation, and stiffness degradation. Usually, the plumper the hysteretic curve is, the better the energy dissipation and the better the seismic performance. As illustrated in Figure 8, the curves of the specimens with a double coupling beam (DCB) wrapped with LRS-FRP or built as double-skin tubular (DST) beams are wider and show better ductility than those of the control DCB specimen. Since there is no obvious yielding point in the envelope curves of the specimens, the yielding point of these members was determined by the method reported by Park [30], as illustrated in Figure 9. Point B is set on the envelope curve at 75% of the peak strength; point A is the intersection of the extension of line OB with the horizontal line corresponding to the peak strength; the yielding point (point Y in Figure 9) is then defined as the intersection of the load-displacement curve with the vertical line through point A; a numerical sketch of this construction is given below. The calculated yield strengths and corresponding deformations of the specimens are listed in Table 4. Notes to Table 4: (1) V_y and V_p are the yield strength and peak strength of the specimens, respectively; V_u is the ultimate strength based on 85% of the peak strength; V_u,frp is the ultimate strength based on the rupture of the FRP; (2) Δ_y, Δ_p, Δ_u, and Δ_u,frp are the deformations corresponding to V_y, V_p, V_u, and V_u,frp, respectively; (3) μ_u = Δ_u/Δ_y and μ_u,frp = Δ_u,frp/Δ_y are the corresponding ductility factors.
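The following sketch shows one way to implement the Park yield-point construction described above on a digitised envelope curve. It is a minimal sketch under the stated assumption of a monotonically increasing pre-peak branch; the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def park_yield_point(disp, force):
    """Yield point by Park's graphical method on an envelope (backbone) curve.

    disp, force : 1D arrays describing the envelope, ordered by displacement.
    Returns (yield_displacement, yield_force).
    """
    disp, force = np.asarray(disp, float), np.asarray(force, float)
    i_peak = int(np.argmax(force))
    v_peak = force[i_peak]

    # Point B: where the ascending branch reaches 75% of the peak strength.
    pre_d, pre_f = disp[: i_peak + 1], force[: i_peak + 1]
    d_B = np.interp(0.75 * v_peak, pre_f, pre_d)

    # Point A: extend line OB up to the horizontal line F = V_peak.
    slope_OB = 0.75 * v_peak / d_B
    d_A = v_peak / slope_OB

    # Point Y: drop a vertical line through A onto the envelope curve.
    v_y = np.interp(d_A, disp, force)
    return d_A, v_y

# Hypothetical envelope of one specimen (mm, kN):
d = [0, 1, 2, 4, 6, 10, 15, 20]
f = [0, 40, 75, 120, 150, 170, 165, 150]
print(park_yield_point(d, f))
```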
For the control specimen RC-DCB, as shown in Figure 8a, the force increases elastically before cracking. As the deformation increases, the cumulative damage grows and the stiffness degrades gradually. The specimen reaches the yielding stage when the longitudinal reinforcement yields, after which the load-bearing capacity can hardly be improved further. As the shear cracks form and develop, the stiffness of the specimen is further reduced. Upon the occurrence of shear failure, the load dropped suddenly, so the ultimate lateral deformation was very small relative to the other, strengthened specimens. Compared with the control RC-DCB, as illustrated in Figures 8-10 and Table 4, the ductility of specimen LRS-DCB improved significantly: the ultimate lateral deformation increased by 284%, although its peak bearing capacity was similar to that of the RC-DCB specimen. Figure 10 shows that the pre-peak stage of LRS-DCB is almost the same as that of RC-DCB. This means that, owing to the lower Young's modulus of the PET-FRP material, the confining stiffness that develops in the initial stage is not able to further improve the bearing capacity of the core RC beam. However, the bearing capacity remains relatively stable after a slight reduction. This indicates that, as the deformation increases, the confinement provided by the LRS-FRP jacket makes a great contribution toward maintaining the bearing capacity and improving the ductility of the coupling beam. In its cyclic behavior, LRS-DCB presents an obvious pinching phenomenon; that is, the specimen needs only a small lateral force to return to its original position after the force unloads to zero. Notably, the well-known ultra-high rupture strain of the PET-FRP material can provide relatively high confining stress to the core concrete at large deformations, even though its Young's modulus is much lower than that of traditional CFRP. Meanwhile, existing experiments and theories [31] indicate that the compressive and shear strength of concrete can be improved by effective multi-axial confining stress, which makes it possible to maintain the bearing capacity even under large deformation. However, the large lateral deformation is also accompanied by concrete damage, which reduces the tensile and shear strength. When the lateral deformation returns to zero, the bearing contribution of the concrete changes from compression to shear; the concrete damage and the slippage between the steel bars and the concrete result in the pinching phenomenon. The hysteretic curves of specimens DST-DCB-89, DST-DCB-76, and DST-DCB-89P, illustrated in Figure 8, are similar to one another except for some details. Compared with specimen LRS-DCB, as shown in Figures 8 and 10 and Table 4, the yield strength, peak strength, ultimate strength, and corresponding deformations of DST-DCB-89, DST-DCB-76, and DST-DCB-89P improved significantly: (1) the yield strength of these three specimens improved by 41%, 37%, and 27%, respectively, while the corresponding lateral deformation improved by 20%, 31%, and 20%, respectively; (2) the peak strength of these three specimens improved by 39%, 38%, and 27%, respectively.
The corresponding lateral deformation improved by 8%, 23%, and 23%, respectively; (3) the ultimate strength, based on 85% of the peak strength, improved by 18%, 17%, and 8%, respectively, for these three specimens, while the corresponding lateral deformation decreased by 71%, 115%, and 121%, respectively. Moreover, the ductility coefficients of specimens LRS-DCB, DST-DCB-89, DST-DCB-76, and DST-DCB-89P, based on 85% of the peak strength, improved by 292%, 42%, 65%, and 82%, respectively, compared with the control specimen. Comparing the peak strength and ultimate strength of the DST-DCB specimens, we conclude that the enhancement in bearing capacity was mainly provided by the embedded steel tube. As the lateral deformation and the number of cycles increase, the bearing capacity also increases until the inner steel tube buckles due to insufficient support from inside. Compared with the LRS-DCB specimen, the force of the three DST specimens dropped rapidly at first and then stabilised. However, as shown in Figure 10 and Table 4, the strengths of the four specimens are similar to one another when the wrapped PET-FRP sheet ruptures, indicating that the confinement of the PET-FRP jacket can effectively prevent a further reduction of the bearing capacity. Hence, the enhanced strength of the DST-DCB specimens can act as a strength reserve in the specimen design. As shown in Table 4, the deformations of DST-DCB-89, DST-DCB-76, and DST-DCB-89P corresponding to PET-FRP rupture improved by 300%, 253%, and 300%, respectively, compared with the control specimen; these results are similar to those recorded for the LRS-DCB specimen. Meanwhile, compared with the control specimen, the ductility coefficients of the LRS-DCB, DST-DCB-89, DST-DCB-76, and DST-DCB-89P specimens, based on the rupture of the PET-FRP sheet, improved by 292%, 232%, 171%, and 233%, respectively. This indicates that the ductility of DCBs can be improved greatly by external PET-FRP wrapping or by forming a DST composite member. Moreover, the hysteretic loops of these curves are relatively wider than those of specimens DCB and LRS-DCB, because the steel tube, in addition to the concrete and the reinforcement, contributes to both shear and compression, which allows the component to absorb more energy under repeated loading. Comparing the curves in Figure 8c-e, the pinching phenomenon of specimen DST-DCB-76 is more prominent than that of specimens DST-DCB-89 and DST-DCB-89P.
Notably, the prefabricated component has properties similar to those of the cast-in-place component.

Shear Deformation
The shear deformations of all specimens are illustrated in Figure 11. Compared with the control double coupling beam, the shear deformations corresponding to the ultimate strength of the LRS-DCB and DST coupling beams are improved significantly. This means that the wrapped PET-FRP sheet and the inner steel tube make a great contribution to the shear ductility of the coupling beam. The results in Figure 11 indicate that the corresponding shear deformations of the DST beam specimens are slightly larger than that of the LRS-DCB specimen. Meanwhile, as the diameter of the inner steel tube decreases, the corresponding shear deformation increases. Comparing the results in Figures 8 and 11, the ratio between the shear deformation and the total deformation of specimens LRS-DCB, DST-DCB-89, DST-DCB-76, and DST-DCB-89P is 21%, 58%, 61%, and 49%, respectively; the flexural deformation of LRS-DCB is therefore larger than that of the DST coupling beams. Notably, the reason the recorded shear deformation of the LRS-DCB specimen decreased before reaching the ultimate state, as shown in Figure 11b, is that the bending cracks at both ends of the LRS-DCB specimen (Figure 7b) reduced the readings of the linear variable differential transformers (LVDTs).
Seismic Performance
The behaviour of concrete members under low-cyclic reversed loading is one of the important means of evaluating their seismic performance. The following four aspects, namely strength degradation, stiffness degradation, ductility, and energy dissipation, were analysed to investigate the seismic performance of the composite coupling beams.

Strength Degradation
Three reversed cycles were carried out at each displacement level. As the number of cycles increases, the corresponding bearing capacity of the specimen decreases. The coefficient of strength degradation λ_i is usually used to quantify the degree of strength reduction and can be calculated as follows [32]:

λ_i = V_j^i / V_j^1   (2)

where j is the number of the main cycle starting from the skeleton curve; i is the number of the sub-cycle within main cycle j; V_j^1 is the strength on the skeleton curve, and V_j^i is the strength at sub-cycle i.
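The sketch below shows how this coefficient can be computed from the recorded cycle peaks. It is a minimal sketch with illustrative data and names (cycle_peaks, lambda_i); it is not the authors' processing code.

```python
def strength_degradation(cycle_peaks):
    """Coefficient of strength degradation for one displacement level.

    cycle_peaks : peak forces V_j^1, V_j^2, ... of the successive cycles
                  carried out at the same displacement amplitude (kN).
    Returns the list of lambda_i = V_j^i / V_j^1 for i = 1..n.
    """
    v1 = cycle_peaks[0]  # strength on the skeleton (first) cycle
    return [v / v1 for v in cycle_peaks]

# Hypothetical peaks of the three cycles at one displacement level:
print(strength_degradation([152.0, 147.5, 144.8]))  # -> [1.0, ~0.970, ~0.953]
```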
Figure 12 shows that, for all specimens, the coefficient of strength degradation decreases as the number of cycles increases at constant deformation. However, the evolution of the coefficient with increasing deformation differs clearly between specimens. For the RC-DCB specimen, only two displacement levels were completed because of its limited ductility, and the coefficient decreased monotonically with increasing displacement due to damage accumulation. For the other four specimens, as shown in Figure 12b-e, the coefficient tends to decrease first and then recover. Compared with the specimens with an embedded steel tube, the coefficient of the LRS-DCB specimen even shows two troughs, which reflects the difference between the two systems: (1) the first trough indirectly reflects the peak strength of the LRS-DCB specimen. Before this trough, the damage accumulation of the LRS-DCB specimen was mainly caused by the concrete, the concrete-steel interface, and yielding of the reinforcement, similar to the RC-DCB specimen, because the PET-FRP confinement had not yet been activated. After that, further damage of the concrete was limited by the increasing confining stress provided by the PET-FRP material, so the coefficient recovered. However, the confining stiffness decreased when the strain of the PET-FRP material reached the second stage in Figure 3a, and the damage accumulation during the subsequent cycles increased again. (2) In contrast, there is only one trough for the specimens with embedded steel tubes. Note that the values of λ_i at the troughs of LRS-DCB are similar to each other, which suggests that the relatively larger coefficients at the trough of the DST-DCB specimens were caused by slippage at the concrete-tube interface and buckling of the steel tube. After that, the confinement provided by the steel tube and the PET-FRP sheet limited further damage of the concrete, so the coefficient stopped decreasing. Unlike the LRS-DCB specimen, the coefficient then increases monotonically after the trough, indicating that the state of these specimens tends to stabilise.

Stiffness Degradation
The stiffness reflects the ability of a member to resist deformation under loading. The stiffness of the specimens decreases with the fracturing of the concrete, the cracking of the LRS-FRP, the buckling of the steel tubes, and slippage between steel and concrete. The secant stiffness was used to analyse the stiffness degradation of the specimens and can be calculated with Equation (3):

K_j = (|+F_j| + |-F_j|) / (|+X_j| + |-X_j|)   (3)

where +F_j and -F_j are the positive and negative strengths corresponding to the current maximum positive deformation +X_j and the maximum negative deformation -X_j, respectively. The normalized stiffness is the ratio between the stiffness of each main cycle K_j and that of the first cycle K_1. The normalized stiffness of all specimens is illustrated in Figure 13. Compared with the control specimen RC-DCB, the stiffness of the strengthened specimens dropped slightly more slowly in the initial stage owing to the external wrapped PET-FRP sheet or the inner steel tube. Since the confinement provided by the PET-FRP material is limited, the normalized stiffness of the LRS-DCB specimen dropped faster than that of the specimens with embedded steel tubes in the initial stage. As the deformation increased, the increasing confining stress limited the stiffness degradation, which caused the slope of the curves to flatten out. The initial stiffness of the DST-DCB specimens was mainly controlled by the inner steel tube and dropped at a slower rate than that of the LRS-DCB specimen. With further deformation, the inner steel tube yielded and the PET-FRP confinement controlled the stiffness degradation of the specimens.
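The following minimal sketch computes the secant stiffness of Equation (3) and the normalized stiffness from the cycle peak points. Variable names and example values are illustrative only.

```python
def secant_stiffness(f_pos, x_pos, f_neg, x_neg):
    """Secant stiffness K_j of one main cycle, Equation (3).

    f_pos, x_pos : peak positive force (kN) and corresponding displacement (mm)
    f_neg, x_neg : peak negative force (kN) and corresponding displacement (mm)
    """
    return (abs(f_pos) + abs(f_neg)) / (abs(x_pos) + abs(x_neg))

# Hypothetical peak points of the first and j-th main cycles:
k1 = secant_stiffness(150.0, 5.0, -145.0, -5.0)
kj = secant_stiffness(160.0, 20.0, -155.0, -20.0)
print(k1, kj, kj / k1)  # the last value is the normalized stiffness K_j / K_1
```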
Energy Dissipation
The energy dissipation performance reflects the energy consumption capacity of components under earthquake action. Usually, the fuller the hysteretic curve is, the stronger the energy dissipation and the better the seismic performance. Under repeated loading, the member absorbs external energy during loading and releases energy during unloading; the difference between the two over a cycle is defined as the energy dissipation, which, as shown in Figure 14, is equal to the area enclosed by the hysteresis loop. The equivalent viscous damping coefficient ξ_eq, conventionally calculated with Equation (4) as the ratio of the enclosed loop area to 2π times the corresponding elastic strain energy (the triangular areas in Figure 14), and the cumulative energy dissipation ΣS can be used to evaluate the energy dissipation capacity of members in an earthquake.
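As an illustration of these two measures, the sketch below integrates one digitised hysteresis loop and evaluates the conventional equivalent viscous damping coefficient. This is a minimal sketch: the exact triangle construction of Equation (4) depends on Figure 14, which is not reproduced here, so the elastic strain energy is approximated by the two triangles at the positive and negative peak points; all names and data are illustrative.

```python
import numpy as np

def loop_area(disp, force):
    """Area enclosed by one closed hysteresis loop (shoelace formula)."""
    d, f = np.asarray(disp, float), np.asarray(force, float)
    return 0.5 * abs(np.sum(d * np.roll(f, -1) - f * np.roll(d, -1)))

def equivalent_viscous_damping(disp, force):
    """Conventional xi_eq = S_loop / (2*pi*S_elastic) for one loop."""
    d, f = np.asarray(disp, float), np.asarray(force, float)
    s_loop = loop_area(d, f)
    i_pos, i_neg = int(np.argmax(f)), int(np.argmin(f))
    # Elastic strain energy: triangles under the positive and negative peak points.
    s_elastic = 0.5 * abs(d[i_pos] * f[i_pos]) + 0.5 * abs(d[i_neg] * f[i_neg])
    return s_loop / (2.0 * np.pi * s_elastic)

# Hypothetical loop (mm, kN); cumulative dissipation is the running sum of loop areas.
d = [0, 10, 0, -10, 0]
f = [0, 150, -60, -150, 60]
print(loop_area(d, f), equivalent_viscous_damping(d, f))
```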
As shown in Figure 15a, the cumulative energy dissipation curves are similar to one another before yielding. The DCB specimen yields first, so its cumulative energy dissipation initially increases faster than the others. The total cumulative energy dissipation of the LRS-DCB specimen and of the DST-DCB coupling beams was improved significantly relative to the DCB control specimen. Compared with the control specimen, the cumulative energy dissipation of the LRS-DCB, DST-DCB-89, DST-DCB-76, and DST-DCB-89P specimens corresponding to Δ_u improved by 2019%, 376%, 496%, and 500%, respectively. Moreover, the cumulative energy dissipation of the LRS-DCB, DST-DCB-89, DST-DCB-76, and DST-DCB-89P specimens corresponding to Δ_u,frp improved by 2352%, 2877%, 2345%, and 2660%, respectively, compared with the control specimen. This indicates that the energy dissipation performance of the coupling beam can be greatly improved by wrapping a single PET-FRP sheet, and the DST composite system also presents excellent energy dissipation. In addition, the results indicate that the tube diameter has a slight positive effect on the energy dissipation behaviour of the coupling beam. On the other hand, compared with the control specimen, the equivalent viscous damping coefficients of the other four specimens are similar to one another when the displacement is less than 10 mm. The strengthening PET-FRP sheet delays concrete damage and steel bar yielding, which is why the coefficient of the control specimen is larger than that of the other four specimens at this stage. Subsequently, concrete damage causes the coefficients of the DCB and LRS-DCB specimens to increase faster than those of the DST coupling beams. As the deformation increases, the confinement provided by the PET-FRP sheet controls further concrete damage, and the coefficient of the LRS-DCB specimen becomes stable. In contrast, the coefficients of the DST coupling beams tend to increase almost linearly with increasing deformation, mainly because the confinement provided by the PET-FRP sheet and the inner steel tube reduced the early damage of the concrete.

Conclusions
In this paper, a PET-FRP sheet and a steel tube were applied to form FRP-concrete-steel double-skin tubular (DST) composite coupling beams.
The seismic performance of three types of double coupling beams (DCBs), namely RC-DCB, LRS-DCB, and DST-DCB, was tested through an experimental program. The following conclusions were drawn:
1. The ductility and energy dissipation performance of double coupling beams either wrapped with a PET-FRP sheet or built as an FRP-concrete-steel DST composite system are improved significantly compared with the traditional RC-DCB. The ductility coefficient and cumulative energy dissipation of the DST-DCB members based on the rupture of the PET-FRP sheet are similar to those of the LRS-DCB and improved by more than 170% and 2300%, respectively, compared with the RC-DCB.
2. The externally wrapped PET-FRP sheet does not affect the initial stiffness and peak strength of the RC-DCB. However, under otherwise identical conditions, the inner steel tube improves the initial stiffness, yield strength, and peak strength.
3. The DST double coupling beams still have considerable deformability after the strength has dropped below 85% of the peak strength, since the external PET-FRP sheet provides effective confinement to the core concrete and the inner steel tube can sustain large shear deformation. The enhanced strength of the DST-DCB can therefore serve as a strength reserve for the double coupling beam.
It should be mentioned that, owing to the limited number of test specimens, some parameters were not considered, for example, the influence of the steel tube diameter-to-thickness ratio on the seismic performance of the composite structure, which could be analysed using the finite element method. However, the numerical study in reference [33] indicated that such a finite element model should use a reliable three-dimensional constitutive model for the concrete, one that fully considers the unique characteristics of PET-FRP-confined concrete. Furthermore, based on the tests and finite element analysis, equations for predicting the bearing capacity of LRS-FRP-concrete-steel DST double coupling beams could also be developed in future research.
Somatosensory and psychological phenotypes associated with neuropathic pain in entrapment neuropathy
The severity more than the presence of neuropathic pain is related to the extent of neuropathy and a compromise of emotional well-being in entrapment neuropathies.

Background
Entrapment neuropathies represent the most prevalent peripheral neuropathy and are the most common cause of neuropathic pain (neuP). Carpal tunnel syndrome (CTS) is the most prevalent entrapment neuropathy with a lifetime risk of approximately 10% that increases to 85% in patients with diabetes. 48 Patients mostly experience tingling and numbness in the hand and loss of dexterity. In addition, some patients experience pain, which can impact on their daily functioning. According to the neuP grading system, 16 patients with electrodiagnostically confirmed CTS and symptoms in their hand are automatically classed as having at least probable neuP and definite neuP if sensory abnormalities are present. However, the patients' description of their pain sometimes indicates the presence of nociceptive rather than neuP. 31 Indeed, the use of screening questionnaires revealed that the prevalence of neuP in patients with CTS varies, with values reported from 31% to 77%. 21,37,49,50,56 This, together with the absence or at best weak correlation between pain and measures of nerve pathology (eg, nerve conduction studies), 20,51 has led to the hypothesis that in some patients, pain may originate from structures other than the nerve such as the flexor tendons. 24 It currently remains unclear why some patients with CTS develop neuP, whereas others do not experience neuP but pain of predominant nociceptive character. One hypothesis is that neuropathy is more severe in patients with neuP compared with those without neuP. However, the evidence for this is currently controversial, 21,37,50 with most studies reporting no association between electrodiagnostic test severity and the presence of neuP. Of note, electrodiagnostic testing only examines loss of function of large nerve fibres, thus providing only limited information on the potential spectrum of nerve pathology. A better understanding of the prevalence of neuP and its underlying disease process is crucial to determine risk factors and guide management for these patients. This cross-sectional cohort study provides an in-depth evaluation of the somatosensory phenotype of patients with CTS with and without neuP. We thereby aim to (1) identify changes in the somatosensory structure and function specific to the presence and severity of neuP and (2) explore whether differences in demographic, clinical, and emotional well-being are related to the presence and severity of neuP.

Participants
One hundred and eight patients who met electrodiagnostic 1 and clinical criteria 2 for CTS participated in the study. Patients were recruited through the neurophysiology and hand surgery departments at Oxford University Hospitals, local print media, and public notice boards. Patients were excluded if electrodiagnostic findings were indicative of other peripheral neuropathies than CTS, if another medical condition affecting the upper extremity or neck was present (eg, tennis elbow or hand osteoarthritis), if a previous history of surgery or trauma to the upper limb or neck existed, or if CTS was caused by pregnancy or diabetes. Proportionally age- and sex-matched healthy controls (n = 32) were recruited through public notice boards and media advertisements.
The study was approved by the national ethics committee (London Riverside, Ref 10/H0706/35), and all participants gave informed written consent before participating. Primary publications containing parts of the Oxford CTS cohort have been previously published. 6,46

Patient subgroups
Patients were divided into those with and without neuP using the DN4 questionnaire. 10 This questionnaire is composed of questions evaluating a range of sensory descriptors and a short sensory examination. Sensory descriptors include the presence or absence of burning and painfully cold-like pain, electric shocks, tingling, pins and needles, numbness, and itching. The sensory examination evaluates the presence or absence of hypoesthesia to touch, hypoesthesia to pinprick, and brush allodynia. A DN4 score of ≥4 was interpreted as neuP. Because the DN4 was designed to differentiate neuropathic from somatic pain, we interpreted a score of <4 (no neuP) to represent pain of likely nociceptive character. We specifically decided against the use of the neuP grading system, 16 which has previously been applied to classify patients in similar studies. 22,42,55 This was particularly important in our cohort, as the grading system would automatically classify all patients with CTS as having at least probable neuP because of the presence of nerve conduction abnormalities. Moreover, our question was focused on neuP vs no neuP (nociceptive pain) rather than painful vs pain-free neuropathies as in previous studies. To examine the impact of neuP severity on the clinical phenotype, we further classed those with neuP as having mild (<4) or moderate/severe neuP using a cutoff of ≥4 on a visual analogue scale for average pain during the past 24 hours. 55

Symptom severity, functional deficits, and characteristics of neuropathic pain
The Boston Carpal Tunnel Questionnaire was used to assess symptom severity and functional deficits in patients with CTS. 32 Characteristics of neuP were evaluated with the Neuropathic Pain Symptom Inventory, 11 which distinguishes superficial, deep, paroxysmal, or evoked pain as well as paraesthesia and dysaesthesia, each on a scale from 0 to 10, with the total score ranging from 0 to 100. 17 Patients also marked their spread of symptoms on a hand and body diagram. The patterns were dichotomized into median and extramedian spread and the presence or absence of proximal spread of symptoms beyond the hand. 26 Spread of symptoms outside the distribution of the median nerve has previously been associated with central sensitisation in patients with CTS. 62
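To make the subgrouping and scoring rules above concrete, the sketch below applies them to one hypothetical patient record. It is a minimal sketch; the field names and example values are invented for illustration, while the cutoffs (DN4 ≥ 4, VAS ≥ 4 for average pain over 24 hours, NPSI total 0-100) are taken from the text, and the NPSI item structure is simplified to a plain sum of 0-10 item scores.

```python
def classify_patient(dn4_score, vas_average_24h):
    """Assign a patient to the neuP subgroups used in this study."""
    if dn4_score < 4:
        return "no neuP (presumed nociceptive pain)"
    return "moderate/severe neuP" if vas_average_24h >= 4 else "mild neuP"

def npsi_total(item_scores):
    """NPSI total: sum of the 0-10 item scores, giving the 0-100 range."""
    assert all(0 <= s <= 10 for s in item_scores)
    return sum(item_scores)

# Hypothetical patient: DN4 = 5, average pain VAS = 3, ten NPSI item scores.
print(classify_patient(5, 3))                        # -> "mild neuP"
print(npsi_total([3, 2, 0, 4, 1, 2, 0, 3, 1, 2]))    # -> 18
```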
Clinical examination
Light touch and pinprick were tested with cotton wool and a neurotip over the palmar surface of the index fingertip. As sensation may be altered even in hand areas not innervated by the affected median nerve in patients with CTS, 46 sensation was recorded as normal or reduced compared with the proximal ventral forearm. We further performed 3 commonly used clinical provocation tests: Tinel sign involved light percussion over the median nerve just proximal to the carpal tunnel. A positive test was recorded if characteristic paraesthesia or shooting pain radiating into the fingers was provoked. The Phalen test involved active end of range wrist flexion. A reproduction of symptoms (eg, paraesthesia and numbness) in the median nerve territory within 1 minute was considered a positive test. 39 For the carpal compression test, moderate pressure was exerted with the investigators' thumbs over the transverse carpal ligament with the wrist in a neutral position. 19 The test was deemed positive if paraesthesia or numbness was provoked in the median nerve territory of the hand within a maximum period of 30 seconds. Thenar wasting was graded as present or absent. Muscle strength of the abductor pollicis brevis was graded according to the Medical Research Council Manual Muscle Testing scale ranging from M0 to M5, with M3 indicating full range against gravity and M5 indicating activation against the examiner's full resistance with a full range of motion. 35

Quantitative sensory testing
Quantitative sensory testing was used to determine somatosensory phenotypes according to the previously published protocol by the German Research Network on Neuropathic Pain. 45 Cold and warm detection thresholds as well as cold and heat pain thresholds and thermal sensory limen were measured with a ThermoTester (Somedic, Sweden, 25 × 50 mm thermode). We also recorded paradoxical heat sensations during thermal sensory limen testing. Mechanical detection was measured with von Frey hairs and mechanical pain thresholds with weighted pin-prick stimulators. Mechanical pain sensitivity was examined with a numerical pain rating scale (0-100) during 5 sets of 7 pseudorandom pin-prick stimulations. Intermingled with these pin-prick stimulations were 5 sets of 3 light touch stimulations with a cotton wisp, a cotton wool tip, and a standardized brush (Sense-lab) to determine the presence of allodynia. Pressure pain thresholds were evaluated with a manual algometer (Wagner Instruments, Greenwich, CT) and vibration detection threshold with a Rydel-Seiffer tuning fork. The wind-up ratio was measured as the mean numerical pain rating of 5 trains of 10 pin-prick stimuli divided by the mean rating of 5 single stimuli. The patients were familiarised with the quantitative sensory testing (QST) on the dorsum of the nonexperimental hand followed by testing on the palmar side of the affected index finger (innervated by the median nerve). We also evaluated QST in an extraterritorial area over the dorsum of the hand (innervated by the radial nerve). Whereas the testing area for temperature thresholds was smaller over the index finger (~10 × 50 mm) than the dorsum of the hand (25 × 50 mm), the areas were comparable between participant groups, hence not influencing our findings. Pressure pain thresholds were recorded over the thenar eminence and brachioradial muscle and vibration detection thresholds over the palmar side of the distal end of the second metacarpal or ulnar styloid for the median and extramedian areas, respectively. Quantitative sensory testing data (except for cold and heat pain and vibration detection thresholds) were log transformed to achieve normally distributed data. 33,38 Z scores ((value of the participant − mean value of healthy controls)/SD of healthy controls) 45 were calculated using the proportionally matched healthy control group. A small constant of 0.1 was added to the mechanical pain sensitivity (MPS) ratings to avoid loss of zero rating values. 45
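The z-transformation described above can be written compactly as below. This is a minimal sketch assuming a simple two-array interface; the parameter names are illustrative, while the log10 transform and the +0.1 offset for mechanical pain sensitivity (MPS) follow the text.

```python
import numpy as np

def qst_z_score(patient_value, control_values, log_transform=True, is_mps=False):
    """Z score of one QST parameter relative to matched healthy controls.

    Log transformation is applied to all parameters except cold/heat pain and
    vibration detection thresholds (pass log_transform=False for those).
    For MPS a small constant of 0.1 is added to avoid losing zero ratings.
    """
    x = np.asarray(control_values, float)
    v = float(patient_value)
    if is_mps:
        x, v = x + 0.1, v + 0.1
    if log_transform:
        x, v = np.log10(x), np.log10(v)
    return (v - x.mean()) / x.std(ddof=1)

# Hypothetical mechanical detection thresholds (mN): matched controls vs one patient.
print(qst_z_score(8.0, [0.5, 1.0, 1.5, 2.0, 1.0, 0.7]))
```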
Electrodiagnostic tests
Electrodiagnostic testing (EDT) was performed with an ADVANCE system (Neurometrix) and conventional reusable electrodes. Hand temperature was standardized to >31 °C. Sensory orthodromic recordings were made by stimulating the index finger and recording from the wrist. Motor studies were performed by recording from the abductor pollicis brevis stimulated from the wrist and antecubital fossa. To determine the presence of a very mild EDT abnormality, an increased mixed latency of the median sensory nerve action potential compared with the ulnar sensory nerve action potential on digit IV stimulation, shown by a "double peak", was considered abnormal. 58 In addition, a difference of >0.4 ms in median vs ulnar motor latency measured over a fixed distance of 8 cm and recorded over the lumbrical and palmar interossei muscles was considered abnormal. 41 Electrodiagnostic test severity was graded on a scale from 1 (very mild) to 6 (extremely severe) according to previously published criteria. 8

Skin histology
A 3-mm diameter skin biopsy was taken under subcutaneous anaesthesia on the ventroradial side of the proximal phalanx of the index finger innervated by the median nerve. The biopsy was fixed in fresh periodate-lysine-paraformaldehyde for 30 minutes. The tissue was then washed in phosphate buffer and stored for 2 to 3 days in sucrose in phosphate buffer. After embedding in optimal cutting temperature gel, the tissue was frozen and stored at −80 °C. Staining was performed using a previously described free-floating method, 46 using protein gene product 9.5 (PGP 9.5; Ultraclone, Isle of Wight, United Kingdom, 1:1000; Zytomed, Berlin, Germany, 1:200) and myelin basic protein (Abcam, Cambridge, United Kingdom, 1:500) as primary antibodies and Cy3 (Stratech, Ely, United Kingdom, 1:1000) and Alexa Fluor 488 (Abcam, 1:500) as secondary antibodies. Intraepidermal nerve fibre density (IENFD) was quantified in 50 µm skin sections using an Axio LSM 700 microscope with an Observer Z1 imaging system (Zeiss, Cambridge, United Kingdom) by determining the number of fibres per millimeter of epidermis according to current guidelines. 30 We also quantified dermal innervation by evaluating the number of Meissner corpuscles per millimeter of epidermis, the percentage of PGP-positive dermal nerve bundles containing MBP, and the mean nodal length as previously reported. 46

Statistical analysis
SPSS Version 27 (IBM) was used for statistical analyses. Normality of data was assessed by visual inspection and using the Shapiro-Wilk test for normality. Demographic variables, skin histology data, psychological and sleep questionnaires, and EDT parameters were compared among groups (healthy, no neuP, mild neuP, and moderate/severe neuP) with one-way analysis of variance (ANOVA) or Kruskal-Wallis statistics followed by planned contrasts. As we were interested in effects of (1) the presence of neuP and (2) the severity of neuP, we used Helmert contrasts for planned follow-up comparisons. This type of contrast compares each level of our categorical variable "patient group" with the mean of the subsequent levels. As such, the planned contrasts included (1) healthy vs combined patient groups (to confirm differences between patients and healthy controls), (2) no neuP vs combined neuP groups (to evaluate the effect of the presence of neuP), and (3) mild neuP vs moderate/severe neuP (to evaluate the effect of neuP severity). The nonparametric equivalent for the Helmert contrast was used for non-normally distributed data, 47 and the significance cutoff was adjusted for multiple testing (Bonferroni correction). Symptom and function severity were only evaluated in the 3 patient subgroups using Kruskal-Wallis tests followed by 2 Helmert contrasts (no neuP vs combined neuP groups and mild neuP vs moderate/severe neuP).
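For readers unfamiliar with Helmert coding, the sketch below builds the three planned contrasts described above for the four ordered groups (healthy, no neuP, mild neuP, moderate/severe neuP). It is a minimal sketch using numpy only; the group means are invented for illustration, and the inferential steps (ANOVA follow-up, Bonferroni correction) are not reproduced.

```python
import numpy as np

groups = ["healthy", "no neuP", "mild neuP", "moderate/severe neuP"]

# Helmert contrasts: each level vs the mean of the subsequent levels.
contrasts = np.array([
    [1, -1/3, -1/3, -1/3],  # healthy vs all patient groups combined
    [0,  1,   -1/2, -1/2],  # no neuP vs combined neuP groups
    [0,  0,    1,   -1  ],  # mild neuP vs moderate/severe neuP
])

# Hypothetical group means of one z-scored QST parameter:
means = np.array([0.0, 1.1, 1.3, 2.0])
for row, estimate in zip(contrasts, contrasts @ means):
    print(row, round(float(estimate), 2))
```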
Findings of the clinical examination and medication intake were compared among groups with chi-square statistics or Fisher exact tests as appropriate. This was followed by 2 planned comparisons, Bonferroni adjusted for multiple testing (no neuP vs combined neuP groups and mild neuP vs moderate/severe neuP), reflecting Helmert contrasts. Quantitative sensory testing z scores were analysed with 4 one-way multivariate ANOVAs (MANOVAs) using the combined QST detection or pain thresholds as the response variables and patient group as the independent variable for both the median and radial territories. Pillai's trace statistic, which is robust to unbalanced designs, is reported. We followed the significant MANOVAs up with one-way univariate ANOVAs followed by Helmert contrasts to test the hypothesis that clinical phenotypes are most pronounced in patients with moderate/severe neuP, followed by those with mild and no neuP, whereas healthy participants show the least deficits. We also used a recently published algorithm 5,59 that allocates each patient into 1 of 3 sensory phenotypes: (1) loss of thermal and mechanical detection ("sensory loss"), (2) intact sensory function, often combined with thermal hyperalgesia or allodynia ("thermal hyperalgesia"), and (3) loss of thermal detection, but not mechanical detection, accompanied by mechanical hyperalgesia or allodynia ("mechanical hyperalgesia"). 59 The deterministic version of the algorithm was used, in which each patient is sorted to 1 phenotype and no mixed phenotypes are possible. Fisher exact tests were used to compare the frequency of QST phenotypes among groups.

Most patients with carpal tunnel syndrome have neuropathic pain
The demographic data are described in Table 1. Most patients with CTS had likely neuP (80%), whereas 20% were classified as unlikely neuP by the DN4 and therefore presumably have pain of nociceptive character. Of those patients with neuP, 63% were classified as having mild neuP, whereas 37% had moderate/severe neuP. The groups were comparable in regard to age, sex, height, and weight. Duration of symptoms was different among groups (H(2) = 10.1, P = 0.006), with Helmert contrasts demonstrating that this was caused by shorter symptom duration in patients with moderate/severe neuP than those with mild neuP. Pain medication to alleviate CTS symptoms was taken by 36% of patients (Supplementary Table 1, available at http://links.lww.com/PAIN/B203). Whereas no differences were apparent between patients with and without neuP, patients with moderate/severe neuP reported more analgesic drug use than those with mild neuP; however, this marginally failed to reach statistical significance. There were no differences among groups for the types of medications used apart from paracetamol and opioids, which were more frequently taken by patients with moderate/severe neuP compared with those with mild neuP.

Symptom severity and functional deficits are more pronounced with the presence and increasing severity of neuropathic pain
Data for symptom severity and functional deficits are summarised in Table 2. Planned contrasts revealed that patients with neuP (combined group) experienced more pronounced symptoms than those with no neuP, except for the deep and evoked pain domain of the NPSI. In addition, symptom severity was consistently higher in patients with moderate/severe neuP compared with those with mild neuP.
Similarly, functional deficits measured by the Boston Functional Status Scale were higher in patients with neuP compared with those without neuP.

Clinical examination findings
Patients with neuP exhibited more sensory abnormalities on light touch and pin-prick testing compared with those without neuP (Table 2). The frequencies of motor signs as well as a positive Phalen test and compression sign were comparable among groups. The overall chi-square test for Tinel sign was significant; however, planned contrasts were not significant after Bonferroni correction for the number of planned comparisons.

Somatosensory dysfunction of some parameters is greater in neuropathic pain
Quantitative sensory testing data are shown in Figure 1. The most common somatosensory phenotype in patients with CTS was thermal hyperalgesia (45.4%), followed by sensory loss (33.3%) and mechanical hyperalgesia (21.3%). In the median nerve territory, the MANOVA for the detection thresholds showed a significant effect (V = 0.32, F(15, 402) = 0.327, P < 0.0001). Univariate ANOVAs followed by Helmert contrasts (Fig. 1A) revealed deficits in all detection thresholds in the combined patient groups compared with healthy controls (t(136) > 3.20, P < 0.002). Patients with moderate/severe neuP were different from those with mild neuP for cold detection (t(136) = 2.09, P = 0.032). Of note, all 3 Helmert contrasts were significant for mechanical detection thresholds in the median nerve territory, indicating that mechanical sensory deficits intensify with the presence and increasing severity of neuP (t(136) > 2.18, P < 0.03). In the radial nerve territory, the MANOVA for detection thresholds was significant (V = 0.24, F(15, 402) = 2.36, P = 0.003). Univariate ANOVAs followed by Helmert contrasts (Fig. 1B) demonstrated deficits in all detection thresholds except for vibration in the combined patient groups compared with healthy controls (t(136) > 2.98, P < 0.003). In addition, patients with moderate/severe neuP had stronger deficits in mechanical detection and thermal sensory limen compared with patients with mild neuP (t(136) > 3.65, P < 0.012). There was no difference in the proportions of somatosensory profiles among patient subgroups (Fisher exact test, P = 0.540, Supplementary Table 2, available at http://links.lww.com/PAIN/B203).

Electrodiagnostic test severity is comparable
There were no differences in electrodiagnostic test severity among patient groups (no neuP median [interquartile range] 3.0 […]). No differences were present among groups for dermal measures including density of Meissner corpuscles, dermal nerve bundles containing myelin, or nodal length.

Emotional well-being and sleep quality are more impaired with increasing neuropathic pain severity
Data of questionnaires evaluating the psychological domain and sleep disturbance are shown in Table 4 and Figure 2.

Discussion
In our cohort of patients with CTS, 20% have no neuP, 50% have mild neuP, and 30% have moderate/severe neuP. The presence of neuP was associated with increased symptom severity and functional deficits as well as deficits in bedside sensory testing. Apart from a more pronounced deficit in mechanical detection, somatosensory profiles were largely comparable among patients with and without neuP. However, an increasing neuP severity was associated with more pronounced loss of function deficits in both the median and radial nerve territories.
By contrast, no differences were identified in neurophysiological variables or structural nerve fibre integrity in skin biopsies among patient groups. Notably, many aspects of emotional well-being (eg, PCS rumination and helplessness, as well as PASS cognition and escape) and sleep were more affected with increasing neuP severity. Our findings indicate that apart from clear differences in symptom severity and function deficits, structural and functional somatosensory measures are largely comparable in patients with and without neuP. The severity of neuP is associated with somatosensory nerve dysfunction, but not structural nerve integrity. Of note, an increasing severity of neuP was accompanied by reduced emotional well-being, increased sleep disturbance, and the presence of extraterritorial symptoms, indicating a more dominant contribution of central mechanisms. The reported prevalence of neuP in patients with CTS varies substantially (31%-77%). 21,49,50,54 This is most likely attributable to the different screening tools used to detect neuP, none of which has been validated in patients with CTS. In the absence of a validated screening tool for patients with CTS, we decided to use the DN4. Unlike the painDETECT, it focuses on the number of neuropathic features rather than their severity, which is often low in patients with CTS and may thus underestimate the prevalence of neuP. Furthermore, the painDETECT was originally developed for spinally referred leg pain, 18 whereas the DN4 was validated in a mixed group of nerve disorders, 10 thus increasing its generalisability. The here-identified 80% of patients having neuP is comparable with other studies that also used the DN4 tool in patients with CTS (65%-77%). 37,56 The sample of convenience used in our study does not allow inferences about the general prevalence of neuP in CTS. Nevertheless, our data suggest that although most patients have neuP, a significant proportion has non-neuP, presumably of nociceptive origin. Compared with healthy controls, patients with CTS show loss of function to thermal and mechanical stimuli in the median nerve innervation territory independent of the presence of neuP. This represents the characteristic dysfunction of both small and large nerve fibers as previously reported in CTS 29,46 and other focal and systemic peripheral neuropathies. 22,42,54,55,57 Although somatosensory function was largely comparable between patients with and without neuP except for mechanical detection, the increasing severity of neuP was associated with a mechanical and thermal loss of function phenotype. This progressive loss of function phenotype with increasing neuP severity is in line with previous reports in patients with focal and systemic peripheral nerve injuries 22,42,55 and has been interpreted as an indication that increasing neuP severity is associated with a more pronounced neuropathy. Intriguingly and consistent with previous reports from systemic polyneuropathies, 42,55 changes in nerve fibre integrity in the skin or the extent of neurophysiological changes were not associated with the presence or severity of neuP. Extramedian but not proximal spread of symptoms was more common in patients with moderate/severe neuP. Such extramedian spread of symptoms has previously been shown to be associated with extramedian mechanical and thermal hyperalgesia 62 and has thus been attributed to central mechanisms. 62,63
In addition, we found a more pronounced hypoaesthesia in the radial nerve territory in patients with moderate/severe neuP compared with those with mild neuP. We have previously reported such extramedian hyposensitivity in a smaller cohort of patients with CTS. 46 Although widespread hyperalgesia is commonly accepted as a sign of central mechanisms, hyposensitivity as a sign of nerve dysfunction is usually expected to be restricted to the area of the affected nerve. There is, however, growing evidence that sensory loss can also be found in unaffected areas in patients with neuP. 23,27,28,53,60,61 In such instances, the extraterritorial sensory loss has been attributed to centrally mediated mechanisms, for instance, to the suppression of normal sensitivity by ongoing pain. 60 Taken together, our data suggest that central mechanisms are more prominent in patients with moderate/severe neuP. More pronounced central mechanisms may thus be an alternative to increased neuropathy severity as an explanation for the symptoms of patients with more severe neuP. Patients with neuP had more pronounced symptom severity and functional deficits than patients without neuP across a range of questionnaires. This is in line with previous reports in a range of chronic pain conditions. 4 As expected, given that the neuP subgroup allocation was governed by symptom severity, increasing neuP severity was associated with more pronounced symptoms; this was also the case for functional deficits. In addition, emotional well-being and sleep were more compromised with increasing neuP severity. These results are in line with previous reports of patients with CTS 37 and other peripheral neuropathies. 4,22,42,55 Nevertheless, the average ratings in our cohort were low. Also, it remains unanswered whether the deficits in emotional well-being are a consequence of or a risk factor for more severe neuP. The previously reported decrease of depressive symptoms after carpal tunnel decompression and its correlation with symptom resolution 13 suggest that depression may be secondary to CTS. This is further corroborated by our own prospective data, which confirm improvements in most emotional well-being parameters after carpal tunnel decompression (Supplementary Table 4, available at http://links.lww.com/PAIN/B203). Some limitations of this study need to be considered. Our study is a post hoc analysis of 2 published cohorts of exploratory character and therefore did not include an a priori sample size calculation. Nevertheless, our study contains the largest deeply phenotyped CTS cohort to date, and its size was large enough to detect moderate effect sizes among patient groups. However, numbers in some patient subgroups were relatively low. This may have contributed to the absence of group differences, for instance, in the planned contrasts of the neuP and no neuP groups. Another limitation to consider is that analgesic intake was not stopped before somatosensory profiling and may thus have influenced our readings, particularly those related to hyperalgesia. Clinical implications Although it is clear that there is a proportion of patients with CTS who do not experience neuP (20%), most patients do. Treatment for patients with CTS is currently not stratified for the presence of neuP. Our data suggest that particularly patients with moderate/severe neuP have a distinct phenotype characterised by a more pronounced and widespread somatosensory dysfunction and exacerbated deficits in emotional well-being and sleep quality.
Given the excessive wait times for carpal tunnel surgery 3 and the detrimental effects of poor emotional well-being and sleep quality on general health and quality of life, 12,44 these patients may need to be prioritised. Indeed, in our cohort, which was mostly recruited from surgery waitlists, symptom duration in the moderate/severe subgroup was less than half that of the other subgroups, potentially reflecting an earlier escalation to surgery. Although surgical decompression is successful in around 75% of patients, 9 nonsurgical management including pharmacological options remains first-line treatment. 43 Current guidelines recommend corticosteroid injections but not oral nonsteroidal anti-inflammatory drugs, without mentioning neuP medications. 36 In our cohort, patients with moderate/severe neuP took more paracetamol and opioids, which are not first-line pharmacological options for neuP. 14 It could be argued that patients with moderate/severe neuP may benefit from specific neuP medications, which often target the central mechanisms that appeared prominent in that group. However, trials of neuP medications such as gabapentin for patients with CTS show conflicting results. 15,25 Future studies are required to determine whether stratification by neuP phenotype may lead to more promising effects of these medications for patients with CTS and whether the risk-benefit balance of neuP medications is more favourable than that of surgery. Of note, our results suggest that the routine diagnostic tests for CTS (Phalen test, Tinel sign, carpal compression test, and electrodiagnostic tests) are not able to identify the presence of neuP. Therefore, simple screening tools such as the DN4 will facilitate the identification of patients who are more severely affected by neuP and may help guide management. Conclusions Our cohort has shown that neuP is common in patients with CTS and that its presence is accompanied by more severe symptoms and functional deficits. Apart from a deficit in mechanical detection, the presence of neuP was not associated with substantial changes in somatosensory function or structural nerve pathology. The severity of neuP was accompanied by a more pronounced somatosensory dysfunction. Of note, neuP severity was related to more pronounced deficits in emotional well-being and sleep quality and the presence of extraterritorial spread of symptoms, suggesting a more dominant contribution of central mechanisms. These differences between subgroups raise the question of whether treatment stratification may help improve management for patients with CTS.
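As an illustration of the planned-contrast approach used in the quantitative sensory testing analysis above, the following is a minimal Python sketch of a univariate ANOVA with Helmert contrasts followed by Bonferroni correction for a four-group design (healthy controls, no neuP, mild neuP, moderate/severe neuP). All data, group sizes, and effect sizes are hypothetical, and the level ordering is chosen so that the statsmodels/patsy Helmert convention (each level compared with the mean of the preceding levels) reproduces the three planned contrasts described in the text; the software used in the study may implement the contrasts differently.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

# Hypothetical data: one (log-transformed) mechanical detection threshold per
# participant.  With this level order, patsy's Helmert coding yields the
# contrasts: mild vs moderate/severe neuP, no neuP vs painful CTS, and
# healthy controls vs all patients.
rng = np.random.default_rng(1)
groups = ["mod_severe_neuP", "mild_neuP", "no_neuP", "control"]
sizes = [22, 48, 22, 46]          # illustrative group sizes only
shifts = [0.9, 0.5, 0.3, 0.0]     # illustrative loss-of-function shifts
data = pd.DataFrame({
    "group": np.repeat(groups, sizes),
    "threshold": np.concatenate(
        [rng.normal(mu, 1.0, n) for mu, n in zip(shifts, sizes)]
    ),
})

# Univariate ANOVA with Helmert-coded group factor.
model = smf.ols("threshold ~ C(group, Helmert, levels=groups)", data=data).fit()
contrast_p = model.pvalues[1:]    # drop the intercept; three planned contrasts remain

# Bonferroni correction for the number of planned contrasts.
reject, p_adj, _, _ = multipletests(contrast_p, alpha=0.05, method="bonferroni")
print(pd.DataFrame({"p_raw": contrast_p.values, "p_bonferroni": p_adj, "reject": reject}))
```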
2020-10-13T13:05:53.444Z
2020-10-09T00:00:00.000
{ "year": 2020, "sha1": "811a2bedb893ccd81bd06e51724ed1d7900550bb", "oa_license": "CCBY", "oa_url": "https://journals.lww.com/pain/Fulltext/2021/04000/Somatosensory_and_psychological_phenotypes.22.aspx", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "82bb241aedf2361ec8febd1b072ad01ef9e80bcb", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
7019387
pes2o/s2orc
v3-fos-license
Rationale and Design of the Glycemia Reduction Approaches in Diabetes: A Comparative Effectiveness Study (GRADE) OBJECTIVE The epidemic of type 2 diabetes (T2DM) threatens to become the major public health problem of this century. However, a comprehensive comparison of the long-term effects of medications to treat T2DM has not been conducted. GRADE, a pragmatic, unmasked clinical trial, aims to compare commonly used diabetes medications, when combined with metformin, on glycemia-lowering effectiveness and patient-centered outcomes. RESEARCH DESIGN AND METHODS GRADE was designed with support from a U34 planning grant from the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK). The consensus protocol was approved by NIDDK and the GRADE Research Group. Eligibility criteria for the 5,000 metformin-treated subjects include <5 years' diabetes duration, ≥30 years of age at time of diagnosis, and baseline hemoglobin A1c (A1C) of 6.8–8.5% (51–69 mmol/mol). Medications representing four classes (sulfonylureas, dipeptidyl peptidase 4 inhibitors, glucagon-like peptide 1 receptor agonists, and insulin) will be randomly assigned and added to metformin (minimum–maximum 1,000–2,000 mg/day). The primary metabolic outcome is the time to primary failure defined as an A1C ≥7% (53 mmol/mol), subsequently confirmed, over an anticipated mean observation period of 4.8 years (range 4–7 years). Other long-term metabolic outcomes include the need for the addition of basal insulin after a confirmed A1C >7.5% (58 mmol/mol) and, ultimately, the need to implement an intensive basal/bolus insulin regimen. The four drugs will also be compared with respect to selected microvascular complications, cardiovascular disease risk factors, adverse effects, tolerability, quality of life, and cost-effectiveness. CONCLUSIONS GRADE will compare the long-term effectiveness of major glycemia-lowering medications and provide guidance to clinicians about the most appropriate medications to treat T2DM. GRADE begins recruitment at 37 centers in the U.S. in 2013. The epidemic of type 2 diabetes (T2DM) that has affected the U.S. and other populations is associated with the relentless increase in obesity and threatens to become the major public health problem of this century, affecting up to one in three Americans if current trends continue (1). The most recent estimate of T2DM prevalence in the U.S. is >24 million people, with an incidence of 1.9 million new cases per year (1). Major human and economic costs associated with the epidemic are related to the development of long-term complications, including retinopathy, nephropathy, and neuropathy, that cause more cases of blindness, renal failure, and amputations than any other disease (2). Cardiovascular disease (CVD) is increased by two- to fivefold in diabetes and is the leading cause of death (3). The 2012 estimated annual cost of diabetes in the U.S. was $245 billion, with the greatest cost related to its chronic complications (4). In 2007, the annual expenditure for glucose-lowering drugs in the U.S. was $13 billion, almost doubling since 2001 (5). The estimate in 2012 was >$18 billion (4). There are several reasons for guarded optimism in the setting of this ongoing epidemic. First, clinical trials have demonstrated effective means of delaying or preventing the development of diabetes (6)(7)(8). If these interventions were implemented successfully, they could decrease the annual incidence of diabetes substantially.
Second, high-quality clinical trials have shown that lowering A1C to ~7% (53 mmol/mol), especially early after diagnosis, can substantially reduce the long-term complications that are characteristic of diabetes (9)(10)(11). Third, clinical studies have shown that antihypertensive and lipid-lowering medications can reduce CVD in T2DM as effectively as they do in the nondiabetic population (12) and that CVD risk in diabetes is decreasing (13). Finally, in the past two decades, the diabetes epidemic has spurred the development of eight new classes of glucose-lowering medications that may allow for more effective control of glycemia in T2DM and, thus, reduce complications (14). One of the major challenges for practitioners is to choose from the considerable armamentarium of glucose-lowering medications the best means of maintaining an appropriate level of glycemic control over time. Consensus algorithms have been developed to help clinicians to select among the numerous medications and their combinations for achieving and maintaining a target A1C of <7% (53 mmol/mol) (15)(16)(17). Other published algorithms selected different glycemic goals and recommended different strategies to achieve them (18). Recent American College of Physicians guidelines suggest that metformin is the only drug supported by solid evidence and that data are insufficient to choose a second agent (19). The dearth of head-to-head comparator studies of glucose-lowering medications, either alone or in combinations, and of trials that have lasted >6-12 months to examine the durable effects of interventions on glycemic control (10,11,20,21) has hampered the development of all these algorithms. Because T2DM is a progressive disease with worsening metabolic control over time, the long-term glycemia-lowering effects of interventions are particularly important. Safety, side effect profiles, tolerability, patient acceptance, burden of therapy, and cost are other important factors in the long-term treatment of this chronic, degenerative disease. Finally, recent position statements have emphasized individualization and patient-centered approaches to therapy (15), but few studies have examined which patients might do better or worse with specific therapies. Comparative effectiveness research has been identified as a high national priority in the U.S. (22). Similarly, improved understanding of phenotypic and genotypic differences between patients that affect responses to medications has been identified as an important element in individualizing therapy for maximum effectiveness (23). Of note, most industry-sponsored studies have not addressed either long-term comparative effectiveness or interpatient differences that may affect responses to therapy. As a result, patients with T2DM are currently treated without taking into account individual characteristics that might direct the choice of more effective interventions. The Glycemia Reduction Approaches in Diabetes: A Comparative Effectiveness Study (GRADE) is a pragmatic clinical trial that will make head-to-head comparisons of major drug classes currently used to treat T2DM, with the overarching goal of providing better guidance to practitioners in the choice of medications.
Specifically, GRADE will compare a sulfonylurea, dipeptidyl peptidase 4 (DPP-4) inhibitor, glucagon-like peptide 1 (GLP-1) receptor agonist, and basal insulin in patients with recently diagnosed T2DM treated with metformin and will examine their effectiveness in maintaining the glycemic goal (A1C <7% [53 mmol/mol]) over time. Other outcomes will include relative effects on selected microvascular complications and cardiovascular risk factors; patient-centered outcomes, such as adverse effects, acceptability, and tolerability; and cost-effectiveness. Finally, GRADE will study the phenotypic characteristics that underlie the success, failure, and adverse effects of the different combinations to guide individualized treatment. A notice of opportunity was issued to solicit donations of medications within the four classes and other supplies, and the specific medications were selected by a subgroup with no dualities of interest. In addition, requests for applications were issued to clinical centers, central laboratories, and support units, which were subsequently selected by peer review. Additionally, study forms, model informed consents, and a manual of operations were developed. GRADE was reviewed by an independent external evaluation committee in December 2011, reviewed and recommended for funding by an NIDDK study section in August 2012, and approved by the NIDDK Advisory Council in September 2012. Funding of the study began in October 2012 through a U01 grant (coprincipal investigators D.M.N. and J.M.L.) to The George Washington University Biostatistics Center. The data and safety monitoring board, an independent review group appointed by NIDDK, first convened on 1 February 2013. The GRADE steering committee, comprising the principal investigators of the clinical centers, representatives of the NIDDK, and selected members of the study group, approved the final study protocol in March 2013. GRADE will begin recruitment at 37 centers in mid-2013. Major specific aims The relative effects of four commonly used glucose-lowering medications with different mechanisms of action when added to metformin will be compared for the following: (1) maintenance of metabolic control, defined as the time to primary failure with an A1C ≥7.0% (53 mmol/mol), confirmed, while receiving maximally tolerated doses of both metformin up to 2,000 mg/day and the assigned medication; (2) the time to secondary metabolic failure with an A1C >7.5% (58 mmol/mol), confirmed, requiring the addition of basal insulin for oral agent-treated subjects and intensification of insulin therapy for those assigned to basal insulin at baseline; (3) the time to tertiary metabolic failure with an A1C >7.5% (58 mmol/mol), confirmed, requiring implementation of intensive insulin therapy with basal plus rapid-acting insulin, while treated with metformin, the assigned study medication, and basal insulin, among those not originally assigned to basal insulin; (4) the cumulative incidence of diabetes complications, such as microalbuminuria; and (5) other metabolic outcomes, adverse effects, and effects on CVD risk factors, quality of life, tolerability, and cost-effectiveness. In addition, we will determine the phenotypic characteristics associated with response to and failure of the four different medication combinations and identify factors that determine the success and/or failure of specific regimens over time, including longitudinal mechanistic investigations of β-cell function.
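The primary, secondary, and tertiary metabolic outcomes above are all defined as the time to a threshold A1C that is subsequently confirmed. As a purely illustrative sketch (not the trial's analysis code), the following Python function shows how such a confirmed-failure time could be derived from a participant's quarterly central-laboratory A1C values. It uses a single inclusive threshold for simplicity, whereas the protocol uses ≥7.0% for the primary outcome and >7.5% thereafter, and it omits visit scheduling, maximal-dose requirements, and the accelerated confirmation for very high A1C values.

```python
from typing import Optional

def time_to_confirmed_failure(visit_months: list[int],
                              a1c_values: list[float],
                              threshold: float = 7.0) -> Optional[int]:
    """Return the month of the visit that confirms metabolic failure
    (two consecutive A1C values at or above `threshold`), or None if
    failure is never confirmed during follow-up."""
    for i in range(len(a1c_values) - 1):
        if a1c_values[i] >= threshold and a1c_values[i + 1] >= threshold:
            return visit_months[i + 1]  # outcome dated at the confirmatory visit
    return None

# Hypothetical quarterly A1C trajectory for one participant.
months = [3, 6, 9, 12, 15, 18, 21, 24]
a1c = [6.6, 6.8, 7.1, 6.9, 7.2, 7.4, 7.6, 7.8]
print(time_to_confirmed_failure(months, a1c, threshold=7.0))   # -> 18
print(time_to_confirmed_failure(months, a1c, threshold=7.5))   # -> 24
```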
Design GRADE will be a pragmatic, parallel-group clinical trial that compares as objectively as possible the effects of four different glucose-lowering medications in metformin-treated patients with relatively recently diagnosed T2DM. Subjects will adjust metformin during the run-in phase to achieve maximum tolerated doses of 2,000 mg/day, with at least 1,000 mg/day required for eligibility (Fig. 1). The trial is unmasked for practical reasons because it will compare oral agents and injectable medications. Eligible subjects will be randomly assigned to one of the four medications shown in Fig. 1. The principal comparisons among these medications will start from the time of randomization. The trial will be conducted under an intention-to-treat design. All randomized subjects will continue follow-up and complete all outcome assessments until the planned conclusion of the study (planned follow-up of 4-7 years, depending on the time of entry), including those who reach the primary outcome. Otherwise, analyses of all other outcomes would be susceptible to a healthy survivor effect because the only subjects evaluated in the out years would be those who had not yet experienced primary failure of the assigned regimen. To encourage retention in the study over time and ensure a longer exposure to the study medications for the purposes of analyses of other outcomes, assigned study medications will be continued until the need for intensification of insulin therapy with basal plus rapid-acting insulin (Fig. 2). GRADE was designed entirely by the planning group (the authors) with input from an NIDDK-appointed external evaluation committee and the investigators. No pharmaceutical manufacturers contributed to the planning or design or will participate in the conduct of GRADE. Medication and supply manufacturers were approached to donate product after the medications and supplies had been selected by members of the planning group without any dualities of interest. Study population and recruitment GRADE will compare the relative effects of the four interventions in relatively recently diagnosed T2DM subjects treated with metformin, with the recognition that earlier treatment is more likely to maintain endogenous insulin secretion and promote advantageous levels of glycemia over time (24). Eligibility criteria enumerated in the protocol (Supplementary Data) and summarized in Table 1 reflect a balance between the stringent requirements usually applied in recruiting a clinical trial population and the desire to create a pragmatic and easily translatable study. To be eligible, potential subjects must have an A1C of 6.8-8.5% (51-69 mmol/mol), as measured in the central laboratory, after metformin therapy has been maximized, as tolerated, during the run-in period. The study cohort (Fig. 1) of 5,000 subjects will include patients with <5 years' diabetes duration who are treated with metformin but no other glucose-lowering medications. The majority of potential subjects will be identified on the basis of a prior diagnosis of diabetes detected through reviews of medical histories and self-reports and aided by the use of electronic medical records and other databases. GRADE will aim to recruit as much representation as possible from racial and ethnic minority groups that are disproportionately affected by T2DM and a substantial fraction (>20%) who are ≥60 years of age.
Recruitment and implementation of the GRADE protocol will take place at 37 clinical centers, which were selected by peer review through an open competition process. The GRADE clinical centers (Supplementary Data) are distributed throughout the U.S. (Supplementary Data) and were selected in part because of their ability to recruit a diverse population of research subjects, including patients >60 years of age. Each clinical center will enroll 150 eligible subjects to reach the study-wide total enrollment of 5,000 subjects over a period of ~3 years. Interventions Rationale. Metformin was selected as the foundation therapy according to the same rationale used in most of the recently developed consensus algorithms (15)(16)(17)(18), namely, its long-term clinical experience, effectiveness in lowering glycemia over a wide range of A1C levels without causing hypoglycemia, weight-neutral or weight-loss effect, putative cardiovascular risk reduction (10,11,25), safety and side effect profiles, high level of patient tolerance, and low cost. Recent surveys have shown that a large majority of patients with recent-onset T2DM are treated with metformin (26), making this choice both practical and clinically relevant. The selection of the other study medications from the ten classes of available agents to add to metformin was predicated on the most commonly used approved combinations and the availability of preliminary data to support their glycemia-lowering effectiveness, safety, and tolerability. Increasing concern about the future of pioglitazone, owing to the putative increased risk for bladder cancer (27) superimposed on previously established safety concerns regarding volume retention and bone loss, contributed to its elimination from the study design. The potential adverse impact on recruitment of including a drug that is receiving increasing and highly visible negative attention was an additional consideration. Because the four medication classes proposed capture the majority of glucose-lowering medications prescribed, and all four combinations have been approved by the Food and Drug Administration and its European and Canadian counterparts, the study will be clinically relevant and generalizable, and its results immediately and widely translatable to practice. Medications. We selected specific agents within the four classes as dictated by their specific attributes. All have been studied (28)(29)(30)(31) and are approved by the Food and Drug Administration in their proposed initial combinations. The criteria by which specific agents were chosen within classes by members of the planning group without any dualities of interest included differences between the agents in the following: lowering glycemia, published side effect profiles, effects on CVD risk factors, clinical experience, ease of administration, and acceptability. In cases where there were no appreciable or substantive differences between agents within the classes, consideration was given to those agents that are used most frequently and were made available by the manufacturers. At the time of randomization, all subjects will be assigned to one of the following medications in each of the named classes: sulfonylurea (glimepiride), DPP-4 inhibitor (sitagliptin), GLP-1 receptor agonist (liraglutide), or insulin (glargine) (Fig. 2). The number of medications selected in GRADE was predicated on resource availability.
The other classes of glucose-lowering medications, aside from pioglitazone (discussed previously), that were considered but not chosen were the α-glucosidase inhibitors, non-sulfonylurea sulfonylurea receptor agonists, rapid-acting insulins, the bile acid sequestrant colesevelam, and the dopamine agonist bromocriptine. They were not selected for a number of reasons, including potential safety concerns, limited clinical use and experience in recent-onset T2DM, and relatively low efficacy, poor tolerability, and frequent side effects. No agents in the most recent class of glucose-lowering medications, the SGLT-2 inhibitors, had been approved during the planning phase of GRADE. Moreover, none of them had sufficient clinical use or experience to be acceptable in the study. Diabetes management strategy. All the medications will be used according to their labeling and/or usual practice (32). Adjustments of glimepiride or insulin will be based on self-monitoring of blood glucose, aiming for fasting glucose levels between 70 and 130 mg/dL without symptomatic hypoglycemia. Additionally, medications will be titrated to achieve A1C values <7.0% (53 mmol/mol) up to the maximally tolerated dose (Table 2). GRADE staff at each clinical center will assume responsibility for glycemic management of subjects according to the GRADE protocol and will communicate this arrangement with the primary-care providers. Of note, GRADE staff will not be responsible for routine surveillance for diabetes complications or for the treatment of other cardiovascular risk factors; however, the results of clinically relevant physical examinations and laboratory results will be communicated to subjects' care providers to aid clinical management. The randomly assigned medication and metformin will be continued until the secondary metabolic outcome (see OUTCOMES) has been reached (Fig. 2), at which time basal insulin (glargine) will be added for the three groups that were not originally assigned to insulin, using the same algorithm as in the original glargine-assigned treatment group. The rationale for the continued combination therapy is to maximize the time while receiving the assigned treatment and to enable further study of which combinations may delay further metabolic worsening to the need for insulin intensification, the tertiary metabolic outcome. Moreover, the use of three agents has become increasingly popular in routine clinical practice. For the group that was originally assigned to glargine, insulin intensification with rapid-acting (aspart) insulin will be started and adjusted by GRADE clinic staff according to the study protocol after the secondary metabolic outcome has been reached (Fig. 2). In the three groups originally assigned to treatment other than glargine, intensification of insulin therapy with rapid-acting insulin will be implemented when the tertiary metabolic outcome is reached. Their randomly assigned medication will be stopped at that time. Self-monitoring of blood glucose. Subjects assigned to insulin or sulfonylurea will, for safety reasons (to prevent hypoglycemia), self-monitor blood glucose levels on a specified schedule and adjust doses to achieve glucose goals according to usual care recommendations (32). Self-monitoring of blood glucose levels will also be recommended for safety reasons for all subjects with symptoms suggestive of hypoglycemia or hyperglycemia or during intercurrent illness likely to affect glucose control. Outcomes Metabolic outcomes.
The primary outcome is the time to primary metabolic failure of the randomly assigned treatment, which is defined as the time to an initial A1C ≥7% (53 mmol/mol), subsequently confirmed at the next quarterly visit, while being treated at maximum tolerable doses of both metformin and the second randomly assigned medication. If the second (confirmatory) A1C is <7% (53 mmol/mol), then the primary outcome is not yet reached. If the initially observed A1C is >9% (75 mmol/mol), then confirmation will be performed within 3-6 weeks. Taking into account the need for confirmation, the earliest time that the primary end point can be confirmed is at 6 months after randomization for subjects whose A1C at 3 months is ≥7%, and at 4 months if the 3-month A1C is >9%. All A1C results will be measured in the study central laboratory. The secondary outcome is the time to the observation of an A1C >7.5% (58 mmol/mol), subsequently confirmed, while treated with the originally assigned medications and metformin. For the three groups originally assigned to medications other than insulin, the tertiary outcome is the time to an A1C >7.5% (58 mmol/mol), confirmed as previously described, while receiving metformin, the originally assigned medication, and basal insulin. Each of the three metabolic outcomes will be counted regardless of adherence to assigned medications, according to the principles of intention-to-treat analysis. Other outcomes. A full list of the GRADE outcomes is included in the protocol (Supplementary Data). They can be considered in the following categories: metabolic, such as mean A1C and fasting plasma glucose levels, frequency of hypoglycemia, and measures of insulin secretion and sensitivity; cardiovascular, including risk factors and major events; microvascular, such as albuminuria, estimated glomerular filtration rate (eGFR), and peripheral neuropathy; adverse events specific to the medications under study; adverse effects; adherence and tolerability to metformin and the assigned medications and treatment satisfaction; health economics; and other outcomes, including mortality, hospital admissions, cognitive function, and cancer. Baseline and follow-up measurements of phenotypic variables (demographic, physiologic, and genetic) will facilitate the study of patient factors that may mediate responsiveness to different therapies. Oral glucose tolerance testing, performed annually, will contribute to our understanding of the mechanisms of medication success and failure. From these assessments, a number of different outcome measurements will be obtained with the goal of assessing the differential metabolic effects of each drug combination on β-cell function and insulin sensitivity over time. These measurements, combined with the phenotypic measures, will be used to determine patient-specific characteristics that are associated with responsiveness or failure to respond to specific agents and will facilitate an understanding of how to individualize therapy. Statistical analyses and power calculations All analyses will compare the randomly assigned treatment groups under the intention-to-treat principle with use of the treatment as assigned to each subject and all available data from all subjects. Primary outcome. The cumulative incidence of the primary outcome within each treatment group will be estimated with a modified, discrete-time Kaplan-Meier estimate, allowing for periodic outcome assessments (33).
Differences between groups will be tested and relative risk estimates obtained from a Cox proportional hazards model for discrete time observations adjusted for the baseline A1C (33). A single overall omnibus test at the 0.05 significance level will be conducted, as well as significance tests and relative risk (hazard ratio) estimates for each of the six pairwise drug group comparisons, with P values adjusted with the Holm closed sequential multiple testing procedure (34). If tests of the proportional hazards assumption do not apply, inferences (CIs and P values) will be obtained from robust information sandwich estimates of SEs (35).
Table 1. Eligibility criteria*
Inclusion criteria: 1. ≥30 years of age at time of diagnosis; 2. Duration of diagnosed diabetes <5 years, determined as accurately as possible on the basis of available records at screening; 3. A1C criteria (at final run-in visit, ~2 weeks before randomization): 6.8-8.5% (51-69 mmol/mol); 4. Taking a daily dose of ≥1,000 mg metformin for a minimum of 8 weeks at final run-in; 5. Willingness to administer daily subcutaneous injections, take a second diabetes drug after randomization, potentially initiate insulin, intensify insulin therapy if study metabolic goals are not met, and perform self-monitoring of blood glucose; 6. A negative pregnancy test for all women of childbearing potential (i.e., premenopausal, not surgically sterile); 7. Provision of signed and dated informed consent before any study procedures.
Exclusion criteria: 1. Suspected type 1 diabetes (lean with polyuria, polydipsia, and weight loss with little response to metformin) or secondary diabetes resulting from specific causes (e.g., previously diagnosed monogenic syndromes, pancreatic surgery, pancreatitis); 2. Current or previous (within past 6 months) treatment with any diabetes drug or glucose-lowering medication other than metformin, including short-term insulin use during hospitalization; 3. More than 5 years of treatment with metformin at time of randomization; 4. History of intolerance, allergy, or other contraindications to any of the proposed study medications; 5. A life-threatening event within 30 days before screening or currently planned major surgery; 6. Any major cardiovascular event in the previous year, including history of myocardial infarction, stroke, or vascular procedure, such as coronary artery or peripheral bypass grafting, stent placement (peripheral or coronary), or angioplasty; 7. Plans for pregnancy during the course of the study for women of childbearing potential; 8. History of or planning for bariatric surgery, including banding procedures or surgical gastric and/or intestinal bypass; 9. History of congestive heart failure (New York Heart Association class III or IV); 10. History of conditions that are specific contraindications to any of the study medications; 11. Serum creatinine level ≥1.4 mg/dL in women and ≥1.5 mg/dL in men or end-stage renal disease requiring renal replacement therapy; 12. History of cancer, other than nonmelanoma skin cancer, that required therapy in the 5 years before randomization; 13. Treatment with oral or systemic glucocorticoids (other than short-term treatment, e.g., for poison ivy) or disease likely to require periodic or regular glucocorticoid therapy (inhaled steroids allowed); 14. Treatment with atypical antipsychotics; 15. Clinically or medically unstable with expected survival <1 year.
*A complete list of the eligibility criteria is included in the protocol (Supplementary Data).
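To illustrate the pairwise treatment comparisons described above, the following is a minimal Python sketch of six pairwise Cox models adjusted for baseline A1C, with Holm correction of the resulting P values. It uses a continuous-time Cox model from the lifelines package as a simplification of the discrete-time model specified in the protocol, and all data are simulated placeholders; this is not the trial's analysis code.

```python
from itertools import combinations
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from statsmodels.stats.multitest import multipletests

# Simulated placeholder data: one row per participant with the randomized drug,
# baseline A1C, follow-up time (years), and primary-failure indicator.
rng = np.random.default_rng(2024)
drugs = ["glimepiride", "sitagliptin", "liraglutide", "glargine"]
n_per_group = 200  # illustrative only; the trial plans roughly 1,250 per group
df = pd.DataFrame({
    "drug": np.repeat(drugs, n_per_group),
    "baseline_a1c": rng.normal(7.5, 0.4, 4 * n_per_group),
    "time": rng.exponential(scale=1 / 0.0875, size=4 * n_per_group),
})
df["failed"] = (df["time"] <= 7).astype(int)   # administrative censoring at 7 years
df["time"] = df["time"].clip(upper=7.0)

# Six pairwise Cox models adjusted for baseline A1C, then Holm adjustment.
pairs, pvals = [], []
for a, b in combinations(drugs, 2):
    sub = df[df["drug"].isin([a, b])].copy()
    sub["treat_b"] = (sub["drug"] == b).astype(int)
    cph = CoxPHFitter().fit(
        sub[["time", "failed", "treat_b", "baseline_a1c"]],
        duration_col="time", event_col="failed",
    )
    pairs.append(f"{a} vs {b}")
    pvals.append(float(cph.summary.loc["treat_b", "p"]))

reject, p_holm, _, _ = multipletests(pvals, alpha=0.05, method="holm")
print(pd.DataFrame({"comparison": pairs, "p_raw": pvals, "p_holm": p_holm,
                    "reject_at_0.05": reject}))
```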
Other outcomes. Similar analyses will be applied to other secondary discrete time-to-event outcomes, such as the time to secondary metabolic failure or to microalbuminuria based on 6-monthly albumin:creatinine ratio measurements. For time-to-event outcomes measured nearly continuously, such as the number of days to a cardiovascular event, this strategy will use the corresponding methods for continuous time observations. For longitudinal analyses of binary outcomes over time, such as the proportion of subjects (prevalence) at each visit who are still maintaining an A1C <7% while receiving the originally assigned therapy, the odds will be compared between groups with use of a repeated-measures logistic model fit through generalized estimating equations with a robust estimate of the covariance structure (34). Longitudinal analyses of quantitative outcomes over time (e.g., A1C) will use a longitudinal normal errors repeated-measures model for the estimation of group mean levels over time (36). For longitudinal assessments of the rate of change of an outcome over time, such as the slope of the decline in eGFR, a random-effects (random coefficient) model will be used to estimate the mean slope within each treatment group, allowing for random variation of slopes among subjects (36). Comparison of rates of events (e.g., hypoglycemia) will use Poisson regression models with robust information sandwich variance estimates (33). Composite outcomes. A multivariate one-sided (or one-directional) test of stochastic ordering will be conducted to compare differences between groups for multiple outcomes simultaneously, such as A1C, weight, and hypoglycemia. The O'Brien mean rank score test (37) will be applied to an analysis of multiple quantitative (or ordinal) components at a single point in time. The Wei-Lachin test of stochastic ordering will be used to test other components, including proportions, rates, and event times (38). In addition, a single composite outcome can be defined from the components, such as the prevalence of subjects at 4 years who are still able to maintain an A1C <7% without having experienced severe hypoglycemia or gained weight. A longitudinal analysis of the proportions meeting this criterion at each visit over time and a survival analysis of such outcomes will also be conducted. Proportional hazards and parametric regression models will be used to assess the ability of multiple variables simultaneously to predict the time to primary or to secondary failure. Subgroup and stratified analyses. Analyses will also assess the differences in study outcomes within subgroups defined by baseline characteristics, including race/ethnicity, sex, age, diabetes duration, weight, body mass index, A1C, and measures of insulin sensitivity, insulin secretion, and the glucose disposal index. For each factor, the treatment groups will be compared separately within each subgroup (e.g., males, females) with a test of homogeneity between strata. For a quantitative variable (e.g., age), an additional analysis will be conducted with use of the quantitative covariate rather than simply of the discrete strata. Sample size and power With recruitment over 3 years and a total study duration of 7 years, continued follow-up of all subjects to study end would provide 4-7 years of follow-up. To be conservative, sample size and power for the primary analysis were computed assuming a lag in recruitment, with 40% of subjects recruited in the first half of the 3-year recruitment period (39).
Assuming that 4% will be lost to follow-up before reaching the primary outcome, the average follow-up time would be 4.8 years, with 15% of subjects lost to follow-up. Primary outcome. On the basis of the ADOPT (A Diabetes Outcomes Progression Trial) (20), we conservatively estimated a hazard rate of 0.0875 per year for the primary outcome. With the aforementioned assumptions, a sample size of 1,242 per group (rounded to 1,250) provides 90% power to detect a 25% risk difference at a significance level of 0.00833, adjusting for six pairwise tests. Secondary outcomes: microalbuminuria and clinical CVD. The hazard rate of onset of microalbuminuria is projected to be ~0.04 per year in whichever group has a higher event rate (40). The 5,000 subjects provide 88% power with a hazard rate of 0.04 per year, or 92% with 0.045 per year, to detect a 33% difference in risk for microalbuminuria between any pair of groups. In the ADOPT study (20), the incidence of major atherosclerotic cardiovascular events was 0.76% per year and of major atherosclerotic cardiovascular events plus congestive heart failure, 1.14% per year. Assuming an incidence rate of 1% per year, GRADE will provide 80% power to detect a 50% difference in the risk of CVD between any pair of drug groups, adjusted for six pairwise comparisons. CONCLUSIONS GRADE is a comparative effectiveness study that aims to compare four major classes of glucose-lowering medications in relatively recently diagnosed T2DM patients treated with metformin. The study is unique in comparing as many major diabetes treatments as possible, given available study resources, over a clinically relevant period. GRADE is also unique because it will study the totality of the effects of the medications, including an emphasis on patient-centered outcomes in addition to metabolic outcomes. Finally, its focus on individual demographic, clinical, and other factors that may influence a differential response to medications will add to our understanding of therapy for T2DM. GRADE results should not only help practitioners to choose the medications that are the most appropriate with regard to metabolic control and patient-oriented outcomes, but should also provide insights to allow individualization of treatment. The major aims of GRADE, which focus on a comparison of the effectiveness and other clinically important attributes of glucose-lowering medications, have major health economic implications in addition to their obvious public health impact. The cost of glucose-lowering medications accounts for a disproportionate share of medication costs, doubling from 6.3% of all prescribed drug spending in the U.S. in 2001 to 12.2% in 2007 (5). The planning process for GRADE differed from that for most large, multicenter trials sponsored by NIDDK. The U34 planning grant was used to allow a relatively small group of investigators to plan, design, and develop the study to the point of implementation. This process contrasts with the usual design of multicenter trials by a large group of investigators who have been selected on the basis of their response to a request for applications. GRADE investigators will leverage the core study to amplify the range of scientific inquiry by actively promoting ancillary studies. These independently funded projects will take advantage of the study design and cohort.
Some, such as genetics studies, will require minimal subject participation, whereas others may involve additional study procedures; however, all ancillary proposals will be judged on the basis of clinical and scientific value and burden to the subjects and centers.
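Returning to the sample-size assumptions quoted above (a reference-group hazard of 0.0875 per year, a 25% risk reduction, 90% power, two-sided alpha of 0.00833 after adjusting for six pairwise tests, and roughly 4.8 years of average follow-up), the following is a simplified, Schoenfeld-style back-of-the-envelope calculation in Python. It ignores the staggered recruitment and loss-to-follow-up modelling used in the actual design, so it only approximates the quoted figure of 1,242 subjects per group.

```python
from math import ceil, exp, log
from scipy.stats import norm

# Planning assumptions quoted in the text (illustrative recalculation only).
alpha = 0.00833          # two-sided, adjusted for six pairwise comparisons
power = 0.90
hr = 0.75                # 25% relative risk reduction
lam0 = 0.0875            # reference-group hazard per year
follow_up = 4.8          # average follow-up in years

# Schoenfeld approximation: events needed for a two-group log-rank/Cox test
# with 1:1 allocation.
z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
events_needed = ceil(4 * z**2 / log(hr) ** 2)

# Expected probability of reaching the primary outcome during follow-up,
# assuming exponential failure times and averaging over the two arms.
p_event = 0.5 * (1 - exp(-lam0 * follow_up)) + 0.5 * (1 - exp(-hr * lam0 * follow_up))
n_per_group = ceil(events_needed / (2 * p_event))

print(f"Events needed: {events_needed}")      # roughly 740
print(f"Subjects per group: {n_per_group}")   # roughly 1,200 (the text quotes 1,242)
```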
2016-05-12T22:15:10.714Z
2013-07-11T00:00:00.000
{ "year": 2013, "sha1": "8974e092ca31b56358bda6db1e5825d7efe3af4d", "oa_license": "CCBYNCND", "oa_url": "https://care.diabetesjournals.org/content/diacare/36/8/2254.full.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "7a3100d730cabf8d7a9e169dceafb897dd023fa0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
264889738
pes2o/s2orc
v3-fos-license
Differences in Rhizosphere Microbial Community Structure and Composition in Resistant and Susceptible Wheat to Fusarium Head Blight Fusarium head blight (FHB) is a serious disease of wheat that threatens wheat production worldwide. In this study, high-throughput sequencing technology was used to analyze the rhizosphere soil microbial metagenomes of 4 wheat cultivars with different levels of resistance to FHB. The results showed that there were differences in the diversity, structure ... Introduction Bread wheat (Triticum aestivum L.) is one of the most significant crops, and its global production ranks third after corn and rice [1]. Wheat diseases are a key factor affecting the quality and yield of wheat, and their occurrence results in severe losses [2]. FHB is a fungal disease caused by various Fusarium species, such as Fusarium graminearum, F. culmorum, and F. moniliforme. Differences in geographical distribution and climatic environment result in variation of the dominant species. For example, F. graminearum, F. culmorum, F. poae, and F. avenaceum are the most important dominant species in Europe [3,4], while the main pathogenic species in China is only F. graminearum [5]. Infected wheat grains are contaminated by the fungal toxins produced by F. graminearum, which can persist in the food chain for a long time. The contaminated food is dangerous to animals and humans and can even cause death [6][7][8]. At present, chemical control is regarded as the principal measure to control FHB in China. The long-term use of pesticides not only enhances the resistance of the pathogenic microorganisms but also pollutes the environment [7]. Attention has therefore turned to the microbial populations of the plant rhizosphere and to applying their interactions in agricultural systems [9][10][11]. The soil is an essential material for plant survival. Moreover, the soil microbial community has a direct impact on plant growth and development [12,13]. There are significant differences in microbial community structure among different plant varieties. Both individual strains of bacteria and the rhizosphere soil microbial community play important roles in plant health [14,15]. The plant rhizosphere, with its rich microbial diversity, exhibits dynamic interactions [16]. The microbiome is an extended or secondary genome, including bacteria, fungi, viruses, protozoa, and archaea [17,18]. Rhizosphere microorganisms can be isolated from wheat rhizosphere soil. Beneficial rhizosphere microorganisms not only enhance host resistance but also synthesize hormones that benefit plant growth and promote plant metabolism [19,20]. By exploring the bacterial community structure of root endophytes and rhizosphere soil in the traditional rice (Oryza sativa) variety "Yuelianggu" in the Yuanyang terraces, Zhao et al. found that plant endophytes are correlated to some extent with rhizosphere microorganisms [21,22]. Li et al.
have examined the impact of rhizosphere soil microbial diversity on cotton resistance to Verticillium wilt and showed that the dominant rhizosphere fungal species of disease-resistant cotton suppressed the Verticillium wilt fungus significantly more strongly than those of disease-susceptible cotton. Furthermore, they found that beneficial rhizosphere microorganisms could regulate the composition of the rhizosphere soil microbial community, which is likely to contribute to effective control of cotton Verticillium wilt [23]. The correlation between differences in rhizosphere soil microbial diversity and disease resistance can be explored through diversity analysis of the rhizosphere soil microbial communities of varieties with different disease resistance, covering both metabolic functions and structures [24]. With the development of molecular biology and bioinformatics, the plant microbiome has the potential to enhance agricultural production, which is expected to help meet future food demand worldwide [20,25]. In this study, the rhizosphere microbiomes of wheat varieties with different FHB resistance were analyzed to elucidate the differences in rhizosphere soil microbial diversity. This study provides a theoretical basis for green prevention and control of FHB. Experimental Materials. The seeds of the test wheat are maintained in our laboratory. Wheat varieties included the highly resistant FHB variety "Su Mai 3" (SM3), the moderately resistant variety "Yang Mai 16" (YM16), the moderately susceptible variety "Zheng Mai 9023" (ZM9023), and the highly susceptible variety "Zhou Mai 20" (ZM20). During the wheat seedling stage, fertilizer, water, pests, diseases, and weeds were managed strictly according to the technical requirements of local agricultural production. A five-point sampling method was used to sample wheat at the heading and flowering stages. In order to reduce the experimental error caused by environmental factors, the different wheat varieties were planted in plots. The samples in this study were collected from experimental plots under continuous cropping. Samples were taken by digging up an intact wheat plant and gently shaking the plant so that loosely attached soil was dislodged completely. The soil that fell off was regarded as nonrhizosphere soil, and the soil that adhered to the plant root system was regarded as rhizosphere soil [26]. To maintain the integrity of the root systems, the soil around the wheat roots was first dug to a depth of at least 15 cm, and about 500 g of rhizosphere soil was collected from each plot. The soil was then passed through a 20-mesh sieve, placed in sterile bags and numbered, and finally stored at -80 °C. Results and Discussion The effects of different FHB-resistant varieties on rhizosphere soil microorganisms were elucidated through sampling and analysis of experimental fields. In this study, we explored the effects of wheat varieties resistant and susceptible to FHB on the diversity, structure, and function of the rhizosphere soil microbial population by high-throughput sequencing.
3.1. Sequencing Quality Evaluation. A total of 1,099,541 pairs of reads were obtained from the 12 bacterial samples, and a total of 1,020,929 clean tags were generated by splicing and filtering of the paired-end reads. Each sample included at least 62,894 clean tags, with an average of 85,077 clean tags. Tags were clustered at a 97% similarity level, and 1,999 and 802 OTUs were obtained for bacteria and fungi, respectively (Figures 1(b) and 1(d)). There were 1,637 OTUs shared by the bacteria of the four wheat varieties, 1,824 OTUs shared by SM3 and YM16, and 1,750 OTUs shared by ZM9023 and ZM20. There was only 1 unique OTU in YM16, 52 unique OTUs in ZM20, and no unique OTUs in SM3 and ZM9023 (Figure 1(a)). A total of 543,495 pairs of reads were obtained from the 12 fungal samples of the four wheat varieties, each with three biological replicates. A total of 469,498 clean tags were generated by quality control and paired-end splicing. Each sample included at least 29,408 clean tags, with an average of 39,125 clean tags. The fungi had a total of 802 OTUs. We found that the four varieties shared 318 OTUs, with 1,824 OTUs common to the two FHB-resistant varieties SM3 and YM16 and 436 OTUs common to the susceptible varieties ZM9023 and ZM20. SM3, YM16, ZM9023, and ZM20 had 20, 21, 16, and 66 unique OTUs, respectively (Figure 1(c)). The quality assessment of the sequencing data is shown in Supplementary Tables 1 and 2. 3.2. The Bacterial Diversity of Resistant Varieties Was Higher than That of Susceptible Varieties, and the Fungal Diversity Was Lower than That of Susceptible Varieties. The species diversity and abundance of rhizosphere microorganisms in the different wheat varieties were assessed using the ACE index, Chao1 index, Shannon index, and Simpson index. The bacterial ACE and Chao1 indices followed the order ZM9023 > SM3 > YM16 > ZM20, but the differences were not significant (P ACE = 0.077, P Chao1 = 0.064) (Figure 2). The Shannon index followed the order SM3 > YM16 > ZM9023 > ZM20. Moreover, the Shannon index of resistant varieties was significantly higher than that of susceptible varieties (P Shannon = 0.005), which was consistent with the results of Li et al. [30]. Wu et al.'s study also showed that the bacterial α-diversity of Chinese wheat yellow mosaic virus- (CWMV-) resistant varieties (FRW) was higher than that of susceptible varieties (FSW) [31]. The Simpson index followed the order YM16 > SM3 > ZM20 > ZM9023, and the differences were also not significant (P Simpson = 0.072). The bacterial diversity of the resistant varieties SM3 and YM16 was higher than that of the susceptible varieties ZM9023 and ZM20. However, the highest bacterial abundance among the four varieties was found in ZM9023, followed by SM3, YM16, and ZM20 (Figure 2(a)). The fungal α-diversity indices of the four varieties are shown in Figure 2(b), with the greatest species diversity in ZM20 and the highest fungal abundance in YM16 (P ACE = 0.67, P Chao1 = 0.64, P Shannon = 0.67, and P Simpson = 0.49, Figure 2(b)).
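For readers unfamiliar with these α-diversity indices, the following is a minimal Python sketch of how Shannon, Simpson, and (bias-corrected) Chao1 values can be computed from a single OTU count vector. The count vector is hypothetical; in the study these indices would be computed per sample from the clustered OTU table, and the ACE index is omitted here for brevity.

```python
import numpy as np

def shannon(counts: np.ndarray) -> float:
    """Shannon diversity H' = -sum(p_i * ln p_i) over non-zero OTUs."""
    counts = counts[counts > 0]
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def simpson(counts: np.ndarray) -> float:
    """Simpson diversity expressed as 1 - sum(p_i^2); higher values mean more diversity."""
    p = counts[counts > 0] / counts.sum()
    return float(1.0 - (p ** 2).sum())

def chao1(counts: np.ndarray) -> float:
    """Bias-corrected Chao1 richness: S_obs + F1*(F1 - 1) / (2*(F2 + 1))."""
    s_obs = int((counts > 0).sum())
    f1 = int((counts == 1).sum())   # singleton OTUs
    f2 = int((counts == 2).sum())   # doubleton OTUs
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

# Hypothetical OTU count vector for one rhizosphere sample.
otu_counts = np.array([120, 85, 40, 22, 9, 5, 3, 2, 1, 1, 1, 0, 0])
print("Shannon:", round(shannon(otu_counts), 3))
print("Simpson:", round(simpson(otu_counts), 3))
print("Chao1:  ", round(chao1(otu_counts), 2))
```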
The Microbial Community Structure of the Highly Susceptible Cultivar ZM20 Was Significantly Different from That of the Other Three Cultivars. We conducted PCoA of the bacterial and fungal community structure in the rhizosphere soil of the wheat cultivars, and the results showed that both bacterial and fungal communities clustered primarily by resistance. The resistant varieties SM3 and YM16 and the moderately susceptible variety ZM9023 had similar microbial community structure, while the highly susceptible variety ZM20 differed significantly from the others (Figures 3(a) and 3(b)). We analyzed the bacterial and fungal PCoA of the different wheat rhizospheres based on the binary-Jaccard algorithm; the closer the samples, the greater their similarity. As shown in Figure 3, the cumulative contribution of the variance of the first three principal components for soil bacteria and fungi was 74.53% and 50.43%, respectively, while the remaining principal components contributed less and were ignored. Therefore, the first three principal components (PC1, PC2, and PC3) were regarded as the main factors explaining the differences in the microbiota. SM3, YM16, and ZM9023 clustered together, while the ZM20 samples clustered separately. The results showed that SM3, YM16, and ZM9023 had similar community structure and that there were significant differences between them and ZM20 (Figure 3(a)). The PCoA of the soil fungi showed that the composition and structure of the SM3 and YM16 soil fungi overlapped, and their microbial community structure was similar to some degree. Furthermore, PC3 and PC2 were considered the significant factors leading to the difference in fungal communities between ZM9023 and the resistant varieties, while PC1 and PC3 were considered the key factors causing the difference in fungal communities between ZM20 and the resistant varieties (Figure 3(b)). Kopecky et al. found differences in the diversity, structure, and composition of soil bacterial communities between resistant and sensitive species [32][33][34]. Differences in the rhizosphere soil microorganisms of different wheat cultivars may be caused by different plant genotypes, which may affect the composition and abundance of annual plant microbial communities. The study of El Arab et al. showed that, under controlled conditions, the population structure of root-zone soil microorganisms differed between two wheat genotypes [35]. Genotypic effects were also found in the rhizosphere soil microbial community composition of different soybean varieties [36]. The composition, diversity, and abundance of bacteria and fungi in the rhizosphere of chickpea with different genotypes were also significantly different [37]. At different stages of potato development, different varieties also affect the abundance of the rhizosphere microflora [38]. The most abundant bacterial taxon in SM3, YM16, and ZM9023 was Subgroup 6, and in the highly resistant variety SM3, Alphaproteobacteria dominated at the phylum taxonomic level. Proteobacteria are involved in the biosynthesis of plant hormones and polyamines, phosphate dissolution, and nitrogen fixation [9,20,22]. Among the top 10 taxa at the genus level, these were followed by Nitrosomonadaceae, Gemmatimonadaceae, Anaerolineaceae, Sphingomonas, Haliangium, and Nitrospira, but most of them were nonculturable bacteria (Figure 3(c)). The enrichment of Nitrosomonadaceae, the restoration of the rhizosphere environment [57], and nitrification and the use of soil micronutrients by plants [58,59] are discussed by Lovley et al.
Microbial abundance in the highly resistant variety was lower than in the other varieties, whereas in the highly susceptible variety ZM20, uncultured Latescibacteria, uncultured Gemmatimonadaceae, Haliangium, Sphingomonas, and Gemmatimonas were more abundant than in the other varieties (Figure 4(a)). The top 10 fungal phyla included Basidiomycota, Ascomycota, Mortierellomycota, Glomeromycota, Olpidiomycota, Cercozoa, Chytridiomycota, Kickxellomycota, and Blastocladiomycota. Sordariomycetes was the dominant class. Ascomycota show positive effects in facilitating plant nitrogen assimilation and participate in the decomposition of plant residues [60]. Basidiomycota also rapidly metabolize organic substrates in the rhizosphere soil, and their abundance is affected by the degradation of plant residues [61]. Agaricomycetes and Sordariomycetes dominated the rhizosphere fungi at the class level. The biodegradative capacity of Agaricomycetes has a profound effect on alleviating soil pollution by organic compounds [62]. Conclusions The α-diversity analysis showed that the bacterial diversity of resistant varieties was higher than that of susceptible varieties. The highest bacterial abundance was found in the moderately susceptible variety ZM9023, followed by SM3 and YM16, with the lowest abundance in the highly susceptible variety ZM20. The Shannon index of rhizosphere bacteria in resistant varieties was significantly higher than that in susceptible varieties. The rhizosphere fungal diversity of resistant varieties was lower than that of susceptible varieties, but their fungal abundance was higher than that of susceptible varieties (Figure 2(b)). We analyzed the differences in rhizosphere microorganisms among the wheat varieties via OTU-based analysis and the binary-Jaccard algorithm. The results showed that the resistant varieties SM3 and YM16 and the moderately susceptible variety ZM9023 had similar microbial community structure, while that of the highly susceptible variety ZM20 was significantly different (Figures 4(a) and 4(b)). The principal coordinate analysis (PCoA) of the microbial community showed that resistance of the varieties was associated with changes in the quantity and composition of the bacterial and fungal communities of the wheat rhizosphere. In this study, the differences in the rhizosphere microbial communities of wheat varieties with different resistance were analyzed. In the next step, we will isolate and identify the microorganisms in these soils to determine which microorganisms regulate the resistance to FHB in wheat. Figure 4: Heat map of species abundance (abundance > 0.05) at the taxonomic level of rhizosphere bacterial (a) and fungal (b) genera in different wheat varieties.
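As an illustration of the ordination step referred to above (PCoA on binary-Jaccard distances), the following is a minimal Python sketch using a randomly generated presence/absence OTU table as a stand-in for the real data. Sample names follow the four cultivars with three replicates each, but all values are purely synthetic.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Hypothetical presence/absence OTU table: rows = samples (three replicates for
# each of the four cultivars), columns = OTUs.
rng = np.random.default_rng(4)
samples = [f"{v}_{r}" for v in ("SM3", "YM16", "ZM9023", "ZM20") for r in (1, 2, 3)]
otu_table = rng.integers(0, 2, size=(len(samples), 300)).astype(bool)

# Binary Jaccard distances between samples.
d = squareform(pdist(otu_table, metric="jaccard"))

# Classical principal coordinates analysis: double-centre the squared distance
# matrix and eigendecompose it (negative eigenvalues, which can arise for
# non-Euclidean distances, are clipped to zero below).
n = d.shape[0]
centering = np.eye(n) - np.ones((n, n)) / n
b = -0.5 * centering @ (d ** 2) @ centering
eigvals, eigvecs = np.linalg.eigh(b)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
coords = eigvecs[:, :3] * np.sqrt(np.clip(eigvals[:3], 0, None))

positive = np.clip(eigvals, 0, None)
explained = positive / positive.sum()
print("Variance explained by PC1-PC3:", explained[:3].round(3))
for name, xyz in zip(samples, coords):
    print(name, xyz.round(3))
```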
The prediction of land use and land cover change and its impact on soil erosion and sedimentation in the Musi Hydropower-Plant catchment area in Bengkulu Province Abstract Introduction Land use and land cover (LULC) are essential elements for interpreting and monitoring the earth's surface phenomenon (Kokla et al., 2015). The land is the place where all human activities are being performed and the origins of goods for these operations (Briassoulis, 2020). According to Comber et al. (2015), land cover describes observable features on the earth's surface, such as soil, water, and vegetation, while land use emphasizes the intentions explaining land cover, including the changes made by anthropogenic activities. Meanwhile, the dynamic process of interaction between LULC can be represented by human activities, namely agriculture intensification and urbanization, or natural actions such as floods and landslides (Chaves et al., 2020). LULCC can be used to evaluate land resources degradation (Mujiyo et al., 2021), its impact on water yield and runoff (Wang et al., 2018), soil erosion (Obiahu and Elias, 2020;Millazo et al., 2022) and sedimentation (Gwapedza et al., 2021). The subwatershed of Musi Hulu is upstream of the Musi River, located in Bengkulu Province, Indonesia. As the catchment area of Musi Hydropower-Plant, the subwatershed is susceptible to land degradation. The presence of LULCC (Sukisno et al., 2021) altered the soil erosion and sedimentation, ensuing a disturbance in the reservoir function (Amri et al., 2014). Soil erosion and sedimentation are detrimental impacts of LULCC. The conversion of forest cover to agricultural land results in increasingly eroded soil. The easily eroded soil contributed to the high potential of soil loss and increasing sedimentation. Kidane et al. (2019) informed that the dynamic of LULCC contributed to the increasing soil erosion and sedimentation in Ethiopia. Yan et al. (2018) described that soil erosion in Loss Plateau, China, decreased in the order of residential area, farmland, grassland, and forest. Meanwhile, Muddarisna et al. (2021) stated that the leaves and roots of vegetation are the most important part in controlling soil loss. The increasing soil erosion and sedimentation rates influenced the functionality and lifetime of reservoirs (Atulley et al., 2022;Patro et al., 2022). Generally, LULCC were difficult to access, time-consuming and expensive. The growth of remote sensing and geographical information system has provided the tools and methodology for LULCC analysis. LULCC are easy to analyze and simpler and more reliable (Dangulla et al., 2020). Integration of these technologies is also very helpful in the analysis of the impact of LULCC on soil erosion and sedimentation (Ejegu and Yegizaw, 2021;Endalew and Biru, 2022;Jothimani et al., 2022). This study aimed to predict land use and land cover change and its impact on soil erosion and sedimentation in the Musi Hydropower-Plant catchment area. This study was carried out to (1) classify land use and land cover in 1993, 2006, and 2019, (2) predict land use land cover change in 2032, and (3) estimate soil erosion and sedimentation rate in 1993, 2006, 2019, and 2032. Research location The Musi Hydropower-Plant catchment area is positioned within 102°22'18.98" to 102°38'38.93" East Longitude and 3°6'28.873" to 3°33'57.44" South Latitude and located on the south part of Sumatera Island as shown in Figure 1. 
The catchment area extended into five districts, Rejang Lebong, Kepahiang, Center of Bengkulu, North of Bengkulu and Lebong Regency, Bengkulu Province, Indonesia, with a total area 60,616.4 ha. The Musi River flows from the upland area to the main river and ends on the east coast of Sumatera Island. However, the establishment of the power plant in 2006 modified the flow. After flowing from the catchment area to the reservoir, most of the water used to generate the power plant spill out to Lemau River and ends up on the west coast of Sumatera Island, as shown in Figure 1. The average rainfall in the area is very high, with an annual value of more than 2,500 mm/year. The highly monthly rainfall is recorded in December, while July has the lowest. Rainfall data were collected from the Meteorological, Climatological, and Geophysical Agency of Pulau Baai Bengkulu, Indonesia. LULC change analysis Satellite image taken by Landsat-5 (1993 and2006) and Landsat-8 (2019) was classified into eight types of land use and land cover, namely built-up area, forest cover, water bodies, paddy fields, dry land agriculture, mixed dry land agriculture, bare soil/bare ground, and dry shrub. The satellite image was downloaded from www.earthexplorer.usgs.gov. Land use and land cover type are classified with the supervised classification (Ma et al., 2017) method in Arcgis 10.8.2. The accuracy classification was validated by verifying a fixed number of locations on the map and identifying those areas on the fields. The prediction of LULCC used Land Change Modeler (LCM) model on IDRISI Terrset. The LCM approach integrates the Multilayer-Perceptron (MLP) with the Markov-Chain Method to model land transition probabilities using historical land use and land cover data and other geospatial datasets (Dangula et al., 2020). Land change prediction in LCM is a stepwise process from change analysis, transition potential modeling, and change prediction. In this research, LULCC 1993-2006 was used to predict LULC 2019. The predicted LULC 2019 is validated with the classified LULC 2019. When the Kappa coefficient >80%, the prediction continued to LULC 2032. The potential drivers causing land use change, such as elevation, slope, population density, and distance to roads, rivers, as well as settlements, were analyzed with Cramer's V test. A Cramer's V 0.15 suggests that the explanation of the variable is good. Prediction of soil erosion Soil erosion is estimated with the general equation of the Revised Universal Soil Loss Equation, which is popularly applied in most studies (Borrelli et al., 2021;Pandey et al., 2021). The RUSLE estimates the soil loss by multiplication of five erosion factors, including rainfall erosivity index, soil erodibility index, slopelength steepness index, cover management, and supporting practice. A is annual mean soil loss (t/ha/year), R is the Rainfall factor, K is the Soil erodibility factor, LS is slope length and steepness index, C is the covermanagement factor, and P is the support practice factor. Rainfall has a crucial contribution to soil erosion and sedimentation processes. Rainfall erosivity was determined using the Bols equation as follows: R is rainfall erosivity, Rm is monthly rainfall erosivity, Soil erodibility indicates the vulnerability of a soil type to disaggregate and transport within intense rainfall. The value range of K-factor is 0 to 1. 
The values close to 0 are the lowest vulnerable, the value range 0.2 to 0.4 is moderate, and values ≥0.4 are the majority vulnerable (Kulimushi et al., 2021). The equation of soil erodibility is: K is soil erodibility index, M is a result of function (% silt + % very fine sand) (100 -% clay), a is % soil organic matter, b is soil structure class, and c is soil permeability class. The soil erodibility factor shows that the average of the K factor is 0.18, with minimum and maximum values of 0.04, and 0.36. The K factor is presented in Figure 5. Topography has a significant impact on soil erosion. The slope-length steepness factor was estimated with the Wischmeier and Smith equation as follows: RESEARCH LOCATION LS is the slope-length steepness index. X is the cell resolution of the DEM. S is the slope, derived from the DEM. The variable m is the slope contingent variable, with the value varied from 0.2-0.5, subjected to the slope, which is 0.5 for ≥5%, 0.4 for slopes 3-5% and 0.3 for slopes 1-3%, and 0.2 for slopes <1.0%. The average value of the slope-length steepness index is 0.41, with minimum and maximum values 0 and 29, as presented in Figure 6. Vegetation cover is crucial in controlling soil erosion and sediment yields (Gwapedza et al., 2021). It can also be used to estimate soil erosion vulnerability in the past, present, and future. The value of the C-factor ranges from nearly 0-1 (Kulimushi et al., 2021). The value of C-factor in the catchment area varies from 0 to 1. The values of the C-factor are derived from land cover, as shown in Table 1. The supporting practice factor describes the ratio of soil loss resulting from land with conservation practices such as contouring and terracing to the soil loss resulting from land with straight-row cultivation up and down the slope. It is the most important process to prevent and control soil erosion (Pandey et al., 2021). The value of P-factor in the catchment area varies from 0-1, with an average value of 0.299 in 1993, 0.304 in 2006, 0.309 in 2019, and 0.312 in 2032. The vulnerability of soil erosion was reclassified into five severity groups, namely very light (0-15 t/ha/year), light (15-60 t/ha/year), medium (60-180 t/ha/year), heavy (180-480 t/ha/year), and very heavy (>480 t/ha/year). The prediction of the soil erosion rate in 2032 was based on the soil erosion factors with the assumption that except C-factor are stable (Millazo et al., 2022). Sediment delivery ratio The prediction of sediment rates was calculated based on the Sediment Delivery Ratio Model. The formula is: with SDR is the sediment delivery ratio, while A is the width of the catchment area (ha). Land use and land cover 1993-2019 The overall classification accuracy of the LULC was 96%, 91%, and 92% for 1993, 2006, and 2019 respectively. Since an accuracy of more than 85% is recommended, this indicates that the LULC map is acceptable. Similarly, the overall Kappa coefficient was 0.95, 0.89, and 0.91 for 1993, 2006, and 2019 respectively, as shown in Table 2. This indicated that the classified LULC map has a strong agreement with the ground reality (Foody, 2020 The annual land use type comparison shows the LULCC over time. The LULC in 1993 shows that mixed dry land agriculture dominated the area (55.3%), followed by forest cover (30.7%), paddy fields (6.7%), dry land agriculture (3.5%), dry shrub (1.5%), built-up area (1.3%), bare soil and water bodies ( Table 3). 
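The soil-loss estimate described in the methods is a cell-by-cell product of the five RUSLE factors, followed by severity classification and a sediment-delivery step. The sketch below strings these operations together on a tiny synthetic raster. The factor grids are invented placeholders; the product form A = R · K · LS · C · P is the standard RUSLE formulation assumed here (the equation images are not reproduced in the extracted text), while the severity breakpoints, catchment area, and SDR = 0.015 follow the values quoted in this paper.

```python
# Illustrative RUSLE chain: A = R * K * LS * C * P per cell, severity classes,
# then sediment yield = SDR * gross erosion. Raster values are invented placeholders.
import numpy as np

rng = np.random.default_rng(42)
shape = (4, 5)                                   # tiny synthetic raster
R  = np.full(shape, 2283.0)                      # rainfall erosivity (MJ mm/ha/year, e.g. the reported 2019 value)
K  = rng.uniform(0.04, 0.36, shape)              # soil erodibility (reported min/max range)
LS = rng.uniform(0.0, 29.0, shape)               # slope length-steepness (reported min/max range)
C  = rng.uniform(0.0, 1.0, shape)                # cover management, derived from LULC
P  = rng.uniform(0.0, 1.0, shape)                # support practice

A = R * K * LS * C * P                           # annual soil loss per cell (t/ha/year)

# Severity classes: very light (0-15), light (15-60), medium (60-180), heavy (180-480), very heavy (>480).
bins   = np.array([15.0, 60.0, 180.0, 480.0])
labels = np.array(["very light", "light", "medium", "heavy", "very heavy"])
classes = labels[np.searchsorted(bins, A, side="right")]
for lab in labels:
    print(f"{lab:>10s}: {100.0 * (classes == lab).mean():5.1f}% of cells")

# Sediment yield from gross erosion via the sediment delivery ratio.
area_ha = 60_616.4                               # catchment area from the text
sdr = 0.015                                      # sediment delivery ratio quoted in the text
gross_erosion_t = float(A.mean()) * area_ha      # t/year, illustrative
print("mean soil loss (t/ha/year):", round(float(A.mean()), 1))
print("sediment yield (t/year):   ", round(sdr * gross_erosion_t))
```

Replacing the synthetic grids with the actual factor rasters reproduces the workflow that yields the annual averages and severity shares reported in the results.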
The LULC in 2006 showed that mixed dry land agriculture and forest area still dominated the area, while dry land agriculture exceeded the paddy fields. The built-up area increases significantly, from 818 ha to 1,412 ha (Table 3). The area of mixed dry land agriculture and forest area still dominated the area in 2019, with the wide area around 56.6% and 25%, followed by dry land agriculture (9.5%), built-up area (3.7%), and paddy fields (Table 3). Water bodies, bare soil, and dry shrub have a wide area of less than 1%. Land use and land cover changes The result of an analysis of LULCC in the catchment area showed the degradation of LULC. This is indicated by the decreasing essential land use type, where forest area was found to be very important for ecohydrological services, while paddy fields are crucial for food safety and security. The decreasing forest area and paddy fields will threaten the environmental services of the area. Driving factors The five potential driving factors of LULCC are elevation, slope, population density, and distance to roads, rivers, as well as settlements. The Cramer's V indicated that elevation, slope, population density, and distance to roads and settlements have significant correlations to land use and land cover change in the catchment area, shown by the Cramer's V of more than 0.15. The distance to rivers has Cramer's V less than 0.15 (Table 5). Tables 6 and 7. The transition probability matrix of prediction land use and land cover in 2019 shows that the built-up area and water bodies are settled and did not change to the other land use type. The forest area is changed to mixed dry land agriculture, bare soil, and dry shrub. Meanwhile, paddy fields tend to change to the builtup area, water bodies, dry land agriculture and mixed dry land agriculture. It was also discovered that mixed dry land agriculture could change to the built-up area, dry land agriculture, and bare land. The bare soil tends to convert to dry and mixed-dry land agriculture. The dry shrub tends to convert to bare land and dry and mixed-dry land agriculture. Prediction accuracy The predicted LULC in 2019 has a K standard of 0.87, with a total agreement of 90.81%. This indicates that the prediction of LULC is acceptable and reliable for predicting the next LULC in 2032. The comparison of LULC prediction in 2019 with the actual value showed that the overall predicted land use type is accurate by more than 80%. The highest accuracy is the built-up area and mixed-dry agricultural land, with a value of 99%, while dry agricultural land is the lowest, with 83%, as shown in Table 8. Predicted LULC in 2032 The predicted LULC in 2032 shows that mixed dry land agriculture still dominated the area, with a coverage of 56%. The forest area coverage is 21.3%, followed by dry land agriculture, the built-up area, dry shrub, bare soil, and water bodies, with coverage at 12.8%, 5.3%, 0.7%, 0.6%, and 0.3%, respectively. The predicted LULC in 2032 is presented in Figure 10. Soil erosion rate 1993-2019 The prediction of soil erosion rates shows that LULCC contributed to the increase in soil erosion rates, from 75 t/ha/year in 1993 to 93 t/ha/year in 2006 and 113 t/ha/year in 2019, which continues to increase until 122 t/ha/year in 2032, as shown in Figure 11. The Figures 12, 13, and 14, while the every class is in Table 9. 
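The Markov component of the Land Change Modeler described in the methods essentially multiplies the current land-use area vector by a transition probability matrix for each time step. The toy example below shows only that mechanic for three classes; the matrix entries and the agricultural area are invented and are not the study's Tables 6 and 7.

```python
# Toy Markov-chain projection of land-use areas; numbers are invented, not Tables 6-7.
import numpy as np

classes = ["forest", "mixed dryland agriculture", "built-up"]
area_2019 = np.array([15_153.0, 34_000.0, 2_229.0])   # ha; forest and built-up from the text, middle value made up

# Row i gives the probability that class i converts to each class over one 13-year step.
P = np.array([
    [0.85, 0.14, 0.01],    # forest mostly persists, some conversion to agriculture
    [0.00, 0.95, 0.05],    # agriculture persists or becomes built-up
    [0.00, 0.00, 1.00],    # built-up area is assumed not to revert
])
assert np.allclose(P.sum(axis=1), 1.0)            # each row must be a probability distribution

area_2032 = area_2019 @ P                         # projected areas one step ahead
for name, a in zip(classes, area_2032):
    print(f"{name:>26s}: {a:10.1f} ha")
```

In the actual model, the transition probabilities are learned from the 1993-2006 change map (with the MLP supplying spatial allocation), and the projection is validated against the classified 2019 map before being extended to 2032.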
The soil erosion classes showed that the very heavy class increased from 4.1% in 1993 to 4.8% in 2006 and 5.9% in 2019, while the very light class decreased from 86.7% to 85.7% and 84.9% over the same years. These phenomena indicate that LULCC contributes to the change in soil erosion rates.

Figure 11. The average soil erosion rates.

Soil erosion rates in 2032

As LULCC continues, the predicted soil erosion rate in 2032 averages 122 t/ha/year and varies from 0 to 21,437 t/ha/year. The results showed that the dominant class was very light, followed by very heavy, heavy, medium, and light, at 84.3%, 6.3%, 4.6%, 3.2%, and 1.6%, respectively. Increasing soil erosion rates will contribute to high sedimentation in the reservoir and interfere with the operation of the hydropower plant. The classification of the soil erosion severity classes for 2032 is presented in Table 10 and Figure 15.

Figure 15. The soil erosion severity map of 2032.

Predicted sediment yield

The prediction of sediment yield shows that the increasing soil erosion rates are followed by increasing sediment yield. The sediment yield was 68,048 t/year in 1993, increased to 84,533 t/year in 2006 and 103,190 t/year in 2019, and is predicted to reach 111,028 t/year in 2032. Generally, sediment yield is the amount of sediment that reaches the water bodies, obtained as a function of the Sediment Delivery Ratio (0.015). The sediment will interfere with the operation of the power plant by reducing the supply of water to the generator. The predicted sediment yields are presented in Figure 16.

Figure 16. The predicted sediment yields.

Discussion

The sustainability of the Musi Hydropower-Plant catchment area is affected by the condition of the LULC. Therefore, the analysis was performed to show the dynamics of LULC in the catchment area. The results showed that in 1993 mixed dryland agriculture and forest cover dominated, with 55.5% and 30.7%, respectively, followed by paddy fields, dryland agriculture, shrubs, built-up area, bare soil, and water bodies, with 6.7%, 3.5%, 1.5%, 1.3%, 0.8%, and 0.2%, respectively. In 2019, mixed dryland agriculture and forest cover still dominated the area. Mixed dryland agriculture slightly increased to 56.6%, while forest cover decreased markedly to 25.0%. Paddy fields also decreased, to 3.3%, while the built-up area and dryland agriculture expanded to 3.7% and 9.5%, respectively. The increase in mixed dryland agriculture, dryland agriculture, and built-up area, together with the decrease in forest cover and paddy fields, indicates intense human activity in the catchment area. Hu et al. (2021) stated that changes in LULC increasingly affect land properties and the provision of ecosystem services. The decreasing forest area indicates that the Musi Hydropower-Plant catchment area is susceptible to degradation. Forest cover is crucial in controlling ecohydrological processes. Based on the transition probability matrix, forest cover tends to convert to dryland and mixed dryland agriculture, bare soil, and shrubs, which means that deforestation is closely related to agricultural activity. A decreasing forest area implies increasing cultivated land. There are no large plantations such as oil palm or rubber in the area; forest was converted to small-scale agriculture, with an average of 1-2 ha per farmer. Austin et al.
(2019) reported that small-scale agriculture and small-scale plantation contributed to the national scale deforestation in Indonesia from 2001 to 2016, with the value of 15% and 7%, respectively. It means that deforestation in the catchment area also contributed to the national scale of deforestation. The forest area is predicted to decrease in 2032, with a value of 21.7%. Deforestation in the Musi Hydropower-plant catchment area emerges as a crucial issue because of its impact on the environmental problem. Deforestation in this area is closely related to the shifting cultivation culture. With the limited land availability, people tend to penetrate the primary forest area. Coincident to Susanto et al. (2018), the asymmetric deforestation concept is also the main driving factor of deforestation in this area. Some people claim that forest is a common pool resource and disagree with the borderline of protected forest area stated by the government. Disagreement about the borderline of protected forest areas implies the rate of deforestation. Paddy fields decreased significantly from 6.7% in 1993 to 3.3% in 2019. Paddy fields convert to the built-up area, bare soil, and dry and mixed-dryland agriculture. Decreasing paddy fields threaten food safety and security. Permanently conversion of paddy fields to the built-up area as the impact of urbanization reduces land productivity. Urbanization will convert strategic fields, especially paddy fields (Rustiadi et al., 2020). Paddy fields predicted still decrease to 2.9% in 2032. In the other site, decreasing forest cover and paddy fields implies increasing the built-up area and dry and mixed-dryland agriculture. The built-up area and dryland agriculture are predicted to increase in 2032, while mixed dryland agriculture will slightly decrease in 2032. Increasing the built-up area is the implication of increasing population. People need a place to stay and also need materials for their activities. Land use and land cover change are a consequence of the competition for land use. LULCC is proceeded by the influence of various aspects. Based on Cramer's Value, population density is the driving factor that significantly influences the LULCC, with a value of 0.4175. The increasing population is followed by increasing demand for land, indicated by the high growth of the built-up area, from 818 ha in 1993 to 2,229 ha in 2019. The next factor is the distance to settlements, with a value of 0.3692, followed by distance to roads, elevation, and slope, with a value of 0.3105, 0.3016, and 0.2033, respectively. Distance to settlements and roads is related to the accessibility of objects to change, while elevation and slope are related to the suitability of land use change. Elevation, slope, distance to roads, distance to settlements, and population density are the factors suggested as driving factors of LULC (Allan et al., 2022). The phenomenon of LULCC in the catchment area can be used as a baseline for the decision maker to make better land use planning. LULCC is known to have positive and negative impacts. Based on the predicted map, the positive outcomes can be optimized, while the detrimental impact can be reduced. Decreasing forest area and paddy fields in 2032 is good information for decision-makers. 
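The Cramer's V values quoted above for the driving factors come from a chi-square statistic over a contingency table that cross-tabulates a candidate driver against observed land-use change. A minimal sketch of that computation is shown below; the contingency table is invented and does not reproduce the study's driver analysis.

```python
# Minimal sketch of Cramer's V for a driver-vs-change contingency table.
# The counts below (driver class x unchanged/changed cells) are invented.
import math
import numpy as np

observed = np.array([
    [120,  30],    # e.g. low population density: unchanged / changed cells
    [ 80,  70],    # medium density
    [ 40, 110],    # high density
], dtype=float)

row = observed.sum(axis=1, keepdims=True)
col = observed.sum(axis=0, keepdims=True)
n = observed.sum()
expected = row @ col / n                          # expected counts under independence
chi2 = ((observed - expected) ** 2 / expected).sum()

k = min(observed.shape) - 1
cramers_v = math.sqrt(chi2 / (n * k))
print(f"chi2 = {chi2:.1f}, Cramer's V = {cramers_v:.3f}")
```

Values above the 0.15 cut-off used in the methods would mark the driver as a useful explanatory variable for the transition-potential model.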
Deforestation lessens by implementing the regulation of the protected area, payment of environmental services, developing a community-based forest, and other programs that support the sustainability of forest areas and improve human well-being (Novick et al., 2022). Meanwhile, decreasing paddy fields should be avoided with the implementation of the regulation of Sustainable Food Agriculture Land ("LP2B-Lahan Pangan Pertanian Berkelanjutan"). LULCC in the catchment area contributed to the increasing soil erosion and sedimentation rates. The soil erosion rate in 1993 was 75 t/ha/year, which increased to 93 t/ha/year in 2006 and 113 t/ha/year in 2019. The soil erosion rates are predicted to continue to increase in 2032 with a value of 122 t/ha/year. Because of the limited data, the soil erosion factor assumed that soil erodibility (K) and slope-length steepness (LS) are constant, while rainfall erosivity (R) and cover management (C) and land management or support practice (P) are different. The rainfall erosivity index showed an increasing value from 1,795 MJmm/ha/year in 1993 to 1,956 MJmm/ha/year in 2006 and 2,283 MJmm/ha/year in 2019. Meanwhile, the C factor and P factor are different each year, derived from LULC. Increasing the built-up area, dry and mixed-dryland agriculture implies the increasing value of the C factor. It means that LULCC in the catchment area contributed to the increasing soil erosion rates. This phenomenon is similar to the contribution of global agricultural activities on the increasing total soil erosion and soil erosion rates in the tropical region was 3.2 Gt and 0.22 Mg/ha/year (Hu et al., 2021), the impact of LULC dynamic on soil erosion and sedimentation in Ethiopia (Kidane et al., 2019), and effect of LULC on soil erosion rates in Southern Spain (Millazo et al., 2022). As same as with the contribution of LULCC on soil erosion rates, the sediment yields increased from 68,048 t/year in 1993 to 84,533 t/year in 2006 and 103,190 t/year in 2019. Soil erosion and sedimentation are predicted to continue to increase in 2032, with a value of 122 t/ha/year and 111,028 t/year, respectively. The soil erosion factors in this prediction assumed constant except for C and P factors. It means that this prediction purely showed the contribution of LULC. The predicted value of soil erosion and sedimentation in 2032 is considered an instrument for managing the environment, especially in the context of integrating the outcomes of environmental effects and human well-being (Novick et al., 2022). The data and information about land use and land cover changes in the Musi Hydropower-Plant Catchment area and their impact on soil erosion and sedimentation are useful for decision-makers and policymakers to make better management planning. Conclusion The Musi Hydropower-Plant catchment area is vulnerable to degradation, as shown by the decreasing forest cover and paddy fields. The forest cover decreased from 18,580 ha in 1993 to 15,153 ha in 2019. At the same period, paddy fields decreased from 4,044 ha to 2,019 ha. The forest area and paddy fields are predicted to continue to decrease, becoming 12,907 ha and 1,729 ha in 2032. In contrast, the built-up area and dry land agriculture increased significantly, from 818 ha and 2,116 ha in 1993 to 2,229 ha and 5,778 ha in 2019, respectively. The increase of built-up area and dry land agriculture will continue to increase and reach 3,240 ha and 7,774 ha in 2032. 
The LULCC contributed to the increase in soil erosion rates from 75 t/ha/year in 1993 to 93 t/ha/year in 2006 and 113 t/ha/year in 2019, and the soil erosion rate is predicted to continue increasing, reaching 122 t/ha/year in 2032. The increasing soil erosion rates contributed to the increasing sediment yield, from 68,048 t/year in 1993 to 84,533 t/year in 2006 and 103,190 t/year in 2019; the sediment yield is predicted to continue increasing, reaching 111,028 t/year in 2032.
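The changes summarised in this conclusion can also be expressed as average annual rates, which makes the observed 1993-2019 trend directly comparable with the 2019-2032 projection. The short sketch below does that arithmetic for the forest-cover figures quoted in the conclusion; the same pattern applies to the other land-use classes.

```python
# Average annual change rates for forest cover, using the areas quoted in the conclusion.
areas_ha = {1993: 18_580.0, 2019: 15_153.0, 2032: 12_907.0}   # observed, observed, predicted

def annual_rate(a0, a1, years):
    """Average linear change (ha/year) and compound percentage rate (%/year)."""
    linear = (a1 - a0) / years
    compound = ((a1 / a0) ** (1.0 / years) - 1.0) * 100.0
    return linear, compound

for (y0, y1) in [(1993, 2019), (2019, 2032)]:
    lin, comp = annual_rate(areas_ha[y0], areas_ha[y1], y1 - y0)
    print(f"{y0}-{y1}: {lin:+7.1f} ha/year  ({comp:+.2f}% per year)")
```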
Is CDF large $p_T$ anomaly from virtual SUSY threshold effects? Talk is given at the YITP International Workshop -- `Recent Developments in QCD and Hadron Physics'. Recent CDF data of the inclusive jet cross section shows anomalous deviation around large transverse momentum $p_{T}(j)\approx 200 \sim 350$ GeV. Is it possible to interpret the anomaly in terms of virtual SUSY effects. The answer is `NO', because we find that the virtual SUSY loop interference effects are too small to explain the CDF data. I. INTRODUCTION The CDF [1] and D0 [2] collaborations at Tevatron Collider have recently reported data for the inclusive jet cross section in pp collisions at √ s = 1800 GeV. Let us recapitulate some particulars of this data. There are basically two logical possibilities for explanations of the CDF data on inclusive jet production cross sections: (i) the parton distribution functions determined at low Q 2 region may not be accurate enough to be applicable to the high p T region with p T > 200 GeV, or (ii) there are some new physics around the electroweak scale. We give a brief review of these two possibilities in the following. The most interesting possibility is that jet measurements at hadron colliders may be sensitive to quantum corrections due to virtual SUSY particles [3,4,5]. The purpose of this note is to concentrate on this scenario. The layout of this paper is as follows. In next section we discuss our calculation of the SUSY virtual threshold effects. The final section contains discussions and conclusions of our numerical results. II. VIRTUAL SUSY THRESHOLD EFFECTS It is well-known that SUSY particles [gluinos, squarks] decrease or slow down the rate of fall of α s (µ) for large scale µ. "Large" means far above the threshold. At the one loop level the evolution equation for α s (µ) can be written as In the SM, b 3 is given by whereas in MSSM model one has where n f [ñ f ] is the number of quark [squark] flavors that are active. The contribution '−2' is the gluino contribution [it is assumed that the gluino is active in this case]. In Ref. [4] the issue of the effect of high-mass thresholds due to gluinos, squarks and other new heavy quarks on the evolution of α s was considered. The corrections to α s were found to be appreciable, this in turn means a significant increase in the transverse momentum dependence of jet production at the Tevatron. However, as noted in Ref. [5], these authors [4] do not include the effect of qqg Yukawa interactions and hence one cannot take their estimates for the superpartners of ordinary matter as final. In Ref. [5] the effect of Yukawa couplings was included. The results in [5] indicate that the CDF data cannot be explained by a mass threshold effect in the MSSM, as the calculated result is not only small but of the wrong sign. In a similar but more detailed analysis, the authors of Ref. [3], working in the context of MSSM, consider the virtual one-loop corrections to the parton-level subprocesses qq → qq, qq → q ′q′ , qq ′ → qq ′ , qq → gg and qg → qg, which are expected to dominate the large p T cross sections at the Tevatron energy. The purpose of this talk is to give our results of incorporating the one-loop radiative corrections into the running of α s , the dressing-up of the parton distribution functions, and finally convoluting the relevant subprocess cross sections with the SUSY dressed-up PDF's. 
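The slower running of $\alpha_s$ above the sparticle thresholds can be illustrated numerically with the standard one-loop equation $\mu\, d\alpha_s/d\mu = -(b_3/2\pi)\,\alpha_s^2$. The sketch below assumes the familiar coefficient $b_3 = 11 - 2n_f/3$ in the SM and subtracts 2 for an active gluino and $\tilde n_f/3$ for $\tilde n_f$ active squark flavours (left and right); since the paper's own equations are not reproduced in the extracted text, these conventions are an assumption, and the crude step-function threshold treatment here is only a toy, not the paper's matching procedure.

```python
# Crude illustration of how active gluinos/squarks slow the fall of alpha_s(mu).
# One-loop running, mu d(alpha_s)/d(mu) = -(b3/(2*pi)) * alpha_s**2, integrated in
# log(mu) with an Euler step; coefficients and threshold treatment are assumptions.
import math

def b3(mu, m_gluino=220.0, m_squark=220.0, n_f=5):
    b = 11.0 - 2.0 * n_f / 3.0            # SM quarks (top threshold ignored for simplicity)
    if mu > m_gluino:
        b -= 2.0                          # active gluino contribution
    if mu > m_squark:
        b -= n_f / 3.0                    # n_f active squark flavours (L and R)
    return b

def run_alpha_s(mu_target, alpha_mz=0.118, mz=91.2, steps=20000, **thresholds):
    a, t = alpha_mz, math.log(mz)
    dt = (math.log(mu_target) - t) / steps
    for _ in range(steps):
        a += -b3(math.exp(t), **thresholds) / (2.0 * math.pi) * a * a * dt
        t += dt
    return a

for mu in (200.0, 400.0, 800.0):
    sm   = run_alpha_s(mu, m_gluino=1e9, m_squark=1e9)   # thresholds pushed away -> SM-like running
    susy = run_alpha_s(mu)                                # gluino/squark active above 220 GeV
    print(f"mu = {mu:5.0f} GeV: alpha_s(SM-like) = {sm:.4f}, with sparticles = {susy:.4f}")
```

Above the assumed 220 GeV threshold the effective coefficient shrinks, so $\alpha_s$ falls more slowly and stays slightly larger than in the SM, which is the qualitative effect exploited in the analyses discussed above.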
We note that one should take into account sparticle effects on the parton structure functions at energies sufficiently far above the threshold, while this effect can be ignored in the threshold region. At hadron colliders like the Tevatron, what is measured is the pp cross section, not the individual subprocess cross sections. So, in order to determine the effect of the subprocesses on the p_T cross section, one must perform a convolution of the cross section of each subprocess with the corresponding PDF's. We find that in the process of convolution with the PDF's, the "dips and peaks" in the various subprocesses [3] are much reduced.

III. DISCUSSIONS AND CONCLUSIONS

For our numerical calculation, we implement various lower bounds on squark and gluino masses depending on parameters in the MSSM. For example, the D0 group [6] searched for events with large missing p_T and three or more jets, and observed no such events above the level expected in the SM. This puts some limits on the squark and gluino masses assuming short-lived gluinos: or mg = mq > 212 GeV. The CDF group is currently analyzing its data; the preliminary results are similar to the D0 results, with a slight increase in the sparticle mass bounds. In the subsequent numerical analysis, we choose three sets of (mg, mq), which we shall refer to as Cases I, II, and III, respectively: (mg, mq) = (220 GeV, 220 GeV), (150 GeV, ∞), and (150 GeV, 150 GeV). Case III is only of academic interest if the limit given in Eq. (5) is valid in reality. The large reduction of the "peaks and dips" in going from the subprocess to the process level is due to the t-channel "dilution" effect. This can be shown by simply not including the t-channel subprocesses' contribution. When this is done [see Fig. 2], the reduction in the "peaks and dips" is not so large. In summary, in the MSSM the p_T distributions do not differ very much from those of the SM except for possible threshold effects (∼1%) through loop corrections. In actual experiments, the jet resolution will in general smear out any narrow resonance structures (which may be the case for long-lived gluinos), leading to a broad resonance structure, and therefore it looks impossible to detect SUSY particles through this kind of indirect virtual threshold effect. It has been reported [7] that the apparent discrepancy between the CDF data and theory may be explained by uncertainties in the non-perturbative parton distributions, in particular the gluon distribution. Our main conclusion seems to be on the right track in view of this global analysis of the parton distribution functions. For more details on this talk, please see Ref. [8].

Fig. 1. Deviation from the SM due to the SUSY contribution to dσ/dp_T(j) versus p_T(j), shown for (mg, mq) = (150 GeV, 150 GeV) and (220 GeV, 220 GeV) at √s = 1800 GeV, |η| < 0.7.

Fig. 2. Deviation from the SM due to the SUSY contribution to dσ/dp_T(j) versus p_T(j), when only the subprocesses qq → q′q′ and qq → gg are included.
The Genia Event Extraction Shared Task, 2013 Edition - Overview The Genia Event Extraction task is organized for the third time, in BioNLP Shared Task 2013. Toward knowledge based construction, the task is modified in a number of points. As the final results, it received 12 submissions, among which 2 were withdrawn from the final report. This paper presents the task setting, data sets, and the final results with discussion for possible future directions. Introduction Among various resources of life science, literature is regarded as one of the most important types of knowledge base. Nevertheless, lack of explicit structure in natural language texts prevents computer systems from accessing fine-grained information written in literature. BioNLP Shared Task (ST) series (Kim et al., 2009;Kim et al., 2011a) is one of the community-wide efforts to address the problem. Since its initial organization in 2009, BioNLP-ST series has published a number of finegrained information extraction (IE) tasks motivated for bioinformatics projects. Having solicited wide participation from the community of natural language processing, machine learning, and bioinformatics, it has contributed to the production of rich resources for fine-grained BioIE, e.g., TEES 1 (Björne and Salakoski, 2011), SBEP 2 (McClosky et al., 2011) and EVEX 3 (Van Landeghem et al., 2011). The Genia Event Extraction (GE) task is a seminal task of BioNLP-ST. It was first organized as the sole task of the initial 2009 edition of BioNLP-ST. The task was originally designed and implemented based on the Genia event corpus (Kim et al., 2008b) which represented domain knowledge around NFκB proteins. There were also some efforts to explore the possibility of literature mining for pathway construction (Kim et al., 2008a;Oda et al., 2008). The GE task was designed to make such an effort a community-driven one by sharing available resources, e.g., benchmark data sets, and evaluation tools, with the community. In its second edition (Kim et al., 2011b) organized in BioNLP-ST 2011 (Kim et al., 2011a), the data sets were extended to include full text articles. The data sets consisted of two collections. The abstract collection, that had come from the first edition, was used again to measure the progress of the community between 2009 and 2011 editions, and the full text collection, that was newly created, was used to measure the generalization of the technology to full text papers. In its third edition this year, while succeeding the fundamental characteristics from its previous editions, the GE task tries to evolve with the goal to make it a more "real" task toward knowledge base construction. The first design choice to address the goal is to construct the data sets fully with recent full papers, so that the extracted pieces of information can represent up-to-date knowledge of the domain. The abstract collection, that had been already used twice (in 2009 and 2011), is removed from official evaluation this time 4 . Second, GE task subsumes the coreference task which has long been considered critical for improvement of event extraction performance. It is implemented by providing coreference annotation in integration with event annotation in the data sets. The paper explains the task setting and data sets, presents the final results of participating systems, and discusses notable observations with conclusions. Table 1: Event types and their arguments for Genia Event Extraction task. The type of each filler entity is specified in parenthesis. 
Arguments that may be filled more than once per event are marked with "+", and optional arguments are with "?". Task setting This section explains the task setting of the 2013 edition of the GE task with a focus on changes to previous editions. For comprehensive explanation, readers are referred to Kim et al. (2009). The changes made to the task setting are threefolds, among which two are about event types to be extracted. Table 1 shows the event types and their arguments targeted in the 2013 edition. First, four new event types are added to the target of extraction; the Protein modification type and its three sub-types, Ubiquitination, Acetylation, Deacetylation. Second, The Protein modification types are modified to be directly linked to causal entities, which was only possible through Regulation events in previous editions. The modifications were made based on analysis on preliminary annotation during preparation of the data sets: in recent papers on NFκB, discussions on protein modification were observed with non-trivial frequency. However, in the end, it turned out that the influence of the above modifications was trivial in terms of the number of annotated instances in the final data sets, as shown in section 3, after filtering out events on nonindividual proteins, e.g., protein families, protein complexes. Third change made to the task setting is addition of coreference and part-of annotations to the data sets. It is to address the observation from 2009 edition that coreference structures and entity relations often hide the syntactic paths between event triggers and their arguments, restricting the performance of event extraction. In 2011, the Protein coreference task and Entity Relation were organized as sub-tasks, to explicitly address the problem, but this time, coreference and part-of annotations are integrated in the GE task, to encourage an integrative use of them for event extraction. Figure 1 shows an example of annotation with coreference and part-of annotations 5 . Note that the event representation in the figure is relation centric 6 , which is different from the event centric representation of the default BioNLP-ST format. The two representations are interchangeable, and the GE task provides data sets in both formats, together with an automatic converter between them. Below is the corresponding annotation in the BioNLP-ST format: In the example, the event trigger, binding, denotes four binding events, in which the four proteins, TRAF1, TRAF2, TRAF3, and TRAF6, bind to the protein, CD40, respectively, through the site, cytoplasmic tail. The links between the four Figure 1: Annotation example with coreferences and part-of relationship proteins and the event trigger are however very hard to find, without being bridged by the demonstrative noun phrase (NP), These proteins. In the case, if the link between the demonstrative NP, These proteins and its four antecedents, TRAF1, TRAF2, TRAF3, and TRAF6, can be somehow detected, the remaining link, between the demonstrative NP and the trigger, may be detected by their syntactic connection. A key point here is the different characteristics of the two step links: detecting the former is rather semantic or discoursal while the latter may be a more syntactic problem. Then, solving them using different processes would make a sense. To encourage an exploration into the hypothesis, the coreference annotation is provided in the training and development data sets. 
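The standoff annotation discussed above pairs text-bound triggers and entities (T lines) with event frames (E lines) that reference them by identifier. The small parser below is an illustrative sketch of how such BioNLP-ST-style lines can be read; the sample text and annotation lines are invented for illustration and are not the (omitted) annotation of Figure 1.

```python
# Illustrative parser for BioNLP-ST-style standoff lines (T = text-bound entities and
# triggers, E = events). The sample text and annotation lines are invented.
text = "p65 binds IkB"
sample = "\n".join([
    "T1\tProtein 0 3\tp65",
    "T2\tProtein 10 13\tIkB",
    "T3\tBinding 4 9\tbinds",
    "E1\tBinding:T3 Theme:T1 Theme2:T2",
])

def parse_standoff(lines):
    entities, events = {}, {}
    for line in lines.splitlines():
        fields = line.split("\t")
        ident = fields[0]
        if ident.startswith("T"):                      # text-bound: "Type start end<TAB>surface"
            etype, start, end = fields[1].split()
            entities[ident] = (etype, int(start), int(end), fields[2])
        elif ident.startswith("E"):                    # event: "Type:Trigger Role1:Arg1 ..."
            trigger, *args = fields[1].split()
            etype, trig_id = trigger.split(":")
            events[ident] = (etype, trig_id, [tuple(a.split(":")) for a in args])
    return entities, events

entities, events = parse_standoff(sample)
assert all(text[s:e] == surf for (_, s, e, surf) in entities.values())   # spans match the text
for eid, (etype, trig, args) in events.items():
    arg_str = ", ".join(f"{role}={entities[ref][3]}" for role, ref in args)
    print(f"{eid}: {etype} triggered by '{entities[trig][3]}' with {arg_str}")
```

Reading the annotation this way makes the event-centric structure explicit, which is the representation that the task's evaluation compares against system output.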
Based on the definition of event types, the entire task is divided into three sub-tasks addressing event extraction at different levels of specificity: Task 1. Core event extraction addresses the extraction of typed events together with their primary arguments. Task 2. Event enrichment addresses the extraction of secondary arguments that further specify the events extracted in Task 1. Task 3. Negation/Speculation detection addresses the detection of negations and speculations over the extracted events. For more detail of the subtasks, readers are referred to Kim et al. (2011b). Data Preparation As discussed in section 1, for the 2013 edition, the data sets are constructed fully with full text papers. Table 2 shows statistics of three data sets for training, development and test. The data sets consist of 34 full text papers from the Open Access subset of PubMed Central. The papers were retrieved using lexical variants of the term, "NFκB" as primary keyword, and "pathway" and "regulation" as secondary keywords. The retrieved papers were given to the annotators with higher priority The annotation to the all 34 papers were produced by the same annotators who also produced annotations for the previous editions of GE task. The annotated papers are divided into the training, development, and test data sets; 10, 10, and 14, respectively. Note that the size of the training data set is much smaller than previous editions, in terms of number of words and events, while the size of the development and test data sets are comparable to previous editions. It is the consequence of a design choice of the organizers with the notion that (1) relevant resources are substantially accumulated through last two editions, and that (2) therefore the importance of training data set may be reduced while the importance of development and test data sets needs to be kept. Instead, participants may utilize, for example, the abstract collection of the 2011 edition, of which the annotation was produced by the same annotators with almost same principles. As another example, the data sets of the EPI task also may be utilized for the newly added protein modification events. Table 3 shows the statistics of annotated event types in different sections of the full papers in the data sets. For the analysis, the sections are classified to five groups as follows: • The TIAB group includes the titles and abstracts. In the GE-2011 data sets, the corresponding files match the pattern, PMC-* TIAB * .txt. Participation The GE task received final submissions from 12 teams, among which 2 were withdrawn from final report. Table 4 summarizes the teams. Unfortunately, the subtasks 2 and 3 did not met a large participation. Table 5 profiles the participating systems. The systems are roughly grouped into SVM-based pipeline (EVEX, TEES-2.1, and DlutNLP), rule-based pipeline (BioSEM and UZH), mixed pipeline (USheff and HCMUS), joint pattern matching (NCBI and NICTANLM), and joint SVM (HDS4NLP) systems. In terms of use of external resources, 5 teams (EVEX, TEES-2.1, NCBI, DlutNLP, and USheff) utilized data sets from 2011 edition, and two teams (HDS4NLP and NICTANLM) utilized independent resources, e.g., UniProt (Bairoch et al., 2005), IntAct (Kerrien et al., 2012), and CRAFT (Verspoor et al., 2012). Table 6 shows the final results of subtask 1. Overall EVEX, TEES-2.1, and BioSEM show the best performance only with marginal difference between them. 
In detail, the performance of BioSEM is significantly different from EVEX and TEES-2.1: (1) while BioSEM show the best performance with Binding and Protein modification events, EVEX and TEES-2.1 show the best performance with Regulation events which takes the largest portion of annotation in data sets; and (2) while the performance of EVEX and TEES-2.1 is balanced over recall and precision, BioSEM is biased for precision, which is a typical feature of rule-based systems. It is also notable that BioSEM has achieved a near best performance using only shallow parsing. Although it is not shown in the table, NCBI is the only system which produced Ubiquitination events, which is interpreted as a result of utilizing 2011-EPI data sets for the system development. Table 7 shows subtask 1 final results only within TIAB sections. It shows that the systems developed utilizing previous resources, e.g., 2011 data sets, and EVEX, perform better for titles and abstracts, which makes sense because those resources are title and abstract-centric. Tables 8 and 9 show evaluation results within Methods and Captions section groups, respectively. All the systems show their worst performance in the two section groups. Especially the drop of performance with regulation events is huge. Note the two section groups also show significantly different event distribution compared to other section groups (see section 3). It suggests that language expression in the two section groups may be quite different from other sections, and an extensive examination is required to get a reasonable performance in the sections. Table 10 and 11 show final results of Task 2 (Event enrichment) and 3 (Negation/Speculation detection), respectively, which unfortunately did not meet a large participation. Conclusions In its third edition, the GE task is fully changed to a full text paper centric task, while the online evaluation service on the abstract-centric data sets is kept maintained. Unfortunately, the coreference annotation, which has been integrated in the event annotation in the data sets, was not exploited by the participants, during the official shared task period. An analysis shows that the performance of systems significantly drops in the Methods and Captions sections, suggesting for an extensive examination in the sections. As usual, after the official shared task period, the GE task is maintaining an online evaluation that can be freely accessed by anyone but with a time limitation; once in 24 hours per a person. With a few new features that are introduced in 2013 editions but are not fully exploited by the participants, the organizers solicit participants to continuously explore the task using the online evaluation. The organizers are also planning to provide more resources to the participants, based on the understanding that interactive communication between organizers and participants is important for progress of the participating systems and also the task itself.
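The evaluation figures discussed above (recall, precision, and F-score per event type, plus their balance across systems) all derive from counts of matched and unmatched events. The minimal sketch below shows that computation together with a micro-average; the per-type counts are invented and do not correspond to any system in the result tables.

```python
# Minimal sketch of recall/precision/F-score from matched event counts.
# The per-type counts below are invented, not taken from the task's result tables.
def prf(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall    = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

per_type = {            # event type: (true positives, false positives, false negatives)
    "Gene_expression": (180, 40, 60),
    "Binding":         (60, 30, 80),
    "Regulation":      (90, 70, 150),
}
tp = fp = fn = 0
for etype, (t, p, n) in per_type.items():
    P, R, F = prf(t, p, n)
    tp, fp, fn = tp + t, fp + p, fn + n
    print(f"{etype:>16s}: P={P:.2f} R={R:.2f} F1={F:.2f}")
P, R, F = prf(tp, fp, fn)                 # micro-average over all event types
print(f"{'micro-average':>16s}: P={P:.2f} R={R:.2f} F1={F:.2f}")
```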
Pattern formation in inclined layer convection We report experiments on thermally driven convection in an inclined layer of large aspect ratio in a fluid of Prandtl number $\sigma \approx 1$. We observed a number of new nonlinear, mostly spatio-temporally chaotic, states. At small angles of inclination we found longitudinal rolls, subharmonic oscillations, Busse oscillations, undulation chaos, and crawling rolls. At larger angles, in the vicinity of the transition from buoyancy- to shear-driven instability, we observed drifting transverse rolls, localized bursts, and drifting bimodals. For angles past vertical, when heated from above, we found drifting transverse rolls and switching diamond panes. of experiments. The two convection cells had a size Γ 1 ≃ (21 × 42)d 2 and Γ 2 ≃ (14 × 48)d 2 . For all data, the Prandtl number was σ ≡ ν/κ = 1.07 as determined from [14], with the kinematic viscosity ν and thermal diffusivity κ. The vertical thermal diffusion time was τ v ≡ d 2 /κ = 3.0 s. Inclines from 0 • (horizontal) to 120 • (30 • past vertical) were possible, with an accuracy of ±0.02 • . Following [2] we calculated the Boussinesq number P(θ) for the corresponding horizontal layer to estimate non-Boussinesq effects. At ∆T c (θ) for θ < 70 • we found P(θ) < 1.0, putting the flow into the Boussinesq regime. For larger angles P increased linearly to 3.0 for the largest temperature differences investigated. We observed the same convective patterns in both convection cells. Onset of convection: In ILC, the forward bifurcation to LR is predicted to occur at the critical Rayleigh number R c (θ) = R t c (0 • )/cos θ where R t c (0 • ) = 1708 = αd 3 g∆T c /κν (α is the thermal expansion coefficient and ∆T c is the critical temperature difference). The threshold for the forward bifurcation to shear driven TR at large inclination angle is more complicated, and can only be determined numerically [12,13,15]. We determined ∆T c for convection by quasi-statically increasing ∆T in steps of 1 mK every 20 minutes past the point where convection was observable and then decreasing the temperature difference similarly. For all angles we observed forward bifurcations. Figure 2 shows the measured R c (θ), as well as the theoretically predicted onsets for both the buoyancy-driven (longitudinal) and the shear-driven (transverse) instabilities [15]. We found agreement with theory for the onset of LR: the experimentally observed value was R c (θ) = R e c (0 • )/cos θ with R e c (0 • ) = 1687 ± 24. We did not, however, observe the theoretically predicted stationary TR, but instead drifting TR (DTR) at a slightly larger critical Rayleigh number. The drift down the incline may be attributed to the broken symmetry across the layer which is caused by the temperature dependence of the fluid parameters (non-Boussinesq effects). Very interesting is the vicinity of the theoretically predicted codimension two point at θ c = 77.76 • [15], where LR and TR have the same onset value. Experimentally, we found a forward bifurcation to DTR above θ c = (77.5 ± 0.05) • , and in the range 77.5 • ≤ θ ≤ 84 • DTR lost stability to drifting bimodals (DB) above ǫ ≈ 0.001. As shown in Fig. 3, DB consist of a superposition of LR and DTR. Here ǫ ≡ (∆T − ∆T c (θ))/∆T c (θ) is the reduced control parameter. Theoretically, Fujimura and Kelly [11] predicted a forward bifurcation to transverse rolls, which lose stability to bimodals at ǫ ≈ 0.001 in a narrow angular region. 
We find good agreement with these predictions, but with the difference that the experimentally observed patterns are drifting. Nonlinear states: Figure 4 shows the measured phase boundaries for the ten observed nonlinear convective states. At low angles (θ < 13 • ), LR are stable up to ǫ ≃ 1 above which the novel state of subharmonic oscillations (SO) sets in. These oscillations are characterized by a pearlnecklace-like pattern of bright (cold) spots that travel along a standing wave pattern of wavy rolls. As shown in Fig. 5, these oscillations appear in patches whose location changes in time. Typical frequencies of the oscillations were measured to be 1 to 3 cycles per τ v . A recent theoretical analysis has shown agreement with this value [18]. With further increase in ǫ, localized patches of traveling oscillations burst intermittently. Within O(τ v ) the amplitude of the rolls' waviness increases, the pattern tears transverse to the rolls as shown in the upper left corner of Fig. 5, and fades away leaving an almost parallel roll state. For θ ≃ 10 • and ǫ 4, we observed patches of the well-known Busse oscillations (BO) coexisting with patches of the SO. As shown by the dotted line in Fig. 4, our data for the onset of the BO agrees well with the theoretical prediction calculated for σ = 0.7 [12]. It is surprising, however, that both oscillations (SO and BO) coexist as localized patches in the same cell. At intermediate angles (25 • < θ < 70 • ), where the initial instability is to LR we found with increasing ǫ that LR were unstable to undulations. Although the experimentally determined value for the instability ǫ ≈ 0.01 agrees well with the theoretical prediction (see Fig. 4) [12,13,15], we did not observe a stationary pattern of undulations, but a defect-turbulent state of undulation chaos (UC). A snapshot of UC is shown in Fig. 6a. At ǫ 0.11, the UC begins to "twitch" in the direction transverse to the rolls on time scales O(τ v ). With increasing ǫ, the amplitude of the twitching increases and the rolls eventually tear, with the ends "crawling" in the direction transverse to the original rolls. A snapshot of crawling rolls (CR) is shown in Fig. 6b. In the vicinity of the codimension-two point, at θ c , we observed drifting bimodals quite close to onset. As shown in Fig. 4, for small angles the existence region of the pure DB is limited by localized transverse bursts (TB), while for large angles by DTR. A snapshot of transverse bursts and the evolution of a single burst is shown in Fig. 7. In this region of phase space the LR occur in patches that grow and decay intermittently while TB nucleate in high amplitude LR-regions. As shown in the time series in Fig. 7, TB grow over the period of a few τ v and then decay rapidly. Above ǫ ≈ 0.8 the DB are unstable to localized longitudinal bursts (LB) as shown in Fig. 8a. As shown in Fig. 8b-j, a few longitudinal rolls grow locally to large amplitude and then quickly fade. With both types of bursts, the bursts increase both in density and frequency when ǫ is increased, eventually developing into a turbulent state at ǫ 1. Past 90 • , we continued to observe shear driven convection patterns. DTR are the primary instability; however, they are unstable to switching diamond panes (SDP) at ǫ ≈ 0.07. The state is characterized by spatio-temporally chaotic switching on time-scales of O(τ v ) from +45 • to −45 • of large amplitude regions of DTR, as seen in Fig. 9a. 
At ǫ ≳ 0.1, SDP are unstable to LB, which in this region of phase space are denser but travel less distance than in TR, as shown in Fig. 9b.

Conclusion: Inclined layer convection in the weakly nonlinear regime displays a rich phase diagram, with ten different states accessible over the range of parameters investigated. The phase space naturally divides into several regions of characteristic behavior which have so far been characterized semi-quantitatively. All states but longitudinal and transverse rolls are spatio-temporally chaotic. Most instabilities occurred very close to onset, and further theoretical description should be possible. Especially interesting is the bursting behavior, which may be related to turbulent bursts in other shear flows [19]. We thank F. H. Busse and W. Pesch for important discussions on the stability curves and theoretical descriptions of various states. E.B. acknowledges the kind hospitality of Prof. H. Levine at the University of California at San Diego, where part of this manuscript was prepared. We gratefully acknowledge support from the NSF under grant DMR-9705410.

Fig. 4 caption (partial): ... [12], the dashed line is the predicted onset of undulations [15], and the solid lines are guides to the eye. Open circles (UC) were measured via defect density [17], open diamonds (SDP) were measured via correlation length [17], and the remainder were measured visually. The inset shows a magnification of the codimension-two region.
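As a numerical footnote to the onset relation quoted earlier in this paper, $R_c(\theta) = R^t_c(0°)/\cos\theta$ with $R^t_c(0°) = 1708 = \alpha g d^3 \Delta T_c/\kappa\nu$, the sketch below tabulates the predicted onset of longitudinal rolls for a few inclination angles and forms the reduced control parameter $\epsilon = (\Delta T - \Delta T_c)/\Delta T_c$. The fluid-property numbers are rough placeholders, not the experiment's measured parameters.

```python
# Illustrative sketch: longitudinal-roll onset R_c(theta) = R_c(0)/cos(theta) and the
# reduced control parameter eps = (dT - dT_c)/dT_c. Fluid properties are placeholders.
import math

R_C0 = 1708.0                       # critical Rayleigh number of the horizontal layer

def rayleigh(dT, d=0.03, alpha=2e-3, g=9.81, kappa=1e-5, nu=1e-5):
    """R = alpha * g * d**3 * dT / (kappa * nu); all quantities in SI units."""
    return alpha * g * d**3 * dT / (kappa * nu)

def critical_dT(theta_deg, **fluid):
    """Temperature difference at onset of longitudinal rolls for inclination theta."""
    R_c = R_C0 / math.cos(math.radians(theta_deg))
    return R_c / rayleigh(1.0, **fluid)          # invert the linear R(dT) relation

for theta in (0, 30, 60, 75):
    print(f"theta = {theta:2d} deg:  R_c = {R_C0 / math.cos(math.radians(theta)):7.0f},"
          f"  dT_c ~ {critical_dT(theta):6.3f} K")

dT_applied = 1.2 * critical_dT(30)               # 20% above onset at 30 degrees
eps = (dT_applied - critical_dT(30)) / critical_dT(30)
print("eps at theta = 30 deg for this dT:", round(eps, 3))
```

The 1/cos θ divergence of R_c as the layer approaches vertical is why the shear-driven transverse-roll instability, rather than buoyancy-driven longitudinal rolls, takes over at large inclination angles.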
Preserving Consumer Autonomy through European Union Regulation of Artificial Intelligence: A Long-Term Approach Abstract Personal autonomy is at the core of liberal societies, and its preservation has been a focus of European Union (EU) consumer and data protection law. Professionals increasingly use artificial intelligence in consumer markets to shape user preferences and influence their behaviours. This paper focuses on the long-term impact of artificial intelligence on consumer autonomy by studying three specific commercial practices: (1) dark patterns in user interfaces; (2) behavioural advertising; and (3) personalisation through recommender systems. It explores whether and to what extent EU regulation addresses the risks to consumer autonomy of using artificial intelligence in markets in the long term. It finds that new EU regulation does bring novelties to protect consumer autonomy in this context but fails to sufficiently consider the long-term consequences of autonomy capture by professionals. Finally, the paper makes several proposals to integrate the long-term risks affecting consumer autonomy in EU consumer and data protection regulation. It does so through an interdisciplinary approach, drawing from legal research and findings in the study of long-term thinking, philosophy and ethics and computer science. I. Introduction The short-term thinking driving policymaking and business decisions contributes to the major crises that our societies are currently facing.A growing body of academic literature addresses the need for long-term thinking in various policy areas, 1 including the risks that artificial intelligence (AI) poses to society. 2 AI researchers sometimes disagree on the time frame to assess this technology: to focus on either its present or future risks and impacts.However, Baum suggests realigning the debate around an "intellectualist" factiondeveloping AI and assessing its risks for the sake of intellectual interestand a "societalist" factionstudying the societal impacts in both the near and long term. 3Here, the focus is to assess the near-and long-term risks associated with using AI in consumer markets, in line with the latter approach.As developed below, various time scales are thus relevant when analysing these risks: some risks can materialise soon (eg risks for today's children), while others can take decades or generations to emerge (eg risks for cultural transmission).Uncertainty in the face of rapid technological development is also relevant to this discussion.Ultimately, this exercise also relates to the sustainability of AI; that is, "the extent to which AI technology is developed in a direction that meets the needs for the present without compromising the ability of the future generations to meet their own needs". 
AI is intrinsically linked to future thinking, as the technology learns from past and present data to predict future outcomes. Deep learning, an AI technique based on artificial neural networks, allows AI systems to learn autonomously from a given dataset. Firms use AI to predict consumer preferences and behaviours and influence transactional decision-making in consumer markets. To some extent, traditional marketing (ie before the uptake of digital technologies and big data) had the same objectives. However, the unprecedented ability of AI technologies to predict consumer preferences and influence transactions, and the scale at which firms have been using AI for marketing purposes, warrant specific research and assessment of the current legal and regulatory landscape in this field.5 Indeed, in some instances, new AI-enabled commercial practices unjustifiably affect consumer autonomy, here understood as their ability to make decisions without undue commercial influence. The focus here is on three undue commercial practices: (1) the use of so-called "dark patterns" (or manipulative design) in user interfaces; (2) abuses in behavioural advertising; and (3) the lack of control over personalised services through recommender systems. Reflecting on the risks and impacts of AI on consumer autonomy in the near and long term is of paramount importance in liberal democracies,6 because its preservation is one of European Union (EU) consumer and data protection law's foundations.7

Therefore, this article first briefly conceptualises consumer autonomy (Section II). Second, it evaluates near- and long-term risks of AI-driven influences on consumers' autonomy (Section III). Third, it maps and assesses the EU regulatory instruments addressing these risks to consumer autonomy (Section IV). Fourth, it concludes with a set of proposals for integrating these concerns into EU regulations (Section V). The article takes an interdisciplinary approach, drawing from legal and regulatory considerations and findings in the study of behavioural economics, philosophy and ethics, and computer science.

II. Conceptualising consumer autonomy

The concept of autonomy is central to the EU's consumer and data protection law framework, but it appears to lack a proper definition, which makes it difficult to understand.8 In turn, the lack of a proper conceptualisation of consumer autonomy entails a systemic difficulty for policymakers to properly calibrate regulation to the realities generated by new technologies. This regulatory struggle is especially true in the AI context, which is characterised by the rapid evolution of technologies, new challenges for individual consumers and risks for society at large. As the following sections develop, these collective risks (resulting from the spillover from individual infringements into societal issues) inter alia consist in the increased economic power in the hands of a few concentrated businesses, threats to democracy and alterations to the formation of human personality. This collective dimension should properly be considered in conceptualising consumer autonomy in the AI context.
Despite its legal connotations, better understanding the concept of autonomy requires "a reference point outside the legal system",9 mainly because its meaning in private law, usually referring to the freedom of contract, is too narrow. It is thus appropriate to better conceptualise autonomy from an interdisciplinary viewpoint, as follows, with relevant references to the marketing, ethics and sustainability literature, moving from a general context to a more specific AI one. A more original conceptualisation applied to AI marketing is offered hereafter.

Classically, autonomy refers to the ability to govern oneself, free from external control or manipulation, thus implying independence.10 The marketing literature is particularly relevant to conceptualising autonomy applied to consumers. As in EU law, autonomy also appears to lack proper conceptualisation in marketing ethics.11 A literature review of this field reveals that authors define autonomy as involving control, will and desire, choice and self-reflection.12 More importantly, consumer autonomy is the ethical precondition that legitimates marketing as a social system in capitalistic societies.13 Building on existing research in marketing theory, Anker distinguishes between internal and external conditions for consumer autonomy and suggests that consumers are more likely to be autonomous when they have access to relevant information and can critically reflect on it based on their values and goals.14 At the same time, it is well established that consumer autonomy is affected by cognitive limitations15 and social contexts.16 In consumer law, these limitations have challenged the regulatory focus on information requirements, which is partly based on the belief that consumers are perfectly rational agents who do read the fine print.17

Looking at autonomy from an ethical perspective helps further flesh out the concept. Sax et al have laid down three requirements for consumer autonomy in a digital context: (1) independence, being in control of one's life by acting based on one's own "values, desires and goals"; (2) authenticity, truly identifying with one's own values, desires and goals, free from manipulation; and (3) options, allowing for effectively acting based on one's own values, desires and goals, which would otherwise remain useless.18 This definition aligns with the marketing literature review mentioned above: in order for consumer transactions to reflect their wills and desires, consumers first need to have sufficient options and true choices. The importance of choices brought by competition is an aspect of consumer autonomy that is sometimes overlooked.

Moreover, consumer autonomy has been defined by scholars studying the sustainability of AI systems, which is relevant when considering the long-term impacts of AI technologies on consumers. Bjørlo et al argue that consumer autonomy in this context requires: (1) transparency, meaning consumers must understand how AI systems make decisions and what information they are using; (2) complementarity, meaning consumers should be able to use AI systems actively to augment their autonomy, not only as passive receivers of recommendations, for instance; and (3) privacy regulation, to ensure that firms do not exploit personal data to feed their AI systems.19 The authors again point to the collective dimension of autonomy preservation through privacy, the individual protection of which has positive spillover effects on society at large.20
In an AI context, the issue of privacy is unavoidable, as AI systems affecting consumer autonomy rely on the large-scale collection and processing of personal data. The definition also offers a more specific focus on interactions between consumers and AI systems, which is relevant in this context because, by definition, AI systems do operate with varying degrees of autonomy themselves.

Building on these preliminary definitions of consumer autonomy, one can propose a more comprehensive conception that also takes long-term risks into account. The idea behind this conception is that if consumer autonomy is the ethical precondition legitimising marketing as a social system in capitalistic societies, then AI marketing must respect consumer autonomy to remain legitimate. The conception proposed here relies on four requirements for preserving consumer autonomy in the context of AI marketing: (1) choice; (2) privacy; (3) independence; and (4) reciprocity (see Figure 1). Without ordering them hierarchically (no requirement is more important than the other), each subsequent requirement is the precondition for the next. In that sense, this framework allows us to put these requirements in relation: some harms to consumer autonomy require remedies that pertain to other requirements upstream.

First, the requirement of choice implies both structural and granular elements. Structurally, choice implies the need for sufficient competition. Otherwise, limited options can structurally thwart autonomous action. Lack of competition is already a challenge today: powerful actors dominate many AI-intensive consumer markets, and national and EU competition enforcement has been taking place in many of them, including search engines, targeted advertising and app stores. More competition in consumer markets using AI would allow more choices for consumers. In a data-intensive context, more competition can be fostered through the right to data portability, for instance. At a more granular level, choice implies true options for consumers when transacting and defining the limits of that transaction, without professionals manipulating these options. In turn, both structurally and granularly, privacy could become more of a differentiating factor between competitors. In the long term, more competition and privacy-friendly consumer choices would encourage more trustworthy innovation.

Second, privacy has become increasingly relevant with the development of information technologies, and even more so in the AI context. In many markets lacking effective competition, consumers have fewer options and are often attracted to platforms whose business models mainly rely on the extensive collection and processing of personal data (including for behavioural advertising) in exchange for free services. Hence the need to analyse the impacts of AI on consumer autonomy by considering both consumer and data protection laws. This reality has two consequences in terms of autonomy. On the one hand, the large scale of individual privacy violations has significant consequences for society. For instance, misusing large amounts of personal data has enabled interferences in the democratic political process. The spillover from individual data protection infringements into societal issues, again, refers to a more collective dimension of consumer autonomy. On the other hand, respect for consumer privacy is a precondition to consumers' independence in market transactions, free from manipulation.
Third, independence can be seen as a shield of consumer autonomy. Having more control over their personal data, consumers can better preserve their preferences and desires when consuming content, thus preserving their authenticity and avoiding manipulation. More control implies, for instance, more transparency from AI marketing actors, protection against exposure to biased content, access to the parameters of algorithmic decision-making in an intelligible way, and clear labels and information requirements. To preserve effective consumer autonomy, more transparency should be coupled with the ability to withdraw from targeting or personalised services. With the long term in mind, preserving consumer independence implies clear rules and prohibitions around the use of generative AI in marketing. In turn, consumer independence is a prerequisite for more reciprocity between contracting parties when AI systems are involved.

Finally, reciprocity can be seen as the sword of consumer autonomy. More independence upstream means a greater ability for consumers to play on a more level playing field with professionals using AI marketing downstream. More reciprocity implies the possibility for consumers who do choose targeting and personalisation to engage in a more complementary, and active, way with AI services (eg with the possibility of controlling specific parameters of the advertisements shown to them, the recommendations they receive or the chatbots they interact with). Giving consumers more control over the adequacy of the AI-recommended content would indeed be autonomy-enhancing.

Analysing consumer autonomy through this four-layer prism also allows us to study the dynamic at play. The argument here is that the current situation in consumer markets allows "autonomy capture" by professionals using AI systems at the expense of consumers. This autonomy capture first occurs at the more structural level of market competition and individual choices about transactions. Then, it progressively affects the subsequent requirements of privacy and independence, finally reaching the reciprocity in the relationship between individual consumers and businesses. In this setting, each requirement is a prerequisite for the next because they form a progression that safeguards and enables consumer autonomy: choice establishes the foundation for autonomy, privacy ensures the protection of personal information and independence, and independence empowers consumers to engage reciprocally with AI systems. Together, these requirements create a framework that allows consumers to exercise autonomy in their interactions with AI marketing. Understanding the dynamic of autonomy capture at play helps us analyse the applicable regulatory framework and find new remedies against AI-generated harms to consumer autonomy. The following analysis is conducted against the conceptualisation of consumer autonomy proposed here.

III. Long-term risks to consumer autonomy posed by artificial intelligence

Undoubtedly, consumers can benefit from AI systems and personalisation, as personalisation allows them to make sense of the vast amounts of information and content available. Therefore, AI systems can improve consumer autonomy by making more relevant information available, allowing more efficient and accessible decision-making based on more personalised options. If these systems were transparent, complementary and allowed users to control parameters for recommendations, one could argue that they would enhance consumer autonomy.
At the same time, philosophers21 and lawyers22 studying the impacts of AI on society and markets share concerns that, currently, firms using the technology negatively impact consumer autonomy. These concerns mainly relate to privacy issues, such as untransparent or misleading personal data collection and processing. They also reflect on the extent to which algorithms influence consumers' preferences and choices (eg through untransparent advertising and recommendation parameters, users lacking control over these parameters, or by making it difficult to withdraw from services). Thus, the issue is not to regulate the technology abstractly but to regulate specific commercial practices.

Addressing these issues today is necessary to mitigate possible negative consequences in the near and long term.23 While these concerns might not qualify as "existential" in the sense of extinguishing humankind, they could significantly alter human nature in terms of free will, personhood, intimacy and interpersonal relationships.24 Regulators should thus prioritise the protection of consumer autonomy for current and future generations.

Consumers appearing indifferent to protecting their privacy or autonomy is no reason for regulators not to intervene; to the contrary. Indeed, consumers usually face the "privacy paradox": although they constantly state that they value privacy, they often do not act to protect it and freely provide their data.25 Worse: consumers appear to suffer from a cognitive bias known as the "non-belief in the law of large numbers", whereby, due to information overload about the implications of their decisions regarding privacy, they actually undervalue it.26 This apparent lack of concern usually reflects a lack of understanding about the consequences of their technology uses.27 At the same time, scholars repeatedly demonstrate the case for protecting both privacy28 and autonomy,29 adding to the arguments for more regulatory intervention, not less.

The following subsections discuss three commercial practices relying partially or entirely on AI that particularly affect consumer autonomy and are linked together. The starting point of our analysis relates to design choices in user interfaces because they are doorways to more privacy- and autonomy-intrusive practices (Section III.1). This is the case for digital targeted advertising, which AI systems heavily automate (Section III.2), as well as for personalised recommendations (Section III.3).

Design choices in user interfaces

Designing user interfaces for technology is not neutral and impacts user engagement. Although design choices are not explicitly related to AI, they are relevant to consider when analysing the effects of AI on consumer autonomy. Indeed, some interfaces are intentionally designed to deceive or manipulate consumers when making decisions such as privacy choices (eg with deceiving cookie banners). Such design choices are more commonly known as "dark patterns".30 The focus here will be on dark patterns affecting privacy choices. Indeed, in consumer transactions, their use upstream allows firms to obtain more personal data to feed their AI systems, enabling potentially more effective targeted advertising and personalised recommendations downstream (see the following sections).

The use of dark patterns is widespread, including on popular websites. In 2019, researchers developed automated techniques that identified dark patterns on about 11,000 shopping websites worldwide.31
The study defined dark patterns as "user interface design choices that benefit an online service by coercing, steering, or deceiving users into making unintended and potentially harmful decisions"; the definition is thus not limited to privacy choices. The authors found that around 11% of these websites contained dark patterns and that dark patterns were more likely to appear on popular websites.

Dark patterns affecting privacy choices are illegal, among other things, because they do not comply with the General Data Protection Regulation (GDPR)'s requirements for valid consent, which must be a "freely given, specific, informed and unambiguous indication of the data subject's wishes".32 Returning to our short conceptualisation of autonomy (see above), the GDPR's requirements for consent imply that consumers, as data subjects, should have a real choice about accepting being tracked online. Here, transparency and consent come together, because data subjects' consent is only valid if firms duly and intelligibly inform them of their data collection and the purposes of their data processing, enabling them to make that choice. In other words, bypassing transparency requirements directly affects consumer autonomy (in terms of choice, privacy and independence) and makes consent invalid under the GDPR. In 2020, an empirical study showed that so-called "cookie banners" appearing on websites and seeking consumer consent to place cookies on their devices were mostly not compliant with the GDPR: just 11.8% of websites using the five top consent management platforms met minimal GDPR requirements for valid consent.33

The French data protection authority referred to this empirical study in its decision to fine Facebook €60 million for the use of dark patterns, namely not enabling users to reject cookies as easily as accepting them.34 Discouraging users from declining cookies while encouraging them to accept being tracked on the first page undermined their freedom of consent, as many users would not accept cookies if offered a genuine choice. The Commission Nationale de l'Informatique et des Libertés (CNIL) also fined Google €150 million for similar practices.35 Applying the French law transposing the ePrivacy Directive in light of the heightened consent requirements under Article 4(11) and Recital 42 GDPR, the authority held that the method employed by Google and YouTube for users to manifest their choice over the placing of cookies was illegally biased in favour of consent.36 Again, the authority referred to several studies showing that organisations implementing a "refuse all" button on the first-level consent interface had seen a decrease in the consent rate to accept cookies.37 The CNIL more recently fined Microsoft €60 million for similar practices within its Bing search engine, relying on the same studies.38

To conclude, the design of digital technologies has important implications for long-term thinking, especially for its impact on consumerism. Philosopher Roman Krznaric argues that the design of digital technologies plays a significant role in consumer autonomy and that short-term design elements, such as the "Buy Now" button, contribute to a culture of instant gratification that can undermine the ability to make autonomous decisions in the long term.39 Krznaric asserts that technology companies often prioritise immersing users in the digital present, distracting them from pursuing long-term goals.40
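Automated audits of the kind cited above rest on programmatically spotting manipulative interface elements. The following minimal sketch is purely illustrative and is not the method of any study cited here: the pattern families, phrase lists and the function name screen_interface_text are assumptions chosen for the example. It flags a few common dark-pattern phrasings (confirmshaming, false urgency, forced continuity) in interface text:

```python
import re

# Hypothetical phrase patterns for a few common dark-pattern families.
# Illustrative only; not drawn from any cited study.
DARK_PATTERN_HINTS = {
    "confirmshaming": [r"no thanks,? i (don'?t|do not) (want|like)", r"i prefer to pay full price"],
    "false_urgency": [r"only \d+ left", r"offer ends in \d+ (minutes?|seconds?)", r"hurry"],
    "forced_continuity": [r"free trial.*(auto|automatically).*(renew|charge)"],
}

def screen_interface_text(text: str) -> list[str]:
    """Return the dark-pattern families whose hint phrases appear in the text."""
    lowered = text.lower()
    hits = []
    for family, patterns in DARK_PATTERN_HINTS.items():
        if any(re.search(p, lowered) for p in patterns):
            hits.append(family)
    return hits

if __name__ == "__main__":
    banner = "Hurry! Only 3 left in stock. No thanks, I don't want great deals."
    print(screen_interface_text(banner))  # ['confirmshaming', 'false_urgency']
```

Audits at the scale of thousands of websites typically combine crawling, interface segmentation and learned classifiers rather than fixed phrase lists; the sketch only shows why this kind of detection lends itself to automation.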
In a way, the French data protection authority recognised Krznaric's point by finding that Google was discouraging users from rejecting cookies, taking advantage of the fact that Internet browsing is "fast and fluid".41 The challenge for policymakers is thus to foster long-term thinking in regulation and the industry to preserve consumer autonomy, as will be discussed in Section IV.

Behavioural advertising

Firms design dark patterns to obtain personal consumer data, usually for advertising purposes. For example, targeting cookies towards consumers' devices allows digital advertisers to track Internet browsing across websites. By crossing this information with personal and behavioural data from social media and data purchased from brokers, digital advertisers use AI systems to analyse consumer preferences. Advertisers can then target them with personalised ads by relying on complex networks of actors42 and having first-hand knowledge of consumer behaviours. For consumers, the benefits of personalised advertisements include lower transaction costs, as the recommended products and services supposedly better match their preferences. However, behavioural advertising raises issues regarding consumer autonomy, especially in terms of privacy, independence and reciprocity, referring to the requirements proposed in Section II.

First, in terms of privacy, digital targeted advertising is highly problematic within the European data protection framework. Being opaque about data processing in privacy policies itself constitutes a GDPR infringement. Veale and Zuiderveen Borgesius argue that real-time bidding (RTB) for ads (behavioural advertising is based on such auctions) is mainly incompatible with EU data protection rules because of the GDPR's transparency and consent requirements.43 The Belgian data protection authority cited the authors' article in a 2022 decision sanctioning IAB Europe for several GDPR infringements.44 The case concerned IAB Europe's Transparency and Consent Framework, a widespread mechanism facilitating the management of users' preferences for personalised digital advertising. The authority found that IAB Europe lacked a valid legal basis for processing personal information and was not transparent about the scope of the processing, preventing users from controlling their data. Overall, this decision calls into question the legality of the entire RTB ecosystem considering the GDPR.45 The Norwegian Consumer Council and other consumer protection groups notably called for a ban on "surveillance-based advertising", arguing that pervasive commercial surveillance online to show personalised ads to consumers cannot justify constant violations of fundamental rights.46 Again, behavioural advertising in its current form thus appears to undermine core features of consumer autonomy: a lack of privacy or control over their personal data impedes consumers from having more independence in markets and entails risks for society at large (think about targeted political advertising and possible manipulation of the political process).

Second, in terms of independence, issues of transparency and consent with dark patterns upstream also impact consumer autonomy downstream when personal data (for example, data collected with cookies) are used to refine ad targeting. In addition, as more products connect to the Internet, the ability of advertisers to target consumers based on their offline behaviour increases, raising further privacy concerns affecting consumer autonomy, especially in terms of transparency.
For instance, interactions with voice assistants through connected speakers are highly problematic. An audit of Amazon Echo shows that Amazon and third parties using the smart speaker platform track these interactions, allowing at least Amazon to infer user interests.47 The study found that Amazon and third parties use this information for personalised, targeted advertising and that advertising parties share these user data significantly. Most importantly, neither Amazon nor third parties are transparent enough about these practices in their privacy policies, with 70% of third parties not even mentioning Amazon or its voice assistant.48 The study also found that third parties sync their cookies with Amazon and other third parties, demonstrating the importance of dark patterns as a gateway to more autonomy capture, as defined in Section II.49

Third, in terms of reciprocity, Internet companies dominating digital advertising markets, thanks to their access to scores of behavioural data, have demonstrated their ability to influence consumer behaviour. For instance, Facebook notably showed in a study that it could manipulate its users' emotions without their awareness50 (a troubling ability for a company building virtual reality devices and platforms, thanks to which it collects more emotional data for its advertising business).51 Another study, in Vienna, suggests that Google Maps influences users' choices of the best transportation type, nudging them into driving their cars.52 Thus, the personal data not transparently obtained upstream tangibly impact downstream consumer autonomy. However, the reciprocity requirement for consumer autonomy would require more control and complementarity in the relationship between professionals and consumers, which would limit undue commercial influence.

In terms of long-term thinking, influencer marketing poses specific risks for children. What is at stake is the construction of the personality of the individuals comprising the next generation of adults. Because they are still developing their personalities, children are more negatively affected than adults by commercial influence.53 Empirical research shows that children are particularly vulnerable to online tracking for advertising purposes: mobile apps for children are among the worst in terms of third-party tracking.54 This tracking allows online platforms to build better profiles of these young users and target them more effectively for marketing purposes. A study requested by the European Parliament shows that children are particularly vulnerable to influencer marketing.55 Children strongly identifying with influencers tend to imitate their behaviours.56 The consequences for autonomy are significant: influencer marketing affects brand attachment and future purchases, potentially develops materialistic behaviours, confronts children with inappropriate content and reinforces role perceptions and expectations regarding physical appearance.57 All of these factors ultimately impact the development of children's traits and attitudes, with possible replications in their adult lives. While international law establishes the right to develop one's personality, arguably no other generation has been exposed to this level of targeted and influential marketing. Hence, policymakers and the industry need to give more consideration to the potential harm to the following generations.
Overall, advertisers' extensive collection and processing of behavioural and emotional data to better target users and personalise their services imply significant risks to consumer autonomy, especially in terms of privacy, independence and reciprocity. However, more reciprocity and control by consumers first implies more privacy and independence. The pervasiveness and complexity of digital advertising networks also highlight critical power imbalances and information asymmetries between traders and consumers that are not comparable with pre-AI marketing practices.58

Personalisation and recommender systems

Consumers increasingly use recommender systems to access personalised content. A 2019 study showed that 41% of music streamed on platforms is recommended by algorithms.59 Around 70% of videos watched on YouTube and 80% on Netflix are recommended by algorithms.60 Shopping websites like Amazon extensively use product recommendations too. The current social media trend is to show users more AI-recommended content instead of friends' content. The benefits of personalised recommendations for consumers include displaying potentially relevant content or products, thus enabling them to make sense of vast amounts of information and reducing transaction costs. Nevertheless, recommender systems also present risks for consumer autonomy, especially in terms of privacy, independence and reciprocity, with both individual and collective effects.

First, these recommendations entail privacy risks. More precise recommendations require more personal data, including behavioural data and past consumption habits. As explained above, privacy risks arise when personal data are collected and processed illegally upstream. As with targeted advertising, this untransparent data collection can negatively affect consumer autonomy downstream, as the recommendations shown to users are based on illegally obtained personal data.

Second, the requirement of independence, implying more transparency, is undermined because data protection regulation, which mainly establishes individual rights, does not adequately cover the societal risks provoked by personalised services. Societal risks can inter alia relate to political polarisation due to "echo chambers",61 the perpetuation of unsustainable consumption habits62 and the erosion of privacy. The issue is that online targeting makes individual situations mainly opaque to other consumers, civil society organisations, regulators or researchers.63 Thus, establishing more transparency requirements and individual rights (the right of access, the right to data portability and the right to be forgotten, to name a few) does not help us to deal with more collective problems. Some have criticised the current personal data regulation framework, which mainly focuses on the individual,64 whereas personalisation also has significant consequences for societal life.65

Third, in terms of reciprocity, consumers lack control over these personalised recommendations and over how to modify or influence them. A 2022 report showed how users on YouTube (the second most visited website worldwide) feel that they do not have control over its recommendations, despite the platform having implemented feedback tools.66
Empirical data from the largest experimental audit of the platform confirm this feeling: user controls are mostly ineffective at preventing undesirable recommendations. Consumers primarily use passive ways of influencing recommender systems through feedback tools (eg by liking, flagging if the content is not relevant to them or following accounts) but currently lack more active ways of influencing them.67 Such recommendations lack true bi-directionality, thus undermining complementarity as one of the characteristics of consumer autonomy discussed above.68 More active ways to influence recommender systems could include more control over the specific parameters for those recommendations.

In terms of long-term thinking, recommender systems also impact culture as such, which ultimately amounts to the achievements of human intellect transmitted from generation to generation. As human culture is increasingly transmitted in the digital world, including through recommender systems, what place do algorithms take in this transmission? Do recommender systems have any bias in cultural transmission? Cultural evolution researchers are increasingly examining these questions,69 as they might have long-term impacts on cultural transmission.

IV. Mapping and assessment of EU regulatory instruments

After discussing the risks posed by specific commercial practices, the question is whether and to what extent current EU regulation considers the long-term consequences of using AI systems on consumer autonomy. This section maps EU regulatory instruments addressing consumer autonomy, considering their long-term implications for the three commercial practices discussed above. Figure 2 shows the principal fundamental rights involved and the new regulatory landscape applied to the three practices highlighted above: (1) design choices in user interfaces; (2) behavioural advertising; and (3) personalisation and recommender systems.

[Footnote 63: Authors have pointed out that online targeting entails risks of epistemic fragmentation, defined as the "lack of shared context in relation to a given practice of content personalisation" (S Milano, B Mittelstadt, S Wachter and C Russell, "Epistemic Fragmentation Poses a Threat to the Governance of Online Targeting" (2021) 3 Nature Machine Intelligence 466). In addition, a report on YouTube's recommender system explains that personalisation was difficult to study without accessing real user data.]

Autonomy

In the EU, protecting consumer autonomy has never been the main objective but rather a means to instrumentalise private law for market integration.70 The main techniques of EU private law to that end have been information duties,71 a light regulatory intervention and a small price to pay for traders to access one of the largest consumer markets. Today, digital economy regulation continues to require important information disclosures to achieve more transparency, but the EU legislator also establishes more prohibitions and specific obligations for traders to protect consumers from the risks highlighted above. Figure 3 offers an overview of EU regulations aiming at protecting consumers against these risks.
As Figure 3 shows, the relevant instruments proceed from consumer, data protection and competition law. Many of these instruments already apply. That is the case of the GDPR,72 the Unfair Commercial Practices Directive (UCPD)73 and the Consumer Rights Directive (CRD)74 (the latter two were slightly revised in 2019). However, instruments recently adopted or in the legislative pipeline establish new rules affecting all three commercial practices. Most provisions of the Digital Markets Act (DMA)75 have applied since May 2023. Similarly, most provisions of the Digital Services Act (DSA)76 will apply from February 2024. The Artificial Intelligence Act (AI Act) entered trilogue negotiations in 2023, so there is still time before its application.

One regulatory instrument does apply horizontally to all three practices at hand: the UCPD. Although the UCPD prohibits misleading and aggressive commercial practices, the main issue highlighted in the literature is the standard against which to assess a particular practice: that of the average consumer and their understanding under the case law of the Court of Justice of the European Union (CJEU). Indeed, authors have called into question the use of the average consumer standard, arguing that it is not suitable to address the extent of the power imbalances and asymmetry of information in the digital realm.77 This is because the definition of the vulnerable consumer, to whom the Directive attaches more robust protection compared to the average consumer, used to take into account only specific categories of consumers, solely based on their internal characteristics, without considering exposure to external factors.78 However, the Commission adopted new guidelines on the interpretation of the UCPD in late 2021, which regard the concept of vulnerability in data-driven practices and dark patterns as "dynamic and situational". Although this guidance is welcome, as it seems better adapted to the digital realm, its concrete application in cases is still to be analysed by the CJEU.

a. Design choices in user interfaces

If most of the literature and enforcement action focuses on applying data protection rules to tackle dark patterns, scholars have also shown how consumer protection instruments, especially the UCPD, can help counter them.79 In 2021, the Commission adopted new guidelines on the application of the UCPD that consider "data-driven practices and dark patterns",80 which are expected to better assess the vulnerability of consumers depending on the situation at hand. New instruments will soon apply. First, the DSA prohibits online platforms from designing interfaces that deceive, manipulate or materially distort or impair free and informed decisions. While this prohibition seems helpful at first, it has two significant limitations. First, the prohibition only concerns online platforms, not all intermediaries, contrary to what was initially intended, considerably limiting the scope of this rule, especially given the widespread use of dark patterns online, and not only by prominent actors. Second, the prohibition does not apply to practices covered by the UCPD or the GDPR.81 Although the provision requires further interpretation, this second limitation seems to annihilate the prohibition, as most dark patterns are illegal under either instrument.82
Second, the DMA and the Data Act83 include similar prohibitions against manipulative designs. Under the DMA, gatekeepers of digital platforms cannot use dark patterns to undermine the Regulation's prohibitions. Under the proposal for a Data Act (designed to foster data sharing in the advent of the Internet of Things), third parties to whom data are made available cannot use manipulative designs either.

Overall, although the current data and consumer protection law regime already offers ways of tackling the use of dark patterns, these new prohibitions might be helpful to clarify the obligations of each actor under these new regulatory regimes. They also introduce new pathways for private and public enforcement (with the Commission having a central role in enforcing the DSA, for instance), multiplying the possibilities of tackling dark patterns. These new regimes are welcome to preserve consumer autonomy with regard to the choice requirement proposed in Section II, but their concrete application still needs to be studied.

b. Behavioural advertising

The existing regulatory framework already contains rules limiting behavioural advertising.84 Data protection authorities progressively enforce the GDPR for issues related to digital advertising,85 which might prompt adtech actors to review their practices in terms of consent and transparency, hopefully providing more privacy and independence to consumers. As developed in Section II, privacy protection upstream is indeed a requirement for more consumer independence and reciprocity downstream. Notably, the European Data Protection Board instructed the Irish data protection authority to fine Meta €390 million for using an invalid legal basis for processing its users' data for personalised advertising purposes.86 Facebook and Instagram had relied on the performance of a contract (their terms of service) instead of relying on consent to process such data. The European Data Protection Board found that social media could not rely on the "contract legal basis" for behavioural advertising purposes and that users should be able to opt out of personalisation.

New instruments impose, or will impose, additional obligations. The DSA now establishes more stringent and explicit rules applicable to targeted advertising, partly incorporating concerns from academia and consumer organisations. For instance, the DSA limits targeted advertising's most evident and severe issues, such as targeting based on sensitive personal data or targeting children using any kind of personal data. The AI Act proposal prohibits using "subliminal techniques" and "manipulative AI systems" that materially distort behaviour or (likely) cause physical or psychological harm. However, it is as yet unclear whether this wording would encompass personalised advertising.87

Although the UCPD already applies to influencer marketing,88 the EU legislator incorporated new obligations in this field. Indeed, scholars have pointed out that one of the Act's blind spots relates to new models of advertising based on content monetisation and "human ads".89
After the European Parliament incorporated significant amendments, the DSA now requires providers of online platforms to ensure that influencers mark their posts as "ads" when promoting products or services and provide information about them. The added value of the DSA compared to the UCPD lies in the former imposing this obligation on the online platforms themselves (while the UCPD is addressed to influencers directly), thus increasing the possibilities of compliance mechanisms and enforcement.

Interestingly, the DSA requires very large online platforms to present a public ad repository containing information on the ad's content and origin (including who paid for it), the period during which it was displayed, the parameters for presenting it to specific groups and the total number of recipients reached and where (broken down by Member State). If this provision might have limited use for individual consumers, it will no doubt be of immense use for regulators, consumer protection organisations and researchers. It also has the potential to mitigate the effects of the epistemic fragmentation produced by behavioural advertising and personalisation, as described above.

c. Personalisation and recommender systems

Much had been expected from the GDPR to better protect consumers willing to escape personalisation, but the initial years of its enforcement have demonstrated its limits.90 Paradoxically, although the GDPR is based on the principle of informational self-determination, thus presupposing consumer autonomy, individual rights might not always be appropriate to regulate such complex situations of personalisation.91 Privacy rights in the GDPR only consider the data provided, not the data generated or inferred based on that data.92 In other words, consent is not only flawed for data obtained through dark patterns, but it is, bluntly, non-existent for inferred data. Arguably, inferred data provide even greater insights than explicitly stated data (which can be purposely inaccurate), as big data and AI allow for probabilistic determinations. This realisation prompted ethicists to call for a new data protection right: a right to reasonable inferences,93 requiring data controllers to justify their inferences ex ante and giving individuals the ability to challenge them ex post.

Thus, compliance with the new rules introduced by the DSA will be closely scrutinised, all the more so as they impose obligations on the platforms themselves and allow individuals to understand better how recommendations are personalised for them and how to change them. The DSA indeed obliges online platforms to indicate the main parameters used for recommendations, including the criteria used and the reasons for their relative importance. The only problem is that online platforms must add this transparency requirement to their terms and conditions, which consumers largely ignore.94 However, this limitation is mitigated by the fact that online platforms will have to offer options to modify or influence the parameters on which they base their recommender systems through additional functionality in their settings. Depending on the implementation of such functionalities, they might contribute to increasing consumer autonomy, as consumers would be able to actively influence the recommended content depending on their evolving preferences. Hopefully, these features could better "accommodate for [consumers'] aspirational preferences".95
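What user-facing control over recommendation parameters could look like is easier to see with a concrete sketch. The toy recommender below is hypothetical (the Item record, the recommend function and the diversity_weight parameter are all assumptions for illustration, not any platform's actual system); it simply exposes one user-adjustable weight that trades predicted relevance against topical novelty:

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    topic: str        # e.g. "politics", "music", "sport"
    relevance: float  # platform's predicted match to past behaviour, in [0, 1]

def recommend(items: list[Item], seen_topics: set[str], diversity_weight: float, k: int = 3) -> list[Item]:
    """Rank items by a blend of predicted relevance and topical novelty.

    diversity_weight is the user-controlled parameter: 0.0 reproduces pure
    behavioural targeting, 1.0 ranks purely by novelty relative to topics
    the user has already seen.
    """
    def score(item: Item) -> float:
        novelty = 0.0 if item.topic in seen_topics else 1.0
        return (1 - diversity_weight) * item.relevance + diversity_weight * novelty

    return sorted(items, key=score, reverse=True)[:k]

if __name__ == "__main__":
    catalogue = [
        Item("Late-night talk show clip", "politics", 0.9),
        Item("New jazz release", "music", 0.4),
        Item("Marathon highlights", "sport", 0.5),
    ]
    # The same catalogue under two different user-chosen settings.
    print([i.title for i in recommend(catalogue, {"politics"}, diversity_weight=0.0)])
    print([i.title for i in recommend(catalogue, {"politics"}, diversity_weight=0.8)])
```

The point is not the scoring formula but the interface: a parameter of this kind, surfaced in settings rather than buried in terms and conditions, is the sort of additional functionality the DSA contemplates.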
Additionally, very large online platforms must include an option not to be profiled for each recommender system. This last rule is welcome as a way to increase consumer autonomy, since Article 22 of the GDPR, which establishes the right not to be subject to decisions solely based on the automated processing of personal data, including profiling, is more limited in scope ratione materiae (not ratione personae).

Long-term thinking

Current policymaking related to consumer autonomy still appears to lack long-term thinking. Although the European Commission recognises that current data policies will affect the following decades, the European Strategy for Data adopted in 2020 outlines policies for the following five years only.96 Although civil society organisations had conveyed to the Commission the need to establish future-proof digital principles and prevent "negative effects of long-term exposure to digital technologies",97 its proposal for […].98

[Footnotes: 90 Finck, supra, note 64. 91 ibid. 92 Zuboff, supra, note 6, 480-88. 93 S Wachter and B Mittelstadt, "A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI" (2019) 2 Columbia Business Law Review 1.]

While the new EU regulatory instruments identified here address critical issues related to consumer autonomy, their ability to address these long-term implications appears more limited. The question is not whether regulations are future-proof from an institutional viewpoint, but whether EU regulations account for consumer autonomy in AI-intensive markets in the long term, based on the abovementioned risks. As Figure 4 shows, three regulations contain measures that can foster long-term thinking, mainly from the industry: the GDPR, the DSA and the AI Act. The measures highlighted can be classified into two groups: risk assessment obligations and information requirements.

First, most measures relate to obligations to conduct risk assessments. The GDPR currently obliges data controllers to conduct data protection impact assessments, especially before using new technologies involving the processing of large amounts of data, where this results in high risks for the rights and freedoms of data subjects. If these risks are too high, the controller should refer its doubts to the supervisory authority. Similarly, the DSA requires very large online platforms to conduct risk assessments for their systems, including user interfaces and AI systems such as those used for targeted advertising and personalised recommendations. In particular, very large online platforms must assess systemic risks stemming from the design (including algorithmic systems), functioning and use made of their services, at least once every year and before deploying new functionalities. Under the DSA, systemic risks include illegal content and actual or foreseeable adverse effects on fundamental rights (especially human dignity, private and family life, personal data, freedom of expression, discrimination, the rights of the child and consumer protection). The annual independent audit, which very large platforms must undergo under the DSA, does include a review of this risk assessment. Importantly, very large platforms need to mitigate the identified systemic risks, a general obligation whose application in practice will require further interpretation.
Nevertheless, the fact that the regulation requires Internet companies to consider foreseeable systemic risks shows the will to instil more long-term thinking in the industry. Moreover, the AI Act also imposes risk management obligations for high-risk AI systems but limits them to specific sectors such as education. Finally, one can wonder whether the regulatory sandboxes foreseen in the AI Act might be an appropriate setting to address long-term autonomy concerns, especially when developing, testing and validating AI systems.

The main issue with risk assessments is that they mainly rely on self-assessment. Risk assessments should thus be subject to external review to be meaningful. In that sense, the obligation to conduct an annual audit established by the DSA provides an improvement compared to the risk assessment requirements under the GDPR. The newly launched European Centre for Algorithmic Transparency could also assist the Commission when enforcing the new obligations under the DSA.

Second, information requirements can also be measures through which regulators could impose more long-term thinking on the industry. Indeed, the GDPR could be interpreted this way. According to a transparency requirement in the GDPR, the data controller must provide information on the consequences of processing personal data based on automated decision-making that produces legal effects or significantly affects data subjects. Should this requirement include information on the foreseeable, long-term consequences of automated data processing? To what extent should data controllers conduct this assessment? As stated in the Working Party Guidelines, "significant effects" of automated processing can be triggered by the actions of other data subjects. The Guidelines provide the example of a data subject having their credit card limit reduced due to the automated processing of the personal data of subjects in the same living area. Could we interpret this provision so that these significant effects also include an intergenerational dimension? For example, could the controller be required to provide information on the foreseeable or possible long-term consequences of its systems? If an AI system engaged in recommending personalised investments based inter alia on personal data, the controller could be required to inform consumers about the possible long-term effects of those investments, for the individual and for society. The inclusion of the intergenerational dimension in the transparency requirement would encourage the AI platform to consider sustainable and socially responsible investment options. It would also foster a more long-term thinking approach in the industry, aligning with the principles of the GDPR and safeguarding the interests of present and future generations. These questions would require further interpretation, including by the CJEU.

In summary, some measures could instil more long-term thinking in the industry, but their concrete application requires further interpretation. They could also remain a dead letter if not adequately enforced. The advantage of risk assessment obligations over information requirements is that they mitigate harms upstream, when products and services are not yet placed on the market.
V. Conclusion: perspectives for integrating consumer autonomy and long-term concerns in AI regulations

Overall, EU regulations addressing autonomy issues in the context of AI use in consumer markets do not sufficiently consider the long-term risks to human nature highlighted in Sections II and III. To ensure these risks are considered, EU policymakers must integrate long-term thinking into consumer and data protection regulations. Adopting such regulations with the long term in mind might become a constitutional requirement if intergenerational solidarity is incorporated into the Treaties, as Commission President von der Leyen suggested in her 2022 State of the Union address.99 EU regulation can promote long-term thinking in different ways, depending on whom it targets.

First, regulatory measures can target firms using AI systems. Section IV explained that the GDPR and DSA require Internet companies to conduct data protection and systemic risk assessments. Here, regulation could require Internet companies that train algorithms (eg recommender systems) to consider long-term risks, even for future generations. One could argue that the GDPR suggests just that when it mentions that personal data processing should be designed to serve "mankind" (ie humans collectively, arguably present and future).100 Recent research suggests that AI trainers adopt more prosocial behaviours when they are aware of the consequences of their actions for future generations.101 However, this study shows that this is true only when there is a risk of future algorithmic choices harming the AI trainers themselves. That limitation alone is a further argument for promoting more reflection on the impacts of AI systems on future human conditions, as, currently, regard for the long-term consequences of AI applications does not seem to be the focus of their developers.

Second, engineers can be subject to technical measures integrating such mandates into their design processes. Studies have illustrated that the GDPR's privacy protocols may be transformed into such demands via a socio-technical security modelling language.102 Addressing privacy and long-term requirements in an interdisciplinary way103 is even more necessary in this context. Indeed, regulation alone will never be effective if the engineers building AI systems do not have concrete models for implementing sometimes abstract rules in a language that they do not master. So-called "requirement engineering" can help software developers build compliant systems by design. Different techniques exist for this objective, one being modelling languages and model-driven engineering; this could also help introduce long-term thinking into the engineering process.
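To make the idea of machine-checkable compliance requirements more concrete, here is a minimal sketch in the spirit of requirement engineering. It is not one of the modelling languages referred to above; the ProcessingActivity record, the retention limits and the rules are assumptions invented for this example, and real obligations would of course need legal interpretation:

```python
from dataclasses import dataclass

@dataclass
class ProcessingActivity:
    purpose: str
    data_categories: set[str]
    retention_days: int
    targets_children: bool = False

# Illustrative limits only; not actual legal thresholds.
MAX_RETENTION_DAYS = {"behavioural": 180, "contact": 730}

def check(activity: ProcessingActivity) -> list[str]:
    """Return a list of rule violations found for one processing activity."""
    issues = []
    for category in activity.data_categories:
        limit = MAX_RETENTION_DAYS.get(category)
        if limit is not None and activity.retention_days > limit:
            issues.append(f"{category} data retained {activity.retention_days} days (limit {limit})")
    if activity.targets_children and activity.purpose == "advertising":
        issues.append("advertising must not target children")
    return issues

if __name__ == "__main__":
    activity = ProcessingActivity(
        purpose="advertising",
        data_categories={"behavioural", "contact"},
        retention_days=365,
        targets_children=True,
    )
    print(check(activity))
```

Checks of this kind can run in a build pipeline, which is one way part of the compliance assessment can be pushed upstream of deployment, before a product or service reaches consumers.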
Third, long-term thinking can also be instilled in policymakers themselves. In its Better Regulation Guidelines, the Commission indicates that its impact assessments should consider "possible long-term developments, trends and challenges (using foresight elements and scientific advice, where appropriate)", "take account of the key long-term challenges and corresponding EU policy ambitions" when evaluating a specific problem, and "compare the options with regard to their effectiveness, efficiency and coherence, compliance with the proportionality principle and how future-proof they are, given the […]".

Figure 2. Overview of the European Union (EU) regulatory landscape. [Figure content: primary law — Treaty on the Functioning of the European Union: right to data protection (Art 16); consumer protection (Arts 12, 114, 169). Charter of Fundamental Rights of the EU: right to the respect for private and family life (Art 7); right to the protection of personal data (Art 8); right to non-discrimination (Art 21); rights of the child (Art 24); guiding principle that Union policies shall ensure a high level of consumer protection (Art 38); right to an effective remedy and to a fair trial (Art 47).]

[Figure 3 content, recovered in part: GDPR — right not to be subject to a decision solely based on automated processing, including profiling, producing legal effects or significantly affecting the data subject (Art 22); obligation on the controller to provide information about the existence of automated decision-making, the logic involved, and the significance and consequences of the processing for the data subject (Arts 13 and 14); right of access (Art 15); right to object (Art 21); data protection impact assessment, especially when using new technologies, if there is a high risk to the rights and freedoms of natural persons, in particular required for systematic and extensive evaluation of personal aspects relating to natural persons based on automated processing, including profiling (Art 35). UCPD — prohibition of misleading and aggressive commercial practices (Art 8): undue influence (exploiting a position of power) significantly impairing or likely to significantly impair the average consumer's freedom of choice or conduct and thereby causing or likely to cause him/her to take a transactional decision that he/she would not have taken otherwise. AI Act — prohibition of subliminal techniques and manipulative AI systems materially distorting behaviour, causing or likely to cause physical or psychological harm (Art 5(1)); transparency requirements for AI systems: (i) interaction with an AI system; (ii) emotion recognition or biometric categorisation systems; (iii) deep fakes (Art 52).]

[Footnotes: 70 HW Micklitz and D Patterson, From the Nation State to the Market: The Evolution of EU Private Law, EUI LAW, 2012/15, 12 <http://hdl.handle.net/1814/22415> (last accessed 4 January 2023). 71 See the information requirements in the Unfair Commercial Practices Directive, the Consumer Rights Directive and the General Data Protection Regulation. 72 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), OJ L 119, 4.5.2016, pp 1-88. 73 Directive 2005/29/EC of the European Parliament and of the Council of 11 May 2005 concerning unfair business-to-consumer commercial practices in the internal market and amending Council Directive 84/450/EEC, Directives 97/7/EC, 98/27/EC and 2002/65/EC of the European Parliament and of the Council and Regulation (EC) No 2006/2004 of the European Parliament and of the Council, OJ L 149, 11.6.2005, pp 22-39. 74 Directive 2011/83/EU of the European Parliament and of the Council of 25 October 2011 on consumer rights, amending Council Directive 93/13/EEC and Directive 1999/44/EC of the European Parliament and of the Council and repealing Council Directive 85/577/EEC and Directive 97/7/EC of the European Parliament and of the Council, OJ L 304, 22.11.2011, pp 64-88. 94 Bakos et al, supra, note 17. 95 Bjørlo et al, supra, note 2, 9. 96 Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: A European Strategy for Data, COM(2020) 66 final, 19.2.2020. 97 Commission Staff Working Document, Report on the stakeholder consultation and engagement activities, accompanying the document "Establishing a European Declaration on Digital Rights and Principles for the Digital Decade", SWD(2022) 14 final, 26.1.2022, pp 24-25.]
2023-08-05T15:03:33.582Z
2023-08-03T00:00:00.000
{ "year": 2023, "sha1": "a688439b7f10790a5dacb127a4df83eb15327d40", "oa_license": "CCBY", "oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/C59490014B968AB10ECA772683D2B283/S1867299X23000582a.pdf/div-class-title-preserving-consumer-autonomy-through-european-union-regulation-of-artificial-intelligence-a-long-term-approach-div.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "8a52705f48a54c84eea39a17519cc45a2d203d03", "s2fieldsofstudy": [ "Law", "Computer Science" ], "extfieldsofstudy": [] }
245118989
pes2o/s2orc
v3-fos-license
Characterizing telemedicine use in clinical immunology and allergy in Canada before the COVID-19 pandemic Rationale There exists a geographic barrier to access CIA care for patients who live in rural communities; telemedicine may bridge this gap in care. Herein we characterized the use of telemedicine in CIA at the population level and at a single centre. Methods Before the COVID-19 pandemic, telemedicine care was provided via the Ontario Telemedicine Network (OTN) in Ontario, Canada. Descriptive data were collected from the OTN administrative database and from electronic medical records at a single academic centre during 2014 to 2019. The potential distance travelled and time saved by telemedicine visits were calculated using postal codes. Results A total of 1298 telemedicine visits was conducted over OTN, with an average of 216 visits per year. Only 11% of the allergists/immunologists used telemedicine to provide care before the COVID-19 pandemic. In the single centre that provided the majority of the telemedicine care, 66% of patients were female and the overall mean age was 46. The most common diagnosis was immunodeficiency (40%), followed by asthma (13%) and urticaria (11%). Most patients required at least one follow-up via telemedicine. The average potential two-way distance travelled per visit was 718 km and the average potential time travelled in total was 6.6 h. Conclusion Telemedicine was not widely used by allergists/immunologists in Ontario, Canada before the COVID-19 pandemic. It could offer a unique opportunity to connect patients who live in remote communities and allergists/immunologists who practice in urban centres in Canada. Independent of the current pandemic, our study further highlights the need for more physicians to adopt and continue telemedicine use as well as for healthcare agencies to support its use as a strategic priority once the pandemic is over. Introduction Synchronous telemedicine refers to the delivery of care using an interactive audio-video communication system, where physicians provide care to patients in real time [1,2]. Even before the COVID-19 pandemic, its use in Clinical Immunology and Allergy (CIA) had been increasing in the US, particularly for adverse drug reactions and immunodeficiency [3]. Although telemedicine has been available in Canada to provide care to patients who live in remote areas and lack access to allergists/immunologists, the pattern of its use has not been evaluated. As of December 2019, there were 122 registered allergists/immunologists to serve the 14.6 million residents of Ontario, the most populous province, representing 38% of Canada's population [4,5]. However, the small number of specialists, who predominantly practice in urban centres, compounded by the geographic distance separating them from patients who live in remote communities (Fig. 1), poses a significant barrier to accessing CIA care in a timely and effective manner. Telemedicine offers a unique opportunity to bridge this gap in care [6]. Herein, we aimed to characterize the use of telemedicine by allergists/immunologists in Ontario, Canada at the population and patient levels. Methods We conducted a retrospective descriptive study that included all synchronous telemedicine visits provided by allergists/immunologists from January 1st, 2014 to December 31st, 2019.
At the population level, data on the use of telemedicine in CIA were collected from an administrative database provided by the Ontario Telemedicine Network (OTN)-the non-profit organization funded by the Government of Ontario to provide the virtual care platform with synchronous audio-video call [7]. Available information included telemedicine visit date and location as well as health provider information and location. Patient level data were collected from the electronic medical records at our hospital in Toronto-an academic institution that provided the majority of telemedicine visits in Ontario. They included patient age and sex, postal code, diagnostic code, consult or follow-up, telemedicine visit date and location, and health provider information and location. Distance between patients' residences and our hospital was calculated using the postal codes and Google Maps. The potential time travelled between these locations was estimated using the average speed of highway driving at 90 km/h. The study received approval from the institutional research ethics board. Results During the six-year study period, there was a total of 1298 telemedicine visits through OTN with an average of 216 visits per year (range 127-346). Only 11% of the allergists/immunologists (n = 13) used telemedicine to provide care and more than half of the visits were provided by a single physician at our hospital labelled as site A. While the number of visits has not increased much over the years, more than 80% of the visits (n = 1066) was provided by three specialists at site A ( Fig. 2). At this site, a total of 865 telemedicine visits (327 new referrals and 538 follow-ups) were available for chart review during the same study period. In the cohort from our hospital, 66% patients were female (n = 571) and the overall average age was 46 ± 16 years old. The number of telemedicine visits remained steady with an average of 170 visits per year (range 125-213). While most patients required at least 1 follow-up via telemedicine, about 18% of patients (n = 152) did not require any follow-ups and 17% of patients (n = 145) required more than 6 followups via telemedicine during the study period. Most conditions assessed and followed via telemedicine were chronic diseases, including immunodeficiency (40%), asthma (13%) and urticaria (11%). Lastly, the average potential two-way travel distance avoided per visit was 718 ± 852 km and the average potential two-way travel time avoided was 6.6 ± 5.5 h (SD). Discussion Our study showed that the use of telemedicine to provide CIA care in Ontario, Canada was limited but remained steady over the years before the COVID-19 pandemic. However, the annual average of telemedicine visits in our centre was comparable to another academic centre in the US (170 vs. 153, respectively) [3]. Although telemedicine use by allergists/immunologists in other countries at the population level is unknown, it was not widely adopted in Ontario-the most populous province of Canada with one third of the nation's population [8], as over 95% of visits were provided by only 4 physicians as shown in our study. Further, most patients in our cohort had chronic diseases and required at least one follow-up via telemedicine. 
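The distance and time figures above reduce to simple arithmetic once a one-way road distance has been obtained for each patient from the postal codes. The sketch below is only an illustration of that calculation under the study's stated assumptions (two-way travel at an average highway speed of 90 km/h); the one-way distances in the example are hypothetical placeholders, whereas the study itself derived them from postal codes and Google Maps.

```python
# Illustrative sketch (not the authors' actual pipeline): given one-way road
# distances already looked up from postal codes, estimate the potential
# round-trip travel distance and time avoided by each telemedicine visit.

from statistics import mean, stdev

HIGHWAY_SPEED_KMH = 90.0  # average highway driving speed assumed in the study


def round_trip_savings(one_way_km: float) -> tuple[float, float]:
    """Return (round-trip km, round-trip hours) avoided by a telemedicine visit."""
    distance_km = 2.0 * one_way_km
    time_h = distance_km / HIGHWAY_SPEED_KMH
    return distance_km, time_h


if __name__ == "__main__":
    # Hypothetical one-way distances (km) for a handful of visits.
    one_way_distances = [350.0, 120.0, 610.0, 45.0]
    trips = [round_trip_savings(d) for d in one_way_distances]
    dists = [t[0] for t in trips]
    times = [t[1] for t in trips]
    print(f"mean round-trip distance avoided: {mean(dists):.0f} +/- {stdev(dists):.0f} km")
    print(f"mean round-trip time avoided:     {mean(times):.1f} +/- {stdev(times):.1f} h")
```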
Compared to other studies, the reasons for consultation via telemedicine markedly differed from the ones in our centre: one centre consisted of adverse drug reaction (66%), immunodeficiency (15%) and urticaria (5%), whereas the other centre consisted of allergic rhinitis (30%), asthma (24%) and food allergy (10%) [3,9]. Lastly, because there are very few centres that specialize in immunodeficiency care, our study showed that telemedicine allowed patients who lived in remote areas to be connected to the allergists/immunologists in urban centres and reduced the potential travel distance, similar to the reported travel distance in another study (718 km vs. 700 km, respectively) [9,10]. Telemedicine use in CIA has been increasing in the US, particularly in adverse drug reactions and immunodeficiency, resulting in rapid turnaround time and decreased wait time [3]. It also has been associated with significant time and cost savings, as well as high patient satisfaction compared to in-person visits [9]. Despite the evidence to support telemedicine for CIA care, it had been underutilized in Canada before the pandemic. There also has been a lower ratio of allergists/ immunologists to population compared to other medicine specialties in Canada, resulting in a longer wait time for an in-person visit compared to other medicine specialties [11,12]. Timely access to allergy care is important because appropriate diagnosis and/or management of various allergic diseases improve healthrelated quality of life [13]. In the era of growing demand for allergists/immunologists and increasing number of patients with allergic diseases, telemedicine would be a great tool to address this supply-demand mismatch. This is the first study to characterize the use of telemedicine to provide CIA care in Canada at both population and patient levels. There are several limitations that merit consideration. Firstly, because of the limited information in the administrative database, we were unable to assess certain clinical parameters such as patient demographics and diagnoses at the population level that were done at the patient level. Secondly, our study did not include patients from in-person visits for comparison because the study was purely descriptive. Although we could not comment on any differences in care between in-person and telemedicine visits, telemedicine has been shown to offer equal or at least non-inferior care compared to in-person visits [10,14,15]. Thirdly, because diagnostic codes were used to infer the reasons of assessment at each visit, we could not ascertain the accuracy of this inference and recognized that reasons for consultations may not always be the same as the diagnoses. Lastly, because patient level data were only available in one centre, the pattern of telemedicine use at our centre may not reflect its use at other centres. In conclusion, telemedicine was not widely used by allergists/immunologists in Ontario, Canada before the COVID-19 pandemic. It could offer a unique opportunity to connect patients who live in remote communities and allergists/immunologists who practice in urban centres in Canada. Independent of the current pandemic, our study further highlights the need for more physicians to adopt and continue telemedicine use as an alternative route to deliver care when in-person visits are less ideal, as well as for healthcare agencies to support its use as a strategic priority once the pandemic is over.
2021-12-13T14:23:09.094Z
2021-12-01T00:00:00.000
{ "year": 2021, "sha1": "6c81f6808a5b8f5c5c1a227fa3c6946f0143ab31", "oa_license": "CCBY", "oa_url": "https://aacijournal.biomedcentral.com/track/pdf/10.1186/s13223-021-00635-1", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6c81f6808a5b8f5c5c1a227fa3c6946f0143ab31", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "extfieldsofstudy": [ "Medicine" ] }
8116638
pes2o/s2orc
v3-fos-license
Measurement of the Z boson plus two b-jets cross section in CMS with 100 pb⁻¹ The cross section for production of Z bosons with two associated b-jets, and Z decaying to leptons, can be measured at the LHC with about 100 pb⁻¹ of data. We use simulated data to study possible strategies for an early measurement of this process with the CMS detector. The rate and kinematic properties of this final state need to be well understood because it constitutes a large fraction of the total backgrounds to several of the Higgs discovery channels at the LHC. Introduction At the LHC, Z/γ* can be produced in association with two b-quarks, gg/qq → bbZ, with a significant cross section [1], [2], [3]. The measurement of this process is an important test of QCD calculations and can also help to reduce the uncertainty of the supersymmetric bbH production cross section calculation [4], since the two processes are produced via similar partonic processes in the initial state. This process is also a background to Higgs boson discovery channels at the LHC, like Standard Model H → ZZ → 4ℓ and associated production of the SUSY Higgs boson bbϕ, ϕ → ττ (µµ) (ϕ = h, H, A). In the following, the possibility of observing and measuring the cross section of bbZ production at the LHC, using 100 pb⁻¹ of early data at the CMS experiment, is described, as studied in [5]. A related, but different cross section for pp → Z + b-jet has previously been measured at the Tevatron, both by D0 and CDF [6], [7]. Monte Carlo Signal and Background samples The signal events ℓℓbb (Zbb) were generated at parton level using the leading order (LO) CompHEP generator [8] and hadronized with PYTHIA [9], with generator-level cuts on the dilepton invariant mass (in GeV/c²) and |η_ℓ| < 2.5. The next-to-leading order cross section σ(ℓℓbb) = 45.9 pb (ℓ = e, µ, τ) was calculated with the MCFM program [10] applying the same generator-level cuts. The CTEQ6M parton density functions (pdf) and scale µ_R = µ_F = M_Z were used. The backgrounds considered for this process were Drell-Yan Z/γ* → ℓℓ (ℓ = e, µ, τ) production in association with two or more light-quark and gluon jets (Z+jets), ℓℓcc+jets (Zcc+jets) and tt+jets. Background samples were generated at leading order (LO) with the ALPGEN generator [11]. The tt+jets ALPGEN events were normalized to the NLO inclusive tt cross section of 840 pb [12]. The Zcc+jets events were normalized to the NLO MCFM cross section of ℓℓcc, 13.29 pb, applying the ALPGEN production cuts: p_T^c > 20 GeV/c, |η_c| < 5, m_ℓℓ > 40 GeV/c². The same pdf and scale settings as for the ℓℓbb process were used. The Z+jets events were normalized to the NLO MCFM cross section of ℓℓ + 2 jets, 714 pb, applying the ALPGEN production cuts: p_T^j > 20 GeV/c, |η_j| < 5, m_ℓℓ > 40 GeV/c². Signal and background samples were passed through the full simulation and reconstruction chain of CMS, under an imperfect calibration and alignment configuration assumed to be typical of the first 100 pb⁻¹ of data at CMS. Event Selection The events were selected by the CMS Level-1 (L1) and High-Level (HLT) single isolated electron and muon triggers designed for the low luminosity period (L = 10³² cm⁻² s⁻¹). Offline, events with at least two isolated and oppositely charged electrons or muons, with p_T^(e,µ) greater than 20 GeV, |η_e| < 2.5 and |η_µ| < 2.0, were selected to form the Z boson candidate. Events are selected with two or more jets, reconstructed using the iterative cone algorithm with cone size 0.5, with corrected jet E_T greater than 30 GeV and jet |η| < 2.4.
The η cut ensured good-quality b-tagging. Figure 1 (left) shows the distribution of the di-lepton invariant mass for the electron final state events with at least two jets passing the lepton and jet selections. The b-tagging is an important and effective tool to ensure the purity of Zbb events and to reduce the Z+jets and Zcc+jets backgrounds, since the b-tag discriminator values for light-quark, gluon and c-quark jets tend to be lower than those of b-quark jets. The events were double b-tagged using 'track counting' b-tagging, which uses the 3D track impact parameter significance of the 3rd highest significance track in the jet as the b-tagging discriminator. A b-tag discriminator value of 2.5 has been used in the analysis, which ensured high purity of b-tagged jets. The possibility of applying selections on the amount of missing transverse energy in the event, E_T^miss, is exploited to suppress tt+jets background events, since tt+jets events contain neutrinos from W → ℓν decays and have higher E_T^miss than Zbb events. A selection cut of E_T^miss less than 50 GeV is applied, where E_T^miss is reconstructed using calorimeter tower information and calibrated using jet energy corrections and muon corrections. The dilepton invariant mass distributions of the signal and various background events passing all selection criteria are shown in Figure 2. The events are scaled to 100 pb⁻¹ of data. Background Estimation and Systematic Uncertainties The contribution of the tt+jets background events in the signal region of the di-lepton invariant mass distribution can be estimated from the data by extrapolating sidebands in the di-lepton invariant mass spectrum. The tt+jets background is estimated using the relation N_Z(tt) = (ε_Z(tt)/ε_noZ(tt)) × N_noZ(tt), where N_Z(tt) = 8 is the expected number of tt+jets events in the signal region within the m_ℓℓ mass window of 75-105 GeV/c², N_noZ(tt) = 27 is the measured number of tt events outside the signal region, ε_Z(tt) is the selection efficiency for tt+jets events in the signal region, and ε_noZ(tt) is the selection efficiency for tt+jets events outside the signal region. ΔN_Z(tt) is the uncertainty of the expected number of tt+jets events in the signal region, given by ΔN_Z(tt)/N_Z(tt) = 1/√N_noZ(tt). The uncertainty on the ratio ε_Z(tt)/ε_noZ(tt) is negligible in comparison to the statistical uncertainty of the number of events outside the signal region. The number of Zbb events (N_Zbb) and its uncertainty ΔN_Zbb for 100 pb⁻¹, after all selections except double b-tagging, are determined from a system of equations in the following quantities. N_Z^{before b-tag} = 4644 is the measured number of Z/γ* → ℓℓ events in the di-lepton mass window between 75 and 105 GeV/c² after all selections have been applied except double b-tagging; it receives a negligible contribution (less than 1%) from the tt background, as one can see from Figure 1 (right). N_Z^{after b-tag} = 38 is the measured number of Z/γ* → ℓℓ events after all selections including double b-tagging; it is defined after subtraction of the tt background described above. N_Zjj is the unknown number of ℓℓ+jets (jet = u, d, s, g) events before double b-tagging. N_Zcc is the unknown number of ℓℓcc events before double b-tagging. N_Zbb is the unknown number of ℓℓbb events before double b-tagging.
ε_b is the efficiency of double b-tagging for Zbb signal events, determined from the Monte Carlo simulation tuned to reproduce the b-tagging efficiency measured from the data; ε_c is the efficiency of double b-tagging for Zcc+jets background events, determined from the Monte Carlo; ε_ℓ is the efficiency of double b-tagging for Z+jets (jet = u, d, s, g) background events, determined from the Monte Carlo simulation tuned to reproduce the mistagging efficiency measured from the data. The number of unknown parameters is reduced to two, following the D0 analysis approach [7], by using the theoretical ratio of cross sections and the ratio of selection efficiencies, and by replacing N_Zcc by R × N_Zjj. The uncertainty of the σ(Zcc)/σ(Zjj) ratio due to the µ_R, µ_F scale variation and the JES and MET scales is considered. The equations are solved to evaluate the value of N_Zbb. The uncertainty on N_Zbb, ΔN_Zbb, is then calculated from the uncertainty of N_Z^{after b-tag} due to the tt background subtraction (δN_Z^{after b-tag} = 4.0%), the uncertainty of R, and the uncertainties of ε_b, ε_c and ε_ℓ. From the estimated number of N_Zbb events after all selections except b-tagging, the number of Zbb events before the lepton and jet selections and the cut on E_T^miss can be evaluated. The systematic uncertainties due to the jet energy scale, the E_T^miss scale, the lepton selections, b-tagging, the Monte Carlo dependence on p_T and η of the jets, and the luminosity measurement have been taken into account according to the values foreseen to be achieved in CMS with data corresponding to 100 pb⁻¹ of integrated luminosity. The statistical uncertainty is defined as ΔN_sel/N_sel = 1/√N_sel, where N_sel is the measured number of events after all selections (46), thus δN_sel = 14.7%. Conclusion The process pp → bbZ at the LHC needs to be well understood because of its importance as a background to SM and SUSY Higgs boson discovery channels. The possibility of measuring the bbZ, Z → ℓℓ process at CMS with the first 100 pb⁻¹ of data has been studied with a robust selection of leptons and b-jets. Possible methods of measuring backgrounds from data have been discussed. The statistical and systematic uncertainties have been discussed.
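Both data-driven numbers quoted above, the tt estimate in the Z mass window and the final statistical uncertainty, are one-line calculations. The sketch below reproduces them from the values given in the text (27 tt events measured outside the 75-105 GeV/c² window and 46 events after all selections); the efficiency ratio ε_Z(tt)/ε_noZ(tt) is not quoted explicitly, so the value used here is a hypothetical placeholder chosen only to roughly reproduce the stated expectation of about 8 events.

```python
import math

# Data-driven tt+jets estimate in the Z mass window (sideband extrapolation):
#   N_Z(tt) = (eps_Z / eps_noZ) * N_noZ(tt),  dN_Z(tt)/N_Z(tt) = 1/sqrt(N_noZ(tt))
N_noZ_tt = 27        # measured tt events outside the 75-105 GeV/c^2 window
eff_ratio = 0.30     # hypothetical eps_Z/eps_noZ placeholder; not quoted in the text
N_Z_tt = eff_ratio * N_noZ_tt
rel_unc_tt = 1.0 / math.sqrt(N_noZ_tt)
print(f"tt in signal region: {N_Z_tt:.1f} +/- {N_Z_tt * rel_unc_tt:.1f} events "
      f"({100 * rel_unc_tt:.0f}% relative uncertainty)")

# Statistical uncertainty of the final selection: dN_sel/N_sel = 1/sqrt(N_sel)
N_sel = 46
rel_unc_sel = 1.0 / math.sqrt(N_sel)
print(f"statistical uncertainty on N_sel = {N_sel}: {100 * rel_unc_sel:.1f}%")
```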
2009-03-10T15:56:58.000Z
2009-03-10T00:00:00.000
{ "year": 2009, "sha1": "a4c802b29b315066b288aa601b23e7a05411ef91", "oa_license": "CCBYNCSA", "oa_url": "https://pos.sissa.it/055/065/pdf", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "1911072e64a766e8ede758f2aca9b0ef587e9028", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
42034253
pes2o/s2orc
v3-fos-license
Tuberculosis in kidney transplant recipients: A case series Solid organ transplant recipients have an elevated risk of tuberculosis (TB) with high mortality. Data about TB in this population in the United States is sparse. We present four cases of active tuberculosis in kidney transplant recipients at our center. All patients had possible TB exposure prior to transplant and all were diagnosed with active TB within the first year of transplant. Disseminated TB was seen in half of the patients with extra-pulmonary TB being more common affecting lymph nodes, pericardium, and the kidney allograft. Delay in diagnosis from onset of symptoms ranged from fifteen days to two months. Two patients died from TB. TB is a largely preventable and curable disease. However, challenges remain in the diagnosis due to most recipients presenting with atypical symptoms. Physicians should maintain a high degree of suspicion for TB to promptly diagnose and treat post-transplant thereby minimizing complications. A review of the literature including the epidemiology, pathogenesis, clinical presentation, diagnosis and treatment options are discussed. INTRODUCTION The overall incidence and prevalence of mycobacterium tuberculosis (TB) in solid organ transplant recipients is not well defined. The rates of TB in this population are mostly based on data available from individual study cohorts reported in the literature. In the western world, TB is a rare opportunistic infection with significant morbidity and mortality. Clinical presentation in immuno compromised individuals, including transplant recipients is often atypical and diverse. This leads to delay in the diagnosis and advanced disease at the time of diagnosis. In addition, inadequate host response in this setting poses a treatment challenge. The higher toxicity of treatment and concurrent use of immunosuppressive medications with drug interactions further generate complexity in management. We describe four cases of active TB in our kidney transplant recipients and explore the epidemiology, clinical presentation, management and outcomes of TB disease in this population. Case 1 A 63yearold Vietnamese male with end stage renal disease due to IgA nephropathy received an expanded criteria deceased donor kidney transplant (DDKT) in 2012 (5 antigen mismatch, 5% panel reactive antibody, PRA). He received induction with alemtuzumab and solumedrol and was maintained on tacrolimus and mycophenolate mofetil. There were no surgical complications or episodes of acute rejection in the posttransplant period. Allograft function stabilized with a serum creatinine (Cr) of 1.8 mg/dl. His past medical history was notable for incarceration in Vietnam, prior hepatitis B exposure with protective anti Hepatitis B surface antibody, positive tuberculin skin test (TST) and a noncalcified nodule on chest Xray (CXR). He had been in the United States for twenty years prior to his transplant. He did not receive isoniazid (INH) prophylaxis before undergoing kidney transplant. At oneyear posttransplant, he was admitted with fever, palpitations and 3 cm nontender submental lymph node. Labs were notable for acute kidney injury (AKI) with Cr of 3 mg/dl and urinary retention that resolved with urinary catheter placement and treatment for an enlarged prostate. CXR revealed bilateral pleural effusions and a large pericardial effusion. Fine needle aspiration of the lymph node and pericardial fluid grew Mycobacterium tuberculosis (MTB). 
He received antitubercular therapy (ATT) with 2 mo of Rifampin, INH, Pyrazinamide and Ethambutol (RIPE) and 4.5 mo of INH and Rifampin (IR). His treatment course was complicated by transaminitis with reactivation of hepatitis B leading to end stage liver disease. He was treated with tenofovir with resolution of transaminitis. Patient completed 6.5 mo of ATT and has been cured of TB. His kidney transplant failed three years later due to BK nephropathy, and he was initiated on hemodialysis. Case 2 A 67yearold Caucasian male, Vietnam War veteran with ESRD presumed secondary to hypertension received a DDKT in 2013 (0 antigen mismatch, PRA 36%). He received induction with alemtuzumab and solumedrol and was maintained on tacrolimus, mycophenolate mofetil, and prednisone. Pretransplant CXR showed prior granulomatous disease. He was not tested for latent TB infection (LTBI). Two months after transplant, he was admitted with fever and progressive shortness of breath. CXR revealed a miliary pattern of infiltrates. He developed acute respiratory failure and septic shock requiring intubation and multiple vasopressors. The day after admission, sputum samples returned positive for acid fast bacilli (AFB), and later grew MTB. Clinical course was complicated by development of presumed macrophage activation syndrome (MAS). He received neupogen for pancytopenia but bone marrow biopsy could not be obtained due to agitation. He did not receive intravenous steroids or chemotherapy for MAS. Patient died within three days of admission. Case 3 A 38yearold Indonesian woman living in United States for ten years with ESRD due to IgA nephropathy on hemodialysis for 10 years received a DDKT in 2015 (6 antigen mismatch, PRA 0%). She received induction with alemtuzumab and solumedrol and was maintained on tacrolimus, mycophenolate mofetil, and prednisone. There were no surgical complications or episodes of acute rejection in the posttransplant period. Allograft function was excellent with serum Cr of 1.0 mg/dl. Pretransplant work up was notable for positive TST with normal CXR. She was started on INH immediately after transplant and received nine months of therapy for LTBI. One month after completing INH therapy, she was admitted with persistent fevers, night sweats and acute kidney injury, serum Cr of 2 mg/dl. Fever work up showed adenovirus in the blood and urine. There was increased flurodeoxyglucose uptake in the kidney allograft on positron emission tomography scan. Biopsy of the kidney transplant showed necrotizing granulomatous interstitial nephritis. Differential diagnosis of the granulomatous interstitial nephritis included renal transplant TB and adenovirus infection. Renal pathology changes were not consistent with adenovirus infection. AFB smear and cultures were negative in the urine and renal biopsy specimens. Due to persistent fevers, worsening renal function and clinical suspicion for TB, she was started on RIPE and Moxifloxacin. Moxifloxacin was added as a fifth agent due to concern for INH resistance given she was treated with INH monotherapy for LTBI. Fevers, night sweats and AKI resolved on treatment without addition of cidofovir, which supported the diagnosis of renal transplant TB. Her IS was modified with discontinuation of MMF. She is currently maintained on tacrolimus and prednisone. She completed 6 mo of ATT and is cured of TB. Renal allograft function is stable with Cr of 1.3 mg/dl. 
Case 4 A 67yearold Caucasian male with ESRD, secondary to diabetes mellitus on hemodialysis for 2 years received a DDKT in 2015 (4 antigen mismatch, PRA 0%, A2 to B kidney). He received induction with alemtuzumab and solumedrol and was maintained on tacrolimus, mycophenolate mofetil, and prednisone. His pretransplant CXR showed calcified lung nodules, and he had a negative interferon gamma release assay (quantiferon gold). He presented two and a half months' posttransplant with two weeks of intermittent fever, malaise, progressive dyspnea and lower extremity swelling. He was diagnosed with bilateral lower extremity deep vein thrombus and pulmonary embolism for which anticoagulation was initiated. Due to intermittent fevers, computed tomography (CT) of the chest was done that showed a few scattered sub centimeter noncalcified pulmonary nodules and a 2 cm right paratracheal lymph node concerning for granulomatous disease. Fungal testing including serum galactomannan, serum cryptococcal antigen, betaDglucan levels and urine histoplasma antigen, was negative. Bronchoscopy was performed with AFB stain positive in the bronchoalveolar lavage (BAL). AFB and nonnecrotizing granulomas were seen on trans bronchial lung biopsy. MTB complex polymerase chain reaction (PCR) was positive in both the BAL and blood, and cultures from both grew MTB. Sputum cultures later grew pan susceptible MTB. He was discharged on a fourdrug regimen with RIPE. Two weeks later, he was readmitted with recurrence of fever, altered mental status and partial loss of vision. Repeat CT of the chest showed worsening bilateral pulmonary infiltrates. Moxifloxacin was added to his regimen. ATT drug levels were obtained and found to be therapeutic. Sputum, urine and blood cultures returned negative for AFB. Neurology work up including magnetic resonance imaging of the brain and lumbar puncture was negative. Patient developed AKI with serum Cr of 3 mg/dl. Ethambutol dose was decreased from 1600 mg daily to 1600 mg every 36 h and pyrazinamide dose was lowered from 2000 mg daily to 2000 mg thrice weekly. Ethambutol was subsequently discontinued due to worsening visual changes and amikacin was added to the treatment regimen of isoniazid, rifampin, pyrazinamide and moxifloxacin. His IS was ultimately tapered to prednisone alone due to worsening of TB with persistent fever and progressive pulmonary infiltrates. Renal allograft function continued to decline likely due to tapering off IS and aminoglycoside toxicity ultimately leading to allograft failure. He was started on hemodialysis 4 mo after initiation of ATT and died three months later. Epidemiology Even before MTB was discovered, Laennec described the diseased lung cavities on autopsies. Historically this was referred to as "consumption" owing to significant weight loss and finally death that consumed patients. In 1839, Johann Schonle coined the term tuberculosis from the latin word "tuberculum" which means small pimple or a bump. The bacillus was identified by Robert Koch as Mycobacterium tuberculosis on March 24, 1882 which is commemorated as World TB day. The global TB incidence and prevalence has been declining per the most recent WHO Global TB report [1] . The incidence of TB globally is 18% lower in 2014 as compared to 2000 and TB prevalence is 42% lower as compared to 1990. TB mortality has also fallen 47% since 1990. The incidence rate is highest in South East Asia and the Western Pacific and lowest in Western Europe, Canada, United States, Australia and New Zealand. 
The CDC Morbidity and Mortality Report in early 2016 shows leveling of TB incidence in the United States at 3 cases per 100,000 persons in 2013-2015, after two decades of annual decline [2]. Approximately 70% of the cases are in foreign-born individuals, with Mexico, the Philippines, India, Vietnam and China accounting for the top five countries of origin. In our case series, two out of four patients were from Southeast Asia, which is considered endemic for TB. Among those born in the United States, native Hawaiians/other Pacific Islanders have the highest incidence, followed by American Indians and Alaskan Natives. Almost half of all reported TB cases in the United States are reported from California, Florida, New York and Texas. The TB incidence in foreign-born individuals has been steadily declining compared to a stabilization of TB incidence among those born in the United States, pointing to ongoing TB transmission within the United States. This has been confirmed by molecular genotyping. Risk factors for TB outbreaks include substance abuse, incarceration and homelessness. Data regarding the prevalence and incidence of TB in solid organ transplant recipients are sparse. Prevalence of active TB is estimated to be 1.2%-6.4% in developed countries and up to 15% in highly endemic areas [3]. A study in 1998 estimated a 0.35%-1.2% incidence in renal transplant recipients in the United States [4]. Risk in solid organ transplant recipients is estimated to be 20-74 times higher than in the general population, with a high mortality rate of up to 30%. Mortality of TB is higher in patients with disseminated disease, prior rejection and those who received anti-T cell antibody therapy [4]. Another study found higher mortality with graft rejection, steroid treatment and concomitant other opportunistic infection [3]. Diabetes mellitus and chronic liver disease have also been associated with greater mortality [5]. Our case series shows a mortality of 50%. Half of our TB cases had disseminated disease. All four patients received anti-T cell antibody therapy and three were on steroids. Half of our patients had diabetes mellitus. Baseline characteristics of our patients are listed in Table 1. Over 50% of renal transplant recipients develop TB within the first year of transplant [4]. Disseminated disease is defined as involvement of two or more non-contiguous organs with positive TB cultures, with or without granulomas [4]. CXRs in post-transplant TB show diffuse pulmonary infiltrates rather than the cavitary lesions that are more commonly seen in the general population [7]. In our case series, fever was present in all four patients. Cervical lymphadenopathy was seen in one patient. Disseminated TB was seen in two of the four patients, with extrapulmonary involvement of lymph nodes, pericardium and the renal allograft. Two patients had pulmonary TB and one of them had disseminated disease. Only one presented with cough. Patients with pulmonary involvement showed a miliary pattern and bilateral diffuse pulmonary nodules on CXR. Diagnosis and pre-transplant screening Diagnosis of latent TB is an indirect measure of possible infection by detection of a cellular response to MTB-specific antigens in the absence of symptoms. The two types of tests are in vivo: tuberculin skin test (TST) done by intradermal injection of purified protein derivative (PPD); and ex vivo: IGRA (QuantiFERON Gold test or T-SPOT.TB test). PPD is a glycerol extract of the tubercle bacillus and is not species specific.
Induration of 5 mm or more is considered to be positive in transplant candidates. If the first test is negative, a followup second test is recommended two weeks later. This leads to a "booster effect" due to amnestic recall of immunity and can identify an additional 10% of cases [9] . Limitations of PPD testing include a higher rate of false negatives in the immunocompromised host, confounding by non tubercular mycobacteria and prior BCG vaccination, and need for trained staff and a second visit for interpretation of the test by a qualified provider. IGRA utilizes sensitized T cells that release interferongamma. The advantages of IGRA over PPD include improved specificity due to MTB specific antigens that do not crossreact with BCG and the use of positive and negative controls that may differentiate true negatives from those that result from anergy or overt immunosuppression [10] . Performance of IGRA is better in low prevalence countries as compared to endemic in those with prior TB exposure [3] . Markers for prior infection include cellular response to TB specific antigens (positive TST or interferon gamma release assay, IGRA) or sequelae of granulomatous infection on CXR. Older patients are more likely to have reactivation following transplantation than younger patients, particularly in the developed world. All of our cases had prior TB exposure and developed TB early after transplant, half developed disease within the first 3 mo following transplant. Factors predisposing to TB both in the general po pulation and transplant recipient include country of origin, history of untreated latent TB infection, cigarette smoking, body mass index < 18.5, diabetes mellitus, chronic kidney disease, chronic liver disease, lupus, human immunodeficiency virus, silicosis, gastrectomy, jejuno ileal bypass, as well as social risk factors (homelessness, incarceration, alcoholism and known TB contact) [6,7] . The main predisposing factor in our center's experience was residence from or previous travel to an endemic region (Table 2). Pathogenesis TB is usually acquired via inhalation of bacilli into the lungs. Progression to clinical disease depends on the infecting dose and virulence of the Mycobacteria as well as the development of host cell mediated immunity. The most common reason for posttransplant TB is reactivation of previous infection. In patients with prior exposure, the risk is generally inversely related to the time from acquisition to transplantation. Rarely, TB can be donorderived and transmitted through the transplanted organ. TB can be acquired posttransplant, more commonly in TB endemic countries, or nosocomial as part of outbreaks in renal transplant units [8] . Clinical presentation The clinical presentation of TB in transplant recipients differs from the general population in that symptoms are more unusual and varied, often leading to a delay in diagnosis and poor outcomes. Fever is seen more commonly, and approximately 30%50% of TB after transplant is extrapulmonary or disseminated [4,7] . Disseminated [11] . Both these tests cannot differentiate between latent TB and active TB. ESRD and immunosuppressant use are responsible for a higher rate of false negative or equivocal results of immune based Tcell assays. Uremia is associated with impaired costimulatory function of the antigenspecific Tcells leading to a defect in Tcell function. One of our transplant recipients had a negative IGRA in the presence of calcified nodules on chest imaging. 
Immunosuppressants such as T-cell depleting antibodies, corticosteroids and calcineurin inhibitors cause a reduction in the number of T cells, affect their interaction with antigen-presenting cells and impair cytokine induction [12]. Diagnosis of TB in transplant recipients is often delayed. In our case series, delay in diagnosis from onset of symptoms ranged between fifteen days and two months. Diagnosis of active TB is made by demonstration of AFB on smear microscopy and isolation of mycobacteria in culture of the body fluid. AFB blood cultures should be done if there is a suspicion for disseminated TB. For pulmonary TB, three samples of sputum are sent 8-24 h apart, with at least one being an early morning sample. Sputum induction with aerosolized hypertonic saline can be employed for patients who are unable to expectorate. Invasive diagnostic tests such as bronchoscopy with bronchoalveolar lavage may be necessary for diagnosis. Sensitivity and specificity of sputum AFB smear microscopy are 45%-80% and 50%-80%, respectively [13]. Sensitivity and specificity of sputum culture are 80% and 98%, respectively [14,15]. Cultures need to be incubated for 6-8 wk to isolate MTB. Drug susceptibility testing should be done on all positive MTB cultures. Nucleic acid amplification (NAA) assays are available for rapid diagnosis of TB. These tests can be done from cultures or direct tissue samples. The Centers for Disease Control (CDC) recommends sending the first sputum sample for NAA testing. These assays can detect target-specific MTB complex RNA/DNA sequences with nucleic acid probes in 24-48 h. The Xpert MTB/RIF test is an automated NAA test that simultaneously detects rifampin resistance in two hours. Rifampin resistance is a marker of multidrug-resistant (MDR) TB. Sensitivity and specificity of NAA tests in AFB smear-positive respiratory secretions are over 95% and are not affected by non-tuberculous Mycobacteria (NTM) or immunosuppression. They have lower sensitivity, 75%-85%, in smear-negative sputum [16-18]. These tests should be performed within the first few days of ATT, and a negative NAA test does not exclude TB. Cultures are still required for species identification and for drug susceptibility testing. NAA assays do not perform as well for other clinical specimens, and the overall evidence regarding their use in transplant patients is lacking at this time. Tissue biopsy of the involved organ and/or fluid for histopathology evaluation, AFB smear and culture should be obtained in suspected extrapulmonary TB. In our case series, we diagnosed TB disease if any of the following criteria were met: (1) isolation of MTB in culture of sputum, blood or any body fluid, with or without detection of AFB on smear; (2) clinical response to ATT in a patient with fever of unknown origin or compatible clinical syndrome with radiographic and histopathological features suggestive of TB, including tissue sample with granulomas; and (3) presence of MTB DNA using PCR. Pre-transplant screening of donor and recipient for TB infection should be rigorous given the high risk of TB in the transplant setting and significant associated mortality. In transplant candidates and living donors, thorough history taking and comprehensive physical examination should be performed with a special focus on the medical and social risk factors for TB mentioned earlier.
History of TB exposure is most essential and one should inquire about residence and travel history to endemic areas, contact with a known active TB case, and prior TST testing results. In patients with a history of prior LTBI or TB, details regarding treatment regimen and duration are essential, and active TB in these individuals should be excluded. These patients may need additional testing and consultation with a transplant infectious disease specialist. Donors with active TB within 2 years have higher risk of relapse and transmission via the allograft [6] . Patients without prior history of known LTBI or TB disease should undergo testing for LTBI with a PPD test or IGRA. If the first PPD test is negative, a second skin test is recommended for booster effect as discussed earlier. A CXR is part of routine preoperative screening and should be evaluated for evidence of prior granulomatous disease. Patients with positive PPD or IGRA should be treated for LTBI prior to transplantation, whenever possible, after exclusion of active TB. Individuals with low risk of TB based on history and negative testing are cleared for transplantation. In high risk patients with negative TST/IGRA, indeterminate IGRA or chest imaging suggestive of prior granulomatous disease, it is recommended to treat with INH for presumed LTBI, prior to transplantation. Active TB needs to be ruled out by appropriate smears, cultures and molecular testing before treatment for latent TB is initiated. In highrisk patients, urine for AFB and renal imaging should also be performed to rule out genitourinary TB [19] . In our case series, two patients had known LTBI by PPD/IGRA but did not receive INH prophylaxis prior to the transplant. One of the patients received INH prophylaxis immediately posttransplant. One patient was not tested for LTBI, but was high risk based on prior exposure history and a CXR with old granulomatous changes. Interestingly, one recipient tested negative by IGRA and was low clinical risk. He had calcified nodules on imaging and later developed TB disease. Pretransplant evaluation is challenging in deceased donors given the limited history available. Efforts should be made to obtain a history regarding prior TB exposure, TB disease and treatment from family and healthcare givers. The evaluation is similar to living donors as above, prior to accepting the organ. In donors with a history of TB and reliable information about completed ATT, appropriate smears, cultures and molecular testing should be done to rule out active disease. In deceased donors with a history of TB disease and insufficient information about treatment or positive testing, it is recommended to reject the donor except in urgent transplants. In this scenario, recipients should be treated for active TB after informed consent with close monitoring under the guidance of an infectious disease specialist [6,8] . Management Direct evidence regarding management of transplant recipients for prevention and treatment of latent and active TB infection is lacking. Their care is largely based on expert opinion and extrapolation from studies in immunecompetent and other immunocompromised populations. Indications for treatment of LTBI in recipient candidates include a positive TST/IGRA as well as those with a negative TST/IGRA or indeterminate IGRA with risk factors: Radiographic evidence of prior TB in the absence of treatment, donor with recent TB exposure, positive TST or radiographic signs, or close prolonged contact with an active TB case [8] . 
Before treatment of LTBI, active TB needs to be excluded. One recipient in our case series with a positive PPD received INH prophylaxis soon after transplant for 9 mo. However, a month after finishing INH, she developed renal allograft TB. This patient was asymptomatic, but cultures were not obtained prior to initiation of prophylaxis. The other explanations for the development of active TB include possible low levels of INH due to concomitant steroids and inadequate host response in the setting of immunosuppressant use posttransplant. Treatment regimens for LTBI include INH 5 mg/kg daily (maximum dose 300 mg/d) for 9 mo with pyridoxine 2550 mg daily to prevent neurotoxicity. INH 15 mg/kg twice weekly (maximum dose 900 mg/d) with pyridoxine, given as directly observed therapy has also been proposed. Rifampin 10 mg/kg daily (maximum dose 600 mg/d) for four months may be used prior to transplant but should be avoided if possible after transplant due to drug interaction with the immunosuppressant medications. Combination of pyrazinamide and rifampin daily for 2 mo is not recommended due to the high risk of hepa totoxicity in the transplant population. A shorter regimen of weekly INH and rifapentine for 12 wk, as directly observed therapy, to treat immune competent individuals is not recommended in renal transplant candidates [8] . Compliance with LTBI treatment is poor as seen in a North American study where only half of the patients initiated on therapy finished the complete course of treatment [6] . If treatment is interrupted for more than two months, patients should be excluded again for active TB [12] . Adverse effects are more common in solid organ transplant recipients with hepatotoxicity seen in 37% of kidney recipients and up to 50% in liver transplant recipients [8,20] . Monitoring should involve monthly phy sician examination and bimonthly blood levels of liver function tests. Medications will need to be discontinued or dose adjusted if liver function tests are more than three times the upper limit of normal with symptoms/signs, or more than five times the upper limit of normal without symptoms [12] . Treatment of active drug susceptible TB usually involves two months of an initial phase therapy with INH, rifampin/rifabutin, pyrazinamide, +/ ethambutol, followed by a continuation phase therapy of four months of INH and rifampin, with a total duration for six months. Cavitary TB, with positive sputum culture after two months of intensive phase therapy, is treated for nine months' duration with prolongation of continuation phase therapy. Bone and joint disease as well as severe disseminated disease are treated for a total of six to nine months. Central nervous system disease warrants treatment duration of nine to twelve months [8] . Since the majority of transplant recipients present with severe disseminated TB, 9 mo or longer duration of treatment may be preferred in the presence of response to ATT. Risk of recurrence was found to be lower when treatment is extended to beyond 12 mo [12] . Longer course of therapy is required if second line drugs are used MDR and extensively drug resistant TB fortunately has been rarely reported in solid organ transplant recipients. This should be treated according to drug susceptibility testing with at least four active drugs. The World Health Organization (WHO) suggests a total treatment duration of 18 mo after culture conversion. Adjunctive surgery may be required in some patients [12] . 
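The weight-based LTBI regimens above all follow the same pattern: a per-kilogram dose capped at a fixed maximum. The sketch below merely illustrates that arithmetic with the figures quoted in the text (INH 5 mg/kg daily capped at 300 mg/d, INH 15 mg/kg twice weekly capped at 900 mg per dose, rifampin 10 mg/kg daily capped at 600 mg/d); the body weight is hypothetical and the code is an illustration only, not clinical guidance.

```python
# Illustrative dose-cap arithmetic for the weight-based LTBI regimens quoted
# in the text. Not clinical guidance.

def capped_dose(weight_kg: float, mg_per_kg: float, max_mg: float) -> float:
    """Weight-based dose limited to the regimen's stated maximum."""
    return min(weight_kg * mg_per_kg, max_mg)

if __name__ == "__main__":
    weight = 70.0  # hypothetical patient weight in kg
    print(f"INH daily:        {capped_dose(weight, 5, 300):.0f} mg")
    print(f"INH twice weekly: {capped_dose(weight, 15, 900):.0f} mg per dose")
    print(f"Rifampin daily:   {capped_dose(weight, 10, 600):.0f} mg")
```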
In the United States, patients with pulmonary TB have sputum cultures obtained monthly until two consecutive cultures are negative, and at two months of intensive phase therapy to further guide treatment. If the sputum culture at two months of treatment is positive, WHO recommends sputum smear microscopy at the end of the third month and if positive, sputum culture and drug susceptibility testing. Drug susceptibility testing should also be done if a patient develops positive cultures after a period of negative cultures. European guidelines in transplant recipients recommend sputum smear and culture at a minimum of two months and four months of treatment, at the end of ATT, and on two further occasions until the end of 12 mo [12] . Extrapulmonary TB in general is followed clinically. Patients should have baseline laboratory data including a comprehensive metabolic panel, complete blood counts, and uric acid levels. They should be monitored and managed for hepatotoxicity as described above. Baseline and monthly visual acuity and redgreen discrimination testing should be done with ethambutol use. If one suspects pulmonary TB, the patient should be isolated in a negative pressure room until active TB is excluded. Pulmonary TB patients should be isolated for at least two weeks with clinical improvement on therapy and until three consecutive negative sputum smears are obtained. In immunocompetent patients, rapid testing with Xpert MTB/Rif has been used in conjunction for decisions regarding discontinuation of TB isolation. However, this cannot be recommended in the transplant population at this time. Drug interactions Patients need to be monitored closely for drug interactions with immunosuppressive medications used in solid organ transplant given the increased risk of rejection. Rifampin is used in treatment of TB due to its potent MTB sterilizing action. Rifampin is a strong inducer of CYTP3A4 leading to increased metabolism of calcineurin inhibitors, mammalian target of rapamycin (mTOR) inhibitors, mycophenolate mofetil and corticosteroids. Rifabutin is a less potent cytochrome inducer. Drug levels need to be monitored closely at initiation of TB therapy, after discontinuation of rifampin or rifabutin, or with any adjustment of immunesuppressant dosing [8] . Spanish guidelines recommend rifamycin free regimens for treatment, except for disseminated TB and INH resistant TB [19] . We prefer rifamycin based regimens for treatment of TB in our renal transplant recipients. Other drug interactions to consider include the following: INH may increase corticosteroid levels and its adverse effects, streptomycin with cyclosporine and sirolimus may cause additive nephrotoxicity, fluoroquinolones can further increase risk of tendon rupture with concomitant corticosteroids, and corticosteroids may decrease INH levels [12] . Complications Complications of TB besides primary organ involvement include septic shock, venous thromboembolism (VTE), immune reconstitution inflammatory syndrome (IR IS) and macrophage activation syndrome (MAS) or hemophagocytic syndrome [21,22] . Septic shock with TB is associated with high mortality [23] . Pulmonary and extrapulmonary TB both predispose to VTE with the risk being much higher than other hospitalized patients, in general [24,25] . IRIS is recognized by the paradoxical symptom worsening of fever, cough, enlarging lymph nodes or worsening of findings on imaging after initiation of treatment. This is seen primarily in the first few months of initiation of therapy. 
MAS is rare and has high mortality. It manifests as fever, hepatosplenomegaly, pancytopenia and liver abnormalities. Diagnosis is usually made by bone marrow biopsy showing infiltration of non-malignant macrophages phagocytizing red blood cells [12,21]. In our case series, one patient presented with septic shock and presumed MAS, succumbing to his illness. The other patient presented with VTE and developed IRIS two months after initiation of ATT. In conclusion, TB remains a challenging opportunistic infection in the solid organ transplant population. Efforts should be made to prevent active TB via recognition and treatment of LTBI in potential donors and transplant candidates, ideally prior to transplantation. Current tests for LTBI (PPD and IGRA) can be falsely negative in patients with ESRD and those on immunosuppressive medications. IGRA has not been evaluated for use in deceased donors. There is a need for better diagnostics for LTBI. Exclusion of active TB is of paramount interest prior to LTBI therapy, by culture, smear, imaging and molecular testing as needed. Given the changes in the allocation system, older and longer dialysis vintage recipients are being transplanted, increasing the risk of active TB. Due to the organ shortage, with more high-risk donors being utilized, the risk for donor-derived TB might increase as well. More widespread use of rapid NAA assays and line probe assays is needed to screen high-risk TB donors, and for diagnosis of TB in recipients. As disseminated and extrapulmonary disease are more common in transplant recipients, studies are needed to assess the performance of NAA assays in body fluids other than sputum in this population. Given diagnostic limitations, physicians need to maintain a high clinical suspicion for TB post-transplantation in order to initiate early treatment and decrease morbidity and mortality. Studies are needed to investigate the efficacy of shorter treatment regimens given the interactions with immunosuppressive medications and significant adverse effects. Lastly, public health efforts are needed both at the [...]

Case characteristics: Four kidney transplant recipients, aged 38-67 years, presenting with fever within one year of kidney transplantation.
Clinical diagnosis: Lymphadenopathy, pleural effusion, pericardial effusion, acute respiratory failure, septic shock, acute kidney injury, bilateral lower extremity deep venous thrombosis and pulmonary embolism.
Differential diagnosis: Bacterial infections, fungal infections such as histoplasmosis and cryptococcosis, interstitial nephritis due to adenovirus infection, post-transplant lymphoproliferative disorder.
Laboratory diagnosis: Demonstration of acid-fast bacilli in sputum and bronchoalveolar lavage. Mycobacterium tuberculosis grew in cultures from sputum, blood, lymph node aspirate and pericardial fluid. Positive Mycobacterium tuberculosis PCR in blood and bronchoalveolar lavage.
Imaging diagnosis: Radiological features included calcified/non-calcified lung nodules, diffuse lung infiltrates, pleural effusion, lymphadenopathy, pulmonary embolism and increased fluorodeoxyglucose uptake in the kidney allograft on positron emission tomography scan.
Pathological diagnosis: Necrotizing and non-necrotizing granulomas seen on kidney allograft and transbronchial lung biopsies, respectively. Demonstration of acid-fast bacilli on lung biopsy.
Treatment: Two months of Rifampin, Isoniazid, Ethambutol and Pyrazinamide followed by 4 mo of Rifampin and Isoniazid.
Second-line drugs moxifloxacin and amikacin were used in selected cases.
Related reports: Tuberculosis in solid organ transplant recipients is rare in the developed countries. A study in 1998 estimated 0.35%-1.2% incidence in the United States.
Term explanation: Tuberculosis is a rare opportunistic infection caused by the acid-fast bacillus Mycobacterium tuberculosis, which was identified by Robert Koch in 1882.
Experiences and lessons: Tuberculosis should be considered in solid organ transplant recipients presenting with unexplained fever to avoid delayed or missed diagnosis. TB carries high morbidity and mortality. Transplant recipients should have comprehensive screening for risk factors for TB along with testing for latent TB. Active TB needs to be ruled out prior to the treatment of latent TB. Ideally, patients should be treated for latent TB prior to transplant due to drug interactions and suboptimal response to therapy in the setting of immunosuppression.
2018-04-03T05:13:32.138Z
2017-06-24T00:00:00.000
{ "year": 2017, "sha1": "b873ccab9ec7ada261a79d7444a8e1dad4b4ee6e", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.5500/wjt.v7.i3.213", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "b873ccab9ec7ada261a79d7444a8e1dad4b4ee6e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
56320769
pes2o/s2orc
v3-fos-license
Study on new method for modal parameters identification of stiffened plate with four clamped edges
In order to identify the vibration characteristic parameters of a nonlinear stiffened plate structure accurately and rapidly, the Rayleigh-Ritz method is used for modeling. First, the stiffened plate with four clamped edges is divided into the plate and the stiffeners. The plate is considered to be geometrically nonlinear, and the stiffeners are treated as Euler beams in order to obtain the deformation energy and kinetic energy. Then, the natural frequencies of the stiffened plate are solved by the Rayleigh-Ritz method. Finally, a stiffened plate structure in the laboratory is selected as the research object, and the results of the Rayleigh-Ritz calculation are compared with the ANSYS and swept-frequency test results. The results show that the solution presented in this paper is correct, which lays a good foundation for further nonlinear active control experiments.
Introduction
In order to improve the rigidity and strength of plates, stiffened plate structures are widely used in industrial fields, especially in the aerospace, shipbuilding and automobile industries. This structure has the advantages of light weight and high stiffness [1]. Therefore, in order to better understand the characteristics of the stiffened plate, it is necessary to analyze its vibration characteristics. At present, published results on the vibration analysis of stiffened plates are scarce; most of them rely on FEM (Finite Element Method) software and experimental methods, while purely numerical treatments of the stiffened plate are rarely reported. Cho [2] proposed a method based on FEM and the assumed-mode approach for the inherent modal analysis of stiffened plate structures, but FEM software methods still have deficiencies, such as serious distortion when an inappropriate element type or mesh is selected. Liu [3] studied the influence of different arrangements of stiffening ribs on the vibration of stiffened plates, and Qiu [4] presented a method for the ultimate strength analysis of two-way orthogonally stiffened plates. The laser method, the frequency-sweep method and the tapping method are frequently used in experiments; Li Xiao-jun [5], Li Shengquan [6] and Hu Yuyong [7] respectively used the sweep, laser and tapping methods to analyze the natural vibration frequencies of stiffened structures. Although the experimental methods are relatively accurate, they require the sensor receiving the test data to be attached accurately, and it is not easy to attach the sensor at the appropriate location. For numerical methods, Zhou [8] proposed an analysis of stiffened plate structures based on the difference method, N. Nguyen-Minh [9] studied the free vibration of folded plates by the cell-based smoothed discrete shear gap method, Zhong Ming [10] studied rectangular plates from a theoretical angle, and Ma Niujing [11] presented an analytical Lagrange-equation method for the nonlinear free vibration of stiffened plates with four clamped edges. However, reports on piezoelectric stiffened plates show that the numerical calculation process is often complicated.
In view of these issues, the Rayleigh-Ritz method is used in this paper to simplify the numerical frequency analysis of the stiffened plate. Relative to other numerical methods, the Rayleigh-Ritz method is applied creatively to the modeling of the frequency analysis problem. It greatly simplifies the calculation process of the model. At the same time, it guarantees the correctness of the analysis and provides a theoretical basis for engineering application and for comparison with experimental analysis.
Vibration equation of stiffened plate
The stiffened plate structure is shown in Figs. 1-2, where a, b and h denote the length, width and height of the stiffened plate structure and w(x, y, t) is the flexural function of the neutral plane of the stiffened plate. Considering the geometrically nonlinear plate, each stiffening bar can be treated as an equivalent Euler beam element [11], and the deformation energies of the stiffening bars can then be expressed accordingly, where i = 1, 2, ..., m and j = 1, 2, ..., n index the stiffeners in the two directions and EI is the bending rigidity. The kinetic energies of the stiffening bars can be expressed in the same way. Based on the Rayleigh-Ritz method, the natural frequencies are obtained from the total deformation energy U and the total kinetic energy T, each of which is the sum of the contribution of the plate and the contributions of the stiffeners in the two directions.
ANSYS analysis for stiffened plate
The plate is modeled with Shell63 elements, with a density of 2700 kg/m3, an elastic modulus of 72 GPa and a Poisson's ratio of 0.3. The element grid is formed by horizontal and vertical line segments, and the modal analysis results are shown in Figs. 3-4.
Experiment analysis for stiffened plate
The experimental system is shown in Fig. 5. Under a sinusoidal sweep excitation signal of 0-500 Hz, a piezoelectric acceleration sensor collected the time-domain signal shown in Fig. 6(a), and Fig. 6(b) is the frequency-spectrum analysis of Fig. 6(a). The results are summarized in Table 2, covering theoretical calculation, ANSYS analysis and experimental testing. Table 2 shows that the ANSYS and experimental results are larger than the theoretical results; the main reason is that the effect of the small torsional rigidity on the natural frequency of the stiffened plate was ignored in order to simplify the complex calculation [12, 13]. The experimental results are also smaller than the ANSYS results, mainly because the boundary of the stiffened plate could not be clamped completely in the experiment, whereas the ANSYS analysis assumes ideal, completely fixed boundaries.
Conclusions
The natural frequencies of stiffened plates with four clamped edges are solved by the Rayleigh-Ritz method. Comparison with the ANSYS and experimental results shows that the theoretical derivation based on the Rayleigh-Ritz method is correct. Compared with other theoretical calculation methods, its calculation process is relatively simple, and the Rayleigh-Ritz method is therefore appropriate for the theoretical calculation of the stiffened plate. It offers a new approach to the frequency analysis of the stiffened plate that avoids the heavy demands on engineering experience and theoretical background made by FEM analysis. Meanwhile, it offers a reference for comparison with experimental results and provides support for further research on nonlinear vibration active control experiments.
Table 1. Related parameters of structure. Table 2. Natural frequency of stiffened plate.
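For orientation, a minimal sketch follows of the standard linear energy expressions and the Rayleigh quotient on which a derivation of this kind builds; these are the textbook Euler-Bernoulli beam and thin-plate forms with illustrative symbols (W, D, rho, A), not the geometrically nonlinear expressions used in the paper itself.

```latex
% Strain and kinetic energy of one stiffener, modeled as an Euler-Bernoulli beam along x:
U_{\mathrm{beam}} = \frac{1}{2}\int_0^{a} EI\,\Big(\frac{\partial^{2} w}{\partial x^{2}}\Big)^{2}\,\mathrm{d}x ,
\qquad
T_{\mathrm{beam}} = \frac{1}{2}\int_0^{a} \rho A\,\Big(\frac{\partial w}{\partial t}\Big)^{2}\,\mathrm{d}x .

% Bending strain and kinetic energy of the (linear) thin plate, with D = E h^{3}/[12(1-\nu^{2})]:
U_{\mathrm{plate}} = \frac{D}{2}\iint \Big[(\nabla^{2} w)^{2}
  - 2(1-\nu)\Big(\frac{\partial^{2} w}{\partial x^{2}}\frac{\partial^{2} w}{\partial y^{2}}
  - \Big(\frac{\partial^{2} w}{\partial x\,\partial y}\Big)^{2}\Big)\Big]\,\mathrm{d}x\,\mathrm{d}y ,
\qquad
T_{\mathrm{plate}} = \frac{\rho h}{2}\iint \Big(\frac{\partial w}{\partial t}\Big)^{2}\,\mathrm{d}x\,\mathrm{d}y .

% With a harmonic ansatz w(x, y, t) = W(x, y) sin(omega t) and admissible functions
% satisfying the clamped-edge conditions, the Rayleigh quotient gives
\omega^{2} =
  \frac{U_{\max}(W)}
       {\dfrac{\rho h}{2}\iint W^{2}\,\mathrm{d}x\,\mathrm{d}y
        + \sum_{\mathrm{stiffeners}} \dfrac{\rho A}{2}\int W^{2}\,\mathrm{d}s } .
```

Minimizing the quotient over the coefficients of the assumed shape functions then yields the natural frequency estimates that are compared with the ANSYS and experimental values.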
2018-12-18T03:04:57.952Z
2017-10-21T00:00:00.000
{ "year": 2017, "sha1": "a5dac007ce97c11b57c63c8126c57a0dde8ed5de", "oa_license": "CCBY", "oa_url": "https://www.jvejournals.com/article/18972/pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "a5dac007ce97c11b57c63c8126c57a0dde8ed5de", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Mathematics" ] }
245059044
pes2o/s2orc
v3-fos-license
Agricultural disaster management and contingency planning to meet the challenges of extreme weather events
Natural disasters of a hydro-meteorological nature play a key role in the economic development of India. Agricultural production in India is largely dependent on the performance of summer monsoon rainfall. Apart from its spatial and temporal variability, several climatic anomalies/extremes, at times attaining disastrous form, were found to influence the country's agricultural production. The nature and magnitude of the climate extremes that frequent India are presented with their history and region of occurrence. Droughts and floods are found to be paramount. Of late, hailstorms and cold and heat wave conditions are also exerting considerable influence on field and orchard crops. Trends in extreme events, their frequency and their effects on crops are discussed. Regions in the country that are sensitive to the various weather extremes are presented. Management strategies and contingency planning to be adopted to cope with the weather extremes are elucidated. A few case studies on successful strategies adopted at the field level to cope with extreme weather events under the National Initiative on Climate Resilient Agriculture (NICRA) program are reported.
Introduction
Natural disasters can be classified into hydro-meteorological and geophysical disasters. The hydro-meteorological disasters include landslides/avalanches; droughts/famines; extreme temperatures and heat waves; floods; hurricanes; forest/scrub fires; windstorms; and others (insect infestation and waves/surges). The geophysical disasters include earthquakes and volcanic eruptions. During the period 1900-2014, the number of occasions on which a large Indian population was affected by drought was greater than for any other natural disaster. There were 14 severe drought events that claimed, on average, more than 3 lakh lives, affected 75 million people and resulted in US$ 0.17 million in losses. If the number of natural disaster events is considered, riverine flood tops the list (143), closely followed by tropical cyclones (104) (CRED, 2015). An increase in the frequency and intensity of disaster events in south Asia over the period 1970-2010 was observed. The increase is largely driven by a greater number of hydro-meteorological events, as flood and storm events have become increasingly common despite relatively consistent rainfall patterns (GFDRR, 2013). Agricultural production in India is closely linked to the performance of the summer monsoon (June to September), which contributes about 75% of the annual precipitation. Thus, an understanding of the variability of monsoon rainfall is of great relevance, as it has a direct impact on total food grain production. Apart from the inter-annual variability in summer monsoon rainfall, the occurrence of many of the hydro-meteorological events is found to influence Indian agriculture at different spatial scales. The data on climate anomalies and on extreme and disastrous weather events in respect of the Indian sub-continent lie scattered in the published literature. De et al. (2005) compiled information on extreme weather events for over 100 years and highlighted their socio-economic impacts. Goswami et al.
(2006) made a comprehensive study of trends in three different intensity classes of Indian Summer Monsoon Rainfall (ISMR), namely moderate (5 < R < 100 mm/day), heavy (R > 100 mm/day) and very heavy (R > 150 mm/day), and found that the moderate intensity range shows a decreasing trend while the heavy and very heavy intensity ranges show a significant increasing trend over Central India. Guhathakurta and Rajeevan (2008) studied the long-term trends of rainfall using a 103-year data set over the Indian meteorological sub-divisions and found that three sub-divisions (Kerala, Jharkhand and Chhattisgarh) show decreasing trends while eight meteorological sub-divisions show an increasing trend. In a more recent study, Hamza et al. (2013) showed that the number of rainy days in the high (15-20 mm) and very high (> 20 mm) rainfall classes is decreasing over major parts of India and increasing over the northeastern region, respectively. Over the west coast, a significant decreasing trend was found. In the northeastern region, days with less than 10 mm of rain showed a slight decreasing trend but days with more than 10 mm of rain showed a significantly increasing trend. Apart from the summer monsoon rainfall, India receives precipitation during the winter months of December to March, which amounts to about 15% of the annual precipitation. This precipitation is very important for rabi crops. Yadav et al. (2012) noticed increased variability in winter precipitation during the most recent three decades, with more excess and deficient years. Productivity of kharif rice and rabi wheat in the Indo-Gangetic Plains (IGP) was found to be influenced by the monthly distribution of rainfall, which accounted for 44% of the yield variability in rice and 21% in wheat (Subash and Ram Mohan, 2010). Though rainfall and its distribution have a profound influence on Indian food grain production, climatic elements like radiation and temperature also exert considerable effects. For instance, Pathak et al. (2003) observed negative trends in solar radiation and an increase in minimum temperature, resulting in declining trends of potential yields of rice and wheat in the IGP of India. Mall and Singh (2000) observed that intense fog events curtail photosynthetically active radiation, and that intense spells of high temperature during the grain filling and ripening stages of most field crops result in low yields. Revadekar et al. (2012) studied the frequency indices of hot events at 121 stations that fairly represented the entire country.
Considering the influence of extreme weather events on Indian food grain production and the likely increase in their frequency in future, we present here a review of the type and frequency of the different extreme weather events that have occurred and of research results on their impacts on crop production. Management options and contingency planning for different weather extremes are discussed with a few case studies.
Drought prone regions of India
IMD classified agricultural drought as the period when weekly rainfall is less than half of its normal (> 5 mm) consecutively for a four-week period during May to October. Sikka (1999) compiled the occurrences of droughts during the period 1276 to 1870 AD. There were eight instances during which the entire country was affected, and on 16 occasions drought prevailed in different regions. During the period 1873-2009 there were 24 drought years of different magnitudes.
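As a concrete reading of the weekly criterion quoted above, the following is a minimal sketch; the interpretation of the "(> 5 mm)" qualifier, the function names and the four-week run-length parameter are illustrative assumptions, not an official IMD implementation.

```python
def is_deficit_week(actual_mm: float, normal_mm: float) -> bool:
    """A week counts toward agricultural drought when its rainfall is less than
    half of the weekly normal; the (> 5 mm) qualifier is read here as applying
    to the normal value."""
    return normal_mm > 5.0 and actual_mm < 0.5 * normal_mm


def has_agricultural_drought(weekly_actual, weekly_normal, run_length: int = 4) -> bool:
    """Return True if at least `run_length` consecutive deficit weeks occur.
    Inputs are sequences of weekly rainfall (mm) restricted to May-October."""
    run = 0
    for actual, normal in zip(weekly_actual, weekly_normal):
        run = run + 1 if is_deficit_week(actual, normal) else 0
        if run >= run_length:
            return True
    return False


# Example: five consecutive weeks at 40% of a 20 mm normal triggers the flag.
print(has_agricultural_drought([8, 8, 8, 8, 8], [20, 20, 20, 20, 20]))  # True
```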
Mostly the north and north-western parts of the country witnessed more frequent drought years compared to other parts of the country. Severe to phenomenal droughts affecting more than 50% of the area were observed in 1877, 1899, 1901, 1918, 1972, 1987 and 2009. The departures of per cent average monsoon rainfall and the area affected under the phenomenal droughts from 1872 to 1990, compiled by Kulshreshtra (1997) and updated for the period 1991-2010 by the All India Coordinated Research Project on Agrometeorology (AICRPAM), are given in Table 1.
Impacts on cropped area and production
Droughts have a considerable impact on agricultural production, and the continuous droughts of 1965-66 and 1966-67 forced the country to import food grains. Similarly, all the drought years have recorded a decline in production over the previous years. However, no other drought in the past led to such a drop in food production as the 2002 drought.
Changes in cropped area
The area sown to different crops can be affected by the incidence of drought. These changes in area and in productivity as affected by drought together result in changes in the production of different crops (Rama Rao et al., 2013). The changes observed in the area, yield and production of major dryland crops during the 1970-2011 period are presented in Table 3.
Fig. 2(a-f). Correlation between monsoon rainfall (sub-divisional) and state-level production of different crops. The correlation coefficients shown in (e) for Maharashtra are with the post-monsoon (October-November) rainfall of the sub-divisions in that state (after Krishna Kumar et al., 2004).
There were 10 drought years during the period of analysis, out of which four were severe droughts. Less area was sown to pearl millet during drought years. The area sown to castor was adversely affected by drought, and this adverse effect was more severe during 2000-2011 compared to earlier periods. The yield effect in the case of sorghum was most pronounced (-15.8%) during 1986-2000, compared with about 9% during the initial period and over the long term. The yield of pearl millet was most sensitive to drought throughout, as most of its cultivated area is in low rainfall regions. In the case of pulses, the sensitivity of yields to the incidence of drought was found to be increasing over time, as is evident from the yield effects of pigeonpea, chickpea and green gram. Thus, significant changes were observed over time in the sensitivity of the production of major dryland crops to drought.
Changes in production
The impact of deviations in rainfall during the monsoon period (June to September) on the productivity of crops was assessed by regressing productivity on the deviations in monthly rainfall and a time trend, using all-India data on production and productivity for the period 1976-2010. The analysis showed that rainfall during July was most critical to the productivity of crops. A one per cent increase over normal in the rainfall during July was found to increase the productivity of pearl millet by about 2.25 kg/ha. Conversely, a one per cent decrease in the rainfall would be accompanied by a productivity fall of about 2.25 kg/ha. A similar rainfall-productivity relationship was observed in the case of soybean, cotton and groundnut. Deviations in September rainfall were also found to have a significantly positive relationship with the productivity of sorghum, pigeonpea and soybean.
It was also found that rainfall during September had a significantly positive effect on the productivity of rabi crops like chickpea, rapeseed and mustard. The dependence of kharif food grain production and of the productivity of crops like kharif rice, rabi rice, wheat, kharif sorghum and kharif groundnut on the monsoon rainfall has been amply demonstrated (Krishna Kumar et al., 2004) (Fig. 2). Kharif food grain production in most states [Fig. 2(a)] has a strong association with regional monsoon rainfall. It is also interesting to note that, within Bihar, kharif food-grain production is strongly correlated with monsoon rainfall in the northern Bihar Plateau, but only weakly correlated with monsoon rainfall in the plains. Similar spatial variability is also apparent in Maharashtra (kharif food grains and kharif sorghum) and Karnataka (kharif rice, wheat and kharif groundnut). Wheat shows a strong response to local rainfall in Madhya Pradesh, Uttar Pradesh, Gujarat, Maharashtra and Rajasthan, but has a poor correlation with local rainfall in Punjab, a major wheat-producing state [Fig. 2(d)]. The strong influence of monsoon rainfall on sorghum observed in Gujarat, Rajasthan and Punjab is not seen in the major sorghum producing state of Maharashtra, where a significant correlation with rainfall during the months of October-November was noticed [Fig. 2(e)]. Kharif groundnut production is strongly related to sub-divisional monsoon rainfall in all of the groundnut-producing states [Fig. 2]. Auffhammer et al. (2011) recently confirmed that drought and extreme rainfall negatively affected rice yield in predominantly rainfed areas during 1966-2002, with drought having a much greater impact than extreme rainfall.
Floods
Twenty-three of the 36 states and union territories in the country are subject to floods, and 40 million hectares of land, roughly one-eighth of the country's geographical area, is prone to floods. Floods occur in almost all river basins in India. The main causes of floods are heavy rainfall, inadequate capacity of rivers to carry the high flood discharge and inadequate drainage to carry away the rainwater quickly to streams/rivers. Ice jams or landslides blocking streams, as well as typhoons and cyclones, also cause floods. Flash floods occur due to a high rate of water flow as well as due to poor permeability of the soil. Most of the floods occur during the monsoon period and are usually associated with tropical storms or depressions and active monsoon conditions. The extent of the area affected and the damage caused to agriculture by floods during 1953-2004 are given in Table 4. In recent years, heavy precipitation events have resulted in several damaging floods in India. Analysis of data from 2599 rain gauge stations (Guhathakurta et al., 2011) showed that wet days have increased in Peninsular India, particularly over Karnataka, Andhra Pradesh and parts of Rajasthan and some parts of eastern India, while most parts of central and northern India showed a decreasing trend in the frequency of rain days (> 0.1 mm) (Fig. 3). The frequency of heavy rainfall events (> 64.5 mm) is decreasing in major parts of central and north India while it is increasing in Peninsular, east and northeast India. The rate of increase in rainy days has been observed to be around 40-50 days in 100 years in Peninsular India, particularly over Karnataka and Andhra Pradesh. An increase in rainy days has also been observed over most parts of Rajasthan, parts of Gangetic West Bengal and the adjoining areas of Jharkhand.
Fig. 3. Highest one-day extreme rainfall in India (after Guhathakurta et al., 2011).
The spatial pattern of India's highest one-day ever recorded point rainfall is presented in Fig. 5. Occurrences of 40 cm or more of rainfall have been noticed along the west and east coasts of India, Gangetic West Bengal and northeastern parts of India. Trend analysis performed on the annual one-day extreme rainfall series showed an increasing trend over the south Peninsular region, Maharashtra, the Gujarat region, Bihar and some other isolated areas.
Cyclones
The major natural disaster that affects the coastal regions of India is the cyclone; as India has a coastline of about 7516 km, it is exposed to nearly 10 per cent of the world's tropical cyclones. On average, about five or six tropical cyclones form in the Bay of Bengal and the Arabian Sea and hit the coast every year. Out of these, two or three are severe. When a cyclone approaches the coast, a risk of serious loss or damage arises from severe winds, heavy rainfall, storm surges and river floods. Most cyclones occur in the Bay of Bengal, followed by the Arabian Sea, in a ratio of approximately 4:1. The incidence of cyclonic storms (wind speeds between 65 and 117 kmph) and severe cyclonic storms (wind speeds between 119 and 164 kmph) reaching Tamil Nadu and Andhra Pradesh is high during the northeast monsoon season, i.e., October-December, whereas the highest annual numbers of storms and severe storms occur on the Odisha-West Bengal coast.
Cold and heat waves
The prevalence of extremely low temperature in association with the incursion of dry cold winds from the north into the sub-continent is known as a cold wave. Cold waves mainly affect the areas to the north of 20° N. In India, a cold wave is considered severe when the night temperature drops below its daily normal by 7 °C or more, provided the normal minimum temperature is 10 °C or more. If the normal minimum temperature is less than 10 °C, a departure of 5 °C or more below normal constitutes severe cold wave conditions. The frequencies of occurrence of cold waves and heat waves in different parts of the country for different periods are given in Table 5. The maximum number of cold waves generally occurs in Rajasthan, followed by Jammu & Kashmir and Uttar Pradesh. The frequency of events over different time periods indicates that in recent years the state of Rajasthan has been experiencing more cold waves while they were few over Jammu & Kashmir. Depending upon the time of occurrence, they are either beneficial or harmful to field and orchard crops. Cold wave conditions that prevailed during rabi 2010-11 and 2011-12 coincided with the flowering and seed formation stages of wheat in Punjab, resulting in good yields (Samra et al., 2012). The average wheat production of Ludhiana district (Punjab) during the recent 12 years of heat waves, cold waves and normal years is given in Table 6, along with the mean temperature and the deviations in minimum and maximum temperature. The below-normal deviations in the minimum and maximum temperatures in the cold wave years of 2010-11 and 2011-12 were statistically significant. Out of the 12 years, 8 years (66.7%) were normal, and two each (16.7%) were heat wave and cold wave years. On average, there was a loss of 217 kg/ha (4.5%) in productivity in heat wave years and a gain of 356 kg/ha (7.4%) in cold wave years. Between the two recent consecutive cold wave years, the productivity gain in the relatively colder year 2011-12 was higher by 400 kg/ha, and the total wheat production in the country was the highest so far.
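To make the severity criterion quoted above explicit, a minimal sketch follows; the thresholds are as stated in the text, while the function name and interface are illustrative assumptions.

```python
def is_severe_cold_wave(observed_min_c: float, normal_min_c: float) -> bool:
    """Severe cold wave test as described in the text: a departure of the night
    (minimum) temperature of >= 7 degC below its daily normal when the normal
    minimum is >= 10 degC, or >= 5 degC below normal otherwise."""
    departure = normal_min_c - observed_min_c
    threshold = 7.0 if normal_min_c >= 10.0 else 5.0
    return departure >= threshold


# Example: normal minimum 12 degC, observed 4 degC -> departure 8 degC -> severe.
print(is_severe_cold_wave(observed_min_c=4.0, normal_min_c=12.0))  # True
```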
During March 2004, heat wave conditions prevailed over different parts of north India, coinciding with the maturity phase of wheat, rapeseed and vegetables. The temperature increase above normal was of lower magnitude towards eastern and southern India. In 2004 even the minimum temperature was higher than normal for many consecutive days at several places, including Srinagar (Jammu & Kashmir), Palampur, Ludhiana, Pantnagar and Pusa. This resulted in a loss of 4.6 million tonnes of wheat production. There was a higher incidence of diseases (rusts, powdery mildew) and pests, and the wheat crop matured 10-20 days in advance of the normal period with reduced 1000-grain (test) weight.
[Table 5. Frequency of cold and heat waves for different periods, 1901-2008 (De et al., 2005; IMD). Table 6. Wheat productivity in heat wave, cold wave and normal years in Ludhiana district (Source: Samra et al., 2012).]
Coconut, banana, cardamom, black pepper, cashew, etc. were affected in Kerala due to heat wave-induced lower humidity and soil moisture. Milk production was affected slightly due to the early disappearance of green fodder. However, castor productivity improved slightly in Gujarat. In Haryana, night temperatures during February and March 2004 were recorded 3 °C above normal, and wheat production subsequently declined from 4106 kg/ha to 3937 kg/ha (Ranuzzi and Srivastava, 2012). In a recent study, the sensitivity of wheat yields to minimum temperature during the post-anthesis period was quantified, and it was found that wheat yields in India for the period 1980-2011 declined by 7% (204 kg/ha) for a 1 °C rise in minimum temperature. Exposure to minimum temperatures exceeding 12 °C for 6 days and to maximum temperatures exceeding 34 °C for 7 days during the post-anthesis period are thermal constraints to achieving high productivity levels in wheat (Bapuji Rao et al., 2015).
Hailstorms
Starting from 26th February 2014, a series of hailstorms struck central India and went on unabated till 15th March 2014. In Maharashtra, the extended hailstorm activity of 2014 adversely affected most parts of Marathwada, western Maharashtra, north Maharashtra and Vidarbha. Reports that appeared in the media accounted for damage ranging from Rs. 10,000 to 15,000 crores, with all field and orchard crops put together. In Madhya Pradesh, parts of the Malwa and Mahakoshal regions were largely affected. Such an extended period of hailstorm, or for that matter even thunderstorm, activity caught farmers, officials and people at large unaware, and they were left clueless by this sudden development every day for more than a fortnight. Large sheets of hail formed over extensive land areas, resembling snow-hit areas like Kashmir. The hailstones formed into large lumps of irregular shape after reaching the ground due to agglomeration, in some cases weighing more than 5 kg. It took several hours to 2-3 days for the hail to melt away completely. Crop canopies that came in contact with the hail sustained physiological damage, which ultimately led to decay of the plants. Apart from this, the crops experienced severe mechanical damage caused by the impact of the hailstones, whose weight varied from 2 g to 200 g. The entire episode has brought into the limelight the fragility of Indian agriculture to extreme weather situations, and the agony of the affected farmers compelled the State Governments to announce immediate relief measures. Rao et al.
(2014) used 38 years of hailstorm data for the period 1972-2011 (excluding 1977 and 1984, for which data are not available) for mapping areas prone to frequent hailstorms [Fig. 4(a)]. More than 61 per cent of the districts experienced at least one hail event in the 38-year period. The highest frequency is noticed over districts in the northern parts of the Vidarbha region of Maharashtra that adjoin the state of Madhya Pradesh. Rama Rao et al. (2013) used various indicators derived from daily gridded data aggregated at the district level to identify areas prone to cyclones, floods and drought, as discussed below.
Cyclone proneness
This index is computed by combining the number of cyclones crossing the district, the number of severe cyclones crossing the district, the probable maximum precipitation for a day, the probable maximum winds and the probable maximum storm surge. Districts of the east coastline are more vulnerable to cyclones than those of the west coast [Fig. 4(b)].
Flood proneness
This index is based on the per cent of geographical area prone to flood incidence. According to this index, most parts of Punjab and West Bengal and some districts of Bihar and Uttar Pradesh are most vulnerable to flood (> 60% of area) [Fig. 4(c)].
Drought proneness
This index is computed by combining the probabilities of occurrence of severe and moderate droughts with weights of 2:1. Western parts of Rajasthan and Gujarat are highly prone to severe and moderate drought episodes [Fig. 4(d)].
Observation and measurement of meteorological parameters with sufficient density in time and space have led to the development of monitoring and forecasting systems with sufficient lead time in India. Successful predictions by IMD of the movement of the tropical cyclones (Laila, Thane, Nilam, Phailin, Helen, Lehar and Hudhud) that struck the east coast during 2010-2014 resulted in the minimization of human and agricultural losses. Much progress has been made on the agrometeorology front in the identification of areas prone to drought, floods, heat and cold waves, frost and hailstorms. The use of remote sensing techniques, crop modelling and GIS is making headway in gaining adequate insights into crop responses to extreme weather and in delineating regions that are likely to be affected.
Coping with extreme weather events
Targeted research on adaptation to and mitigation of extreme events is at a beginning stage; based on the information already generated, these strategies can be broadly categorized into (a) crop based, (b) resource management based and (c) early warning systems.
Crop based approaches
These approaches encompass growing crops and varieties that fit into changed rainfall patterns and using varieties with changed duration and with tolerance for heat stress, drought and submergence. Additionally, varieties with high fertilizer and radiation use efficiency, and new crops and varieties that can tolerate coastal salinity and sea water inundation, are to be identified/evolved. Intercropping is a proven insurance practice against crop failures due to floods or droughts and facilitates minimum assured returns (Venkateswarlu et al., 2009).
Resource management based approaches
Resource management strategies include in-situ moisture conservation, rainwater harvesting and recycling, efficient use of irrigation water, conservation agriculture, energy efficiency in agriculture and use of poor quality water. Watershed management is now considered an accepted strategy for the development of rainfed agriculture.
The use of anti-hail nets in Himachal Pradesh and J & K to protect apple orchards from hailstorms is one of the best mitigation strategies adopted by Indian farmers. The state government is supplying these nets to the farmers at subsidized rates.
Opportunities to minimize extreme weather impacts
All the above classes of strategies have to be perceived at different time scales: short, medium and long term. Short-term measures include contingency crop planning, strengthening the input chain and other management aspects. Medium-term measures include planning the natural resources to address the problems, while long-term measures mostly involve socio-economic development. We present here mostly the short-term measures to be adopted for different weather extremes.
Droughts
In-season monitoring of drought, through monitoring of rainfall and the progress of sowing, is crucial for the effective management of droughts and for minimizing the adverse impacts on crop production. Early season drought due to a delay in the onset of the monsoon is directly responsible for shortfalls in the area sown under major crops compared to a normal situation. Also, a delay in onset often leads to poor inflows into reservoirs and water bodies or poor recharge of groundwater, and contributes to delayed sowing. Contingency crop planning refers to making available a plan that provides alternate crop or cultivar choices in tune with the resource endowments of rainfall and soils in a given location. In rainfed areas, as a general rule, early sowing of crops with the onset of the monsoon is the best-bet practice that gives a higher realizable yield. The major crops affected by monsoon delays are those that have a narrow sowing window and therefore cannot be taken up if the delay extends beyond the cut-off date for sowing. Crops with wider sowing windows can still be taken up till the cut-off date without a major reduction in crop yield, and the only change warranted could be the choice of short duration cultivars. Beyond the sowing window, the choice of alternate crops or cultivars depends on the farming situation, soil, rainfall and cropping pattern in the location and the extent of the delay in the onset of the monsoon (2, 4, 6 and 8 weeks). Breaks in the monsoon cause prolonged dry spells and are responsible for early, mid-season and terminal droughts. These aberrant situations often lead to poor crop performance and/or total crop failure. While early season droughts have to be combated with operations like gap filling and re-sowing, mid and late season droughts have to be managed through crop, soil and nutrient management and moisture conservation measures. Drought also affects livestock/milk productivity due to shortage of fodder. Appropriate location-specific fodder production strategies go a long way in reducing the adverse impact on livestock, which is the major source of livelihood in dryland areas.
Hailstorms
In areas where hailstorm frequency is high, all three strategy categories need to be implemented.
Protective measures to minimize losses
The impact of hail damage can be minimized in two ways: one is physical prevention (hail abatement) and the other operates on the economic front. In hail abatement, physical barriers such as hail nets or other protective screens can be used to intercept the incoming hailstones. The usefulness of hail nets in protecting the apple orchards in Himachal Pradesh is the best example of this type of protection. The durability and effectiveness of these nets are directly proportional to the quality of the material used in their making.
Change in land use
Another way of reducing hailstorm losses in regions with a high frequency of hail is to grow crops that are less subject to hail damage. In the majority of hail-prone districts in India, two or more crops can be grown successfully, but the higher net returns from some crops, such as horticultural crops, might have favoured more area coming under their cultivation. In some of these predominantly horticultural areas, which are often subjected to extreme hail damage, wheat or other crops could be grown instead of fruit crops that are more susceptible to hail. This alternative approach is again limited by physical factors like soil, rainfall and temperature, apart from differential net returns and farmers' choice.
Risk management
Another approach to minimizing hail losses to farmers is insurance. Various levels of coverage and types of policies are to be made available to farmers at affordable premium rates. In India, the Agriculture Insurance Company (AIC) of India insures against hail damage with an add-on premium. Insurers in India may develop products that cover three categories of probability of hail occurrence, viz., high, moderate and low.
Management options in the event of hailstorm occurrence
Top priority may be given to saving human lives, followed by livestock, in the event of a hailstorm forecast. Avoiding damage to infrastructure, including vehicles and other farm machinery/equipment, may be taken up if there is enough time before the commencement of the hail-fall. If the hail event is associated with heavy rainfall, farmers are advised to drain out excess water from standing fields, either through land modifications or by pumping out the water. Drained water may be collected in farm ponds, if feasible. As a compensatory mechanism for the production losses due to hail damage, the possibilities of making the best and timely use of available in-situ soil moisture and (stagnated) surface water are to be explored for raising short duration crops, including forages and vegetables. Early sowing of greengram/blackgram is better, with seed treatment/zero-till sowing after paraquat/glyphosate application. In hail-damaged field, orchard and vegetable crops, recommended chemicals like fungicides may be sprayed to control the spread of diseases due to secondary infection, and a booster dose of nutrients may be applied, apart from growth regulators like NAA to induce fresh flowering, if required.
Adoptable technologies for extreme weather: lessons from NICRA
An important component of the National Initiative on Climate Resilient Agriculture (NICRA), a flagship program of ICAR, deals with the demonstration of climate resilient technologies on farmers' fields. This component addresses extreme weather events such as floods, cyclones, prolonged drought and extreme heat/cold waves. The demonstrations are being laid out in a farmer participatory mode by Krishi Vigyan Kendras (KVK) in 100 vulnerable districts across the country. The initial outcome of the demonstrations has shown that there is great potential in the existing best-bet practices to impart resilience to extreme weather events such as the ones mentioned above. Following are some of the specific cases where the demonstration of appropriate technologies helped communities to cope with extreme weather events successfully.
Crop based
In the southern peninsula, Tumkur district, Karnataka is one of the most drought prone districts of the region.
Prolonged droughts cause near complete loss of yields of finger millet, which is the staple food crop of this region, threatening the food security of the rural population. Although finger millet requires very little water for cultivation, the distribution of the rainfall determines its yield, as the crop is mostly rainfed. Farmers generally cultivate long duration finger millet in this region. As a result, the crop is more prone to intermittent dry spells. Demonstration of a short duration finger millet variety (ML365) that can be sown late when the onset of the monsoon is delayed helped farmers to minimize their losses due to the prolonged drought in 2011. It also ensured the food security of the farm families. The farmers have retained the seeds and are using them in subsequent seasons. In the lowlands of Saran district, Bihar, an aberrant rainfall situation prevailed in five out of the previous 10 years, and very low rainfall in July led to delayed transplanting of paddy. Farmers' preference for long duration varieties often leads to delayed transplanting, even up to the end of August, which severely lowers productivity. A resilient technology, the establishment of community nurseries, was developed for this specific situation. In this technique, a staggered community nursery involving varieties of different durations (140, 125-135 and <110 days) was raised at the village level at an interval of two weeks under assured irrigation. During 2013-14, deficit rainfall conditions (-70%) in July and the first fortnight of August led to delayed transplanting. However, farmers who adopted the resilient technology benefited from an additional yield of 4-5 q/ha (a 13% yield increase) compared to farmers who transplanted overaged seedlings in August.
Management based
Transplanted rice in Punjab requires about 130 ha-cm of water, and an estimate attributes 10% of global methane emissions to flooded rice fields. In case of a delay in the monsoon, farmers resort to excessive exploitation of ground water, which is leading to a decline in the quality of land and water. Direct seeded rice cultivation is identified as a resilient technology for this situation, wherein nursery raising is done away with. This allows timely sowing of the succeeding rabi wheat. It also leads to a saving in water of up to 25% compared to transplanted rice. It further saves 27% of the diesel used as pumping energy and 35-40 man-days per hectare, and reduces methane emissions. The drum seeding technique is another resilient technology in which direct seeding of pre-germinated paddy seeds is done. This technology was found advantageous in the predominantly transplanted rice areas of the Andhra Pradesh, Telangana and Kerala states, which are increasingly facing water shortages due to deficit rainfall, a declining ground water table due to insufficient recharge, and late and limited release of canal irrigation water due to poor inflows into tanks/reservoirs. Apart from an increased benefit-cost ratio (0.8-0.9), this technology facilitates about 30% saving in seed and fertilizer and saves at least two irrigations. Desiltation of existing tanks/water harvesting structures is another smart practice in drought prone regions. This practice was found promising in Namakkal (Tamil Nadu), Kurnool (Andhra Pradesh), Rajkot (Gujarat) and Kullu (Himachal Pradesh), where prolonged dry spells at critical stages during kharif often lead to low productivity and sometimes even to total crop failures.
Renovation and upgradation of existing runoff structures like check dams resulted in improved cropping intensity, recharge of open/tube wells, crop diversification and increased crop productivity at several NICRA villages.
Resource based
Furrow irrigated raised bed (FIRB) planting and the broad bed and furrow (BBF) system are some in situ soil and water conservation practices that proved beneficial across the country. Zero-till drilling of wheat is another climate resilient practice to overcome terminal heat stress, as it facilitates earlier sowing of wheat compared to the conventional method. Small scale water harvesting structures like farm ponds at the individual farm level enable reuse of harvested water during critical periods of the crop growth stage and provide pre-sowing irrigation to the rabi crop. Large potential exists for up-scaling this technology under Indian conditions. Even in high rainfall areas like Dimapur, Nagaland, where the annual rainfall is around 2500 mm, a problem persists in the form of the unavailability of an adequate amount of water during the dry season. Field experiences indicate that this intervention allows farmers to shift to the cultivation of vegetables/high value crops and to adopt integrated farming systems. Rainwater harvesting in dugout ponds helped farmers in the South 24 Parganas district of West Bengal to harvest fresh water and start cultivating small patches of land after sea water intrusion due to cyclone Aila (23-26 May, 2009). The district, situated on the east coast of India, is prone to cyclones and sea water intrusion during high tides and often suffers from high rates of out-migration of farmers. The KVK of the district demonstrated to the farmers how to harvest rainwater and use it for cultivating vegetables on a small scale. This prevented en masse migration of farmers from the village by ensuring a reasonable economic activity to earn their livelihood.
Flood management
The district of Srikakulam, Andhra Pradesh, located on the east coast, is prone to floods. Located in the delta of two minor rivers, it is often prone to inundation of crop lands, thereby causing large scale losses to farmers, who predominantly cultivate rice in this region. One of the major interventions of NICRA was first to look at the possibility of improving drainage so that the period of crop inundation could be reduced. The second was the introduction of submergence tolerant varieties of rice like Indra (MTU-1061) so as to minimize losses to farmers (Plate 1). This strategy proved very successful when cyclone Neelam (28th October to 1st November, 2012) hit the east coast and resulted in widespread crop damage.
Plate 1: Use of flood tolerant varieties to cope with cyclonic rains in Andhra Pradesh.
Socio-economic approach
In rainfed regions of India, because of the rainfall pattern, a narrow window exists for timely land preparation, sowing and other agricultural operations. Contrary to this, in high rainfall regions, providing drainage at the right time to remove excess water from heavy rainfall events becomes crucial. Labor shortage at these peak times of agricultural operations affects farm income. Custom hiring centers established under the NICRA program enabled farmers to get access to farm machinery and to implement several climate resilient practices.
A typical example of this facility is documented in Umrani village of Nandurbar, Maharashtra, in which in situ conservation of soil and water and sowing across the flow resulted not only in an 11-13% increase in soybean yield but also in conserving valuable top soil from erosion. Some early lessons from the custom hiring centers in adopting climate resilient technologies are:
- Seed-cum-fertilizer drills helped in introducing or expanding intercropping areas.
- Different kinds of crop threshers enabled farmers to complete harvesting operations on time at a lower cost. This could help avoid crop damage from weather aberrations such as cyclones, frost, etc.
- Zero-till drills helped save time, water and fuel and escape terminal heat stress, besides enabling farmers to make an early harvest of rabi crops.
- Broad bed and furrow technology for wheat, soybean and maize prevented crop damage from excess soil moisture by aiding quick drainage and avoiding water stagnation.
Integrating diverse farm enterprises
Mono-cropping is widely practiced in areas prone to droughts, floods and extreme weather events such as frost/cold stress. In these vulnerable areas, farmers' dependence on a single farm enterprise is making them more vulnerable, as they have limited resilience to cope with climatic conditions. Several integrated farming system (IFS) modules with a combination of small enterprises such as crops, livestock, poultry, piggery, fish and duck rearing were demonstrated to farmers in NICRA villages in the eastern, northern and north eastern states and were found to increase their resilience. Successful demonstrations with integrated farming systems involving rice-fish-poultry in Sonitpur, Assam; apple-fisheries-poultry in Phulwama, J&K; duckery-fish in Alapuzha, Kerala; pig-poultry-fish in East Singhbum, Jharkhand; and captive rearing of fish seed in Srikakulam, Andhra Pradesh amply show the potential existing under IFS for climatically stressed regions. Modifications in the design of housing structures to eliminate cold stress effects in backyard poultry in East Sikkim and heat stress/cyclone effects in goats at Namakkal, Tamil Nadu are some resilient technologies that proved successful. As livestock contributes a major share of the total agricultural income, mostly in drought prone regions of the country, meeting fodder requirements in the slack season is a tough task, especially in the event of deficit or delayed rainfall. Possibilities to sow fodder crops in uncultivated paddy fields under a late kharif situation in lowlands may be explored. In such an eventuality, seeds of sorghum, bajra and maize along with a legume intercrop (cowpea/horsegram, etc.) need to be mobilized and made available to interested farmers who own livestock. A catch crop of urd/mung can be taken up in May with early rains for multiple uses as grain/green manure/mulch/fodder depending on subsequent rainfall conditions. There has been a very encouraging response from farmers and other stakeholders to the technologies developed and demonstrated on-farm. The learning has been valuable in terms of policy formulation and programme development aimed at coping with extreme weather events.
Conclusions
To sum up, monsoon activity in the lower atmosphere, the position of the monsoon trough and breaks in the monsoon activity are the main determining factors governing extreme events like droughts, floods, hailstorms and heat waves over the Indian sub-continent.
In the decades to come, their study and projection in relation to global climate change will be an important contribution to understanding future scenarios. Research on adaptation and mitigation strategies for extreme weather, as well as the development of forewarning systems, needs to be strengthened to enable Indian agriculture to cope with extreme weather events in the future. A multi-pronged strategy for scaling up the existing coping mechanisms, involving all the stakeholders, is required to make production systems in India more resilient.
2021-12-12T17:21:57.027Z
2021-12-08T00:00:00.000
{ "year": 2021, "sha1": "85aee04a9144e8f2d2284d7293ea9acde4194982", "oa_license": "CCBYNC", "oa_url": "https://mausamjournal.imd.gov.in/index.php/MAUSAM/article/download/1173/1005", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "271941689a183d9194ececdd5474a10c48105410", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
245102214
pes2o/s2orc
v3-fos-license
Flavonoids Synergistically Enhance the Anti-Glioblastoma Effects of Chemotherapeutic Drugs
Flavonoids are polyphenolic plant secondary metabolites with pleiotropic biological properties, including anti-cancer activities. These natural compounds have potential utility in glioblastoma (GBM), a malignant central nervous system tumor derived from astrocytes. Conventional GBM treatment modalities such as chemotherapy, radiation therapy, and surgical tumor resection are beneficial but limited by extensive tumor invasion and drug/radiation resistance. Therefore, dietary flavonoids, with demonstrated anti-GBM properties in preclinical research, are potential alternative therapies. This review explores the synergistic enhancement of the anti-GBM effects of conventional chemotherapeutic drugs by flavonoids. Primary studies published between 2011 and 2021 on flavonoid-chemotherapeutic synergy in GBM were obtained from PubMed. These studies demonstrate that flavonoids such as chrysin, epigallocatechin-3-gallate (EGCG), formononetin, hispidulin, icariin, quercetin, rutin, and silibinin synergistically enhance the effects of canonical chemotherapeutics. These beneficial effects are mediated by the modulation of intracellular signaling mechanisms related to apoptosis, proliferation, autophagy, motility, and chemoresistance. In this light, flavonoids hold promise in improving current therapeutic strategies and ultimately overcoming GBM drug resistance. However, despite positive preclinical results, further investigations are necessary before the commencement of clinical trials. Key considerations include the bioavailability, blood-brain barrier (BBB) permeability, and safety of flavonoids; optimal dosages of flavonoids and chemotherapeutics; drug delivery platforms; and the potential for adverse interactions.
The Challenges of GBM Therapy and the Potential of Flavonoids
Glioblastoma (GBM) is an astrocyte-derived solid tumor of the brain or spinal cord that occurs at an overall rate of 3.19 cases per 100,000 individuals in the United States. Its incidence varies notably between subpopulations, with males and older individuals at higher risk [1]. GBM is fatal, with median survival times under one year [2]. Currently, conventional medical and surgical interventions predominate in GBM therapy. Standard treatment regimens include (1) radiation therapy with concurrent temozolomide (TMZ) chemotherapy and (2) surgical tumor resection with radiation therapy [3,4]. Recent advances in these therapies have improved patient outcomes; the addition of TMZ, an alkylating agent, to standard radiation-only regimens after 2005 greatly increased survival rates [2]. Nevertheless, conventional interventions remain constrained by GBM's malignant properties. Surgical methods, for instance, are hindered by widespread tumor invasion and metastasis, while drug and radiation resistance, particularly associated with glioma stem cells (GSCs), poses challenges for chemo- and radiotherapy [5,6]. Intra- and intertumoral heterogeneity further complicates anti-GBM regimens [6]. Therefore, a need exists for alternative and supportive therapies with the potential to overcome these challenges. Dietary natural compounds constitute promising candidates in this regard; they have wide-ranging biological properties, including anti-cancer effects [7][8][9][10][11]. Among these compounds, flavonoids, polyphenolic plant secondary metabolites, are of interest.
Flavonoids exert anti-cancer effects through chemosensitization, metabolic modulation, metastatic inhibition, and apoptotic induction [12,13]. Based on these well-evidenced oncostatic activities, flavonoids have great potential in modulating GBM cell responses to anti-cancer drugs by overcoming their therapeutic resistance. The efficacy of flavonoids in GBM is well documented in preclinical research [14]. This review aims to complement previous research by focusing on the synergistic efficacy of flavonoids and conventional chemotherapeutics in GBM therapy.
Study Methodology
Primary studies on flavonoid-chemotherapeutic synergy in GBM were obtained through a PubMed search with the keywords "flavonoid", "chemo*", "synerg*", and "glioblastoma" or "glioma". Approximately 15 articles published from 2011 to 2021 were included. Studies demonstrating the effects of flavonoids alone on GBM, without trials with chemotherapeutic drugs, were excluded.
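For reproducibility, a search of this kind can be scripted against the NCBI E-utilities; the sketch below is an illustrative reconstruction, and the exact query string, result cap, and date handling are assumptions rather than details reported by the review.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical reconstruction of the review's PubMed query (2011-2021).
TERM = "(flavonoid) AND (chemo*) AND (synerg*) AND (glioblastoma OR glioma)"
params = urllib.parse.urlencode({
    "db": "pubmed",
    "term": TERM,
    "mindate": "2011",
    "maxdate": "2021",
    "datetype": "pdat",  # filter by publication date
    "retmax": "200",
    "retmode": "json",
})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params

with urllib.request.urlopen(url) as resp:
    result = json.load(resp)["esearchresult"]

# Print the hit count and the first few PubMed IDs returned by the query.
print(result["count"], "records;", result["idlist"][:5])
```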
Flavonoids
Bioactive flavonoids occur in fruits, vegetables, and other natural plant products and are unified by a three-ring structural backbone that includes two phenyl rings and one central heterocyclic ring. These compounds are classified based on structural differences, related primarily to the presence and positioning of substituents on the heterocycle (Figure 1). A variety of flavonoids, including flavan-3-ols, flavones, isoflavones, flavonols, flavonol glycosides, and flavonolignans, demonstrate anti-GBM effects combined with chemotherapeutic drugs in vitro and/or in vivo (Table 1). Flavan-3-ols comprise a class of flavonoids with a hydroxyl substituent at the third position of the heterocyclic ring. One flavan-3-ol of particular interest in GBM therapy, epigallocatechin-3-gallate (EGCG), occurs predominantly in green tea and exerts proapoptotic, antiproliferative, and antioxidant effects in cancerous cells [15]. In contrast, the class of flavones and isoflavones includes flavonoid compounds with a ketone substituent at the fourth position of the heterocycle. Two flavones and one isoflavone are of interest in synergistic GBM therapy. Chrysin, a flavone found in passionflower, honey, and propolis, has anti-cancer, neuroprotective, and other beneficial properties [16]. Similarly, hispidulin, a flavone from Grindelia, Artemisia, and Salvia plants, exerts anti-cancer, antifungal, antioxidant, and anti-inflammatory effects; it is moreover a benzodiazepine (BZD) receptor ligand [17]. Finally, formononetin, an O-methylated isoflavone and phytoestrogen found in legumes and clovers, has anti-cancer properties [18]. Flavonols have both the third-position hydroxyl substituent of flavan-3-ols and the fourth-position ketone substituent of flavones. Flavonols and flavonol glycosides, including quercetin, rutin, and icariin, are of interest in synergistic GBM therapy. Quercetin, a flavonol found in oak, berries, apples, grapes, cilantro, and onions, exerts antioxidant, antihistamine, anti-inflammatory, and anti-cancer activities [19]. Rutin, the glycoside of quercetin, has similar biological activities and occurs in rue, apples, buckwheat, and citrus fruits [21]. Another flavonol glycoside, icariin, is commonly found in horny goat weed; in addition to its anti-cancer properties, it has aphrodisiac, neuroprotective, and anti-osteoporotic effects [20]. Finally, flavonolignans are flavonoid derivatives with both flavonoid and phenylpropanoid structural components. Silibinin, a flavonolignan of interest in synergistic GBM therapy, is found in milk thistle seeds and has broad anti-cancer and antimetastatic effects [22].
Chemotherapeutics
Conventional chemotherapeutics leverage diverse mechanistic pathways to exert their anti-cancer effects. TMZ, the canonical anti-GBM drug, is an alkylating agent that induces apoptotic cell death through the p53-dependent and O6-methylguanine-based activation of the Fas/caspase 8 pathway (Figure 2) [23]. In addition, several noncanonical and repurposed drugs hold promise in synergistic GBM therapy (Table 2). One such drug, arsenic trioxide (ATO), exerts pleiotropic anti-cancer effects through ROS generation and cell cycle regulation [24]. In glioma cells, ATO induces caspase-independent autophagic cell death [29]. Moreover, combinations of ATO and TMZ and of ATO and vismodegib exert synergistic effects against GBM growth in vivo [30]. Chloroquine, another compound of interest, is a repurposed antimalarial drug that induces p53-dependent apoptosis and disrupts the mitochondrial membrane potential in glioma cells [25]. A recent clinical trial examined its efficacy against GBM in conjunction with the standard radiation and chemotherapeutic treatment regimen [31].
Similarly, the naturally derived topoisomerase II inhibitor etoposide has been extensively evaluated in clinical trials for GBM. Etoposide induces glioma cell apoptosis through sequential ceramide formation, Bax/Bcl-2 modulation, cytochrome c release, and caspase activation [27]. Finally, sodium butyrate (NaB) is a short-chain fatty acid histone deacetylase inhibitor that reduces glioma cell proliferation, migration, and cell cycle progression [28]. While NaB has anti-GBM potential, its effects remain unsubstantiated by clinical trials at this time.

Mechanisms of GBM
GBM tumorigenesis, progression, and metastasis are driven by numerous interconnected signaling mechanisms (Figure 3). Rapid cell proliferation, an essential process at all stages of GBM development, is mediated by the Akt/mammalian target of rapamycin (mTOR), nuclear factor kappa B (NF-κB), and other similar pathways. Uncontrolled proliferation of this nature is enabled by the inhibition of normal cell cycle controls (such as FOXO and p53) and the downregulation of key actors in autophagic (LC3, Beclin-1, and P62) and apoptotic (caspases) cell death. Moreover, a metabolic transition to aerobic glycolysis (the Warburg effect) energetically sustains rapid GBM cell division. Angiogenic and neovascular processes-stimulated mainly by vascular endothelial growth factor (VEGF) signaling-ensure oxygen and nutrient transport to growing tumors. GBM cells may further develop chemoresistance; this often occurs through O6-methylguanine methyltransferase (MGMT), which confers resistance to alkylating agents, and/or P-glycoprotein (P-gp), which enhances drug efflux from the cells. Finally, Snail, Slug, and matrix metalloproteinases (MMPs) contribute to the epithelial-mesenchymal transition (EMT), which causes GBM cells to develop migratory and invasive phenotypes.

TMZ and Icariin
While icariin functions primarily as an apoptotic enhancer in conjunction with TMZ, it also inhibits NF-κB-mediated proliferation and reduces migration and invasion in U87MG cells [36].

TMZ and Rutin
TMZ increases both apoptotic and autophagic cell death in GBM cells. At the same time, the flavonoid rutin shifts the balance toward apoptosis by upregulating caspases and inhibiting autophagy by downregulating light chain 3 (LC3) and c-Jun N-terminal kinase (JNK). As such, TMZ and rutin synergistically decrease tumor weight and volume in both intracranial (orthotopic) and subcutaneous (heterotopic) murine xenograft models [33].

Other Combinations of Flavonoids and Chemotherapeutics
Six additional flavonoid-chemotherapeutic combinations with promising synergistic anti-GBM effects are quercetin and chloroquine, quercetin and NaB, Gardenia jasminoides (GJ) extract and cisplatin, silibinin and etoposide, silibinin and ATO, and chrysin and ATO (Table 4).

Quercetin and Chloroquine
Co-administration of quercetin with chloroquine causes both apoptotic and autophagic cell death (Figure 5).
These compounds induce autophagy by upregulating Beclin-1, LC3, and P62 and increase apoptosis through ER stress and mitochondrial dysfunction. ER stress, associated with the upregulation of ATF4 and CHOP and the buildup of ubiquitinated proteins, leads to calcium (Ca2+) release into the cytosol. Intracellular Ca2+ then enters mitochondria via the mitochondrial Ca2+ uniporter (MCU); increased mitochondrial calcium concentrations ([Ca2+]m) upregulate the generation of reactive oxygen species (ROS), which in turn contribute to caspase-induced apoptosis [43]. In short, ER stress causes the release of Ca2+ into the cytosol; some of this Ca2+ enters mitochondria via the MCU, leading to mitochondrial ROS generation. In this case, both mitochondrial ROS and autophagic mechanisms contribute to apoptotic cell death.

GJ and Cisplatin
Flavonoid-rich GJ extract synergistically enhances cisplatin-induced apoptotic cell death through the upregulation of active caspases. However, GJ-cisplatin synergy differs from quercetin-chloroquine synergy. GJ inhibits cisplatin-induced autophagy in favor of apoptosis in a manner consistent with that of rutin-TMZ synergy (Figure 3) [44].

Quercetin and NaB
Similar to GJ extract, quercetin synergistically enhances apoptosis by upregulating caspases and downregulating Survivin and Bcl-2, and concurrently inhibits NaB-induced autophagy by downregulating LC3 and Beclin-1 [45]. Cellular senescence is another option for GBM therapy; NaB and quercetin together induce senescence-like growth arrest in U87 and C6 cells [48].

Key Considerations and Challenges
While recent preclinical findings on flavonoid-chemotherapeutic synergy in GBM therapy are promising, many mechanistic unknowns, intricacies, and challenges remain. One major limitation of current knowledge is inherent in the literature: all of the reviewed studies are in vitro or in vivo preclinical studies utilizing statistical significance as a threshold for treatment efficacy. However, statistical significance does not necessarily correspond to clinical significance, and laboratory studies are often insufficient to predict outcomes under genuine (and highly variable) physiological conditions. Another pertinent consideration related to the preclinical literature is the justification of synergistic effects. The data in Tables 3 and 4 represent synergism as defined in the reviewed primary studies.
However, it is worth noting that synergism is poorly defined at present, with limited consensus across the scientific and biomedical communities; this ambiguity leads to the mischaracterization of additive and other combined effects as synergistic effects in some cases. As such, standardized measures of synergism have been proposed. One auspicious measure, developed by Chou and Talalay, evaluates synergism as a mass-action rather than a statistical phenomenon, using a combination index (CI) rather than p values [49]. Notably, a significant proportion of the reviewed studies utilized the CI to measure synergism (or the lack thereof). Zhang et al. presented CI < 1 for combinations of 40-320 µM formononetin and 250-2000 µM TMZ, indicating synergy between the two compounds [39]. Similarly, Wang et al. demonstrated synergy between hispidulin and TMZ, with CI = 0.584 [37]. Synergistic effects of EGCG-TMZ, quercetin-chloroquine, quercetin-NaB, chrysin-ATO, and silibinin-ATO combinations were likewise justified with CI < 1 [32,43,45,46]. Concerning the flavonoids themselves, their consideration as medicinal agents necessitates evaluating their toxicity, blood-brain barrier (BBB) permeability, bioavailability, and potential adverse effects under physiological conditions. Most of the flavonoids included in this review are nontoxic: chrysin at up to 400-500 mg per day, EGCG at 338 mg, quercetin at 5000 mg, rutin at 1000 mg, and silibinin at 20 mg/kg [16,[50][51][52][53]. Icariin is well tolerated at lower doses; however, gastrointestinal side effects may occur at 1,680 mg [54]. Importantly, formononetin administration poses a risk of allergic immune responses through pro-inflammatory cytokines such as interleukin 4 (IL-4) [55]. Finally, the toxicity profile of hispidulin requires further assessment [56]. Beyond toxicity, the potential physiological side effects of flavonoids-both beneficial and detrimental-merit consideration. Hispidulin, for instance, is a BZD receptor antagonist with anti-convulsive effects in vivo [57]. Another flavonoid, formononetin, is a phytoestrogen. While this flavonoid exerts neuroprotective effects through estrogen receptor β (ERβ)-dependent inhibition of NF-κB activity and microglia-induced neuroinflammation, it may also promote angiogenesis and endothelial cell proliferation (both potentially detrimental) via estrogen receptor α (ERα) [58,59]. Nontoxicity and a favorable side effect profile constitute the baseline for human consumption; however, effective anti-GBM agents must also have high bioavailability (to be present in sufficient doses following oral administration) and BBB permeability (to enter the brain from the bloodstream). Flavonoids and other natural compounds are significantly limited by their low bioavailability and poor aqueous solubility; the bioavailabilities of chrysin, EGCG, formononetin, hispidulin, icariin, rutin, and silibinin are accordingly poor [16,18,22,52,56,[60][61][62]. Extensive metabolism in the intestine, colon, and liver (with the participation of gut microbiota) further limits the bioavailability of these flavonoids [13]. In this regard, a cooperative gut microbiome is essential for their bioavailability and absorption [63]. Quercetin's bioavailability is comparatively better but remains constrained by intestinal efflux and biliary excretion [64]. More promisingly, EGCG, hispidulin, icariin, quercetin, and rutin can cross the BBB; silibinin cannot, while the permeability of chrysin and formononetin remains unclear [17,[65][66][67][68].
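Returning briefly to the Chou-Talalay combination index discussed at the start of this section, the calculation behind a reported CI value is simple to reproduce. The sketch below is purely illustrative: the dose-response parameters and doses are invented for demonstration and are not taken from any of the reviewed studies.

```python
def dose_for_effect(fa, dm, m):
    """Median-effect equation: dose of a single drug needed to reach
    fraction affected `fa`, given its median-effect dose `dm`
    (dose producing 50 % effect) and slope `m`."""
    return dm * (fa / (1.0 - fa)) ** (1.0 / m)

def combination_index(fa, d1, d2, dm1, m1, dm2, m2):
    """Chou-Talalay CI at effect level `fa` for doses d1 and d2 used in
    combination; CI < 1 is read as synergy, CI = 1 as additivity,
    CI > 1 as antagonism."""
    dx1 = dose_for_effect(fa, dm1, m1)  # dose of drug 1 alone for the same effect
    dx2 = dose_for_effect(fa, dm2, m2)  # dose of drug 2 alone for the same effect
    return d1 / dx1 + d2 / dx2

# Hypothetical example: a flavonoid (Dm = 40 uM) and TMZ (Dm = 400 uM),
# combined at 15 uM + 120 uM and evaluated at 50 % growth inhibition.
ci = combination_index(fa=0.5, d1=15, d2=120, dm1=40, m1=1.2, dm2=400, m2=1.0)
print(f"CI = {ci:.2f}")  # here 0.68, i.e. below 1, which this framework reads as synergy
```

In this (mutually exclusive) form of the index, the two dose ratios are simply summed; the CI < 1 values cited from the reviewed studies correspond to exactly this kind of calculation.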
Given these bioavailability and BBB limitations, developing novel formulations to enhance the bioavailability and brain delivery of flavonoids is of key interest in advancing synergistic anti-GBM therapy. Current research particularly highlights the potential of nanotechnology approaches to this end [12]. Although flavonoids are associated with some challenges, especially in the clinical sphere, they can help confront GBM drug resistance, which hinders current conventional therapies. TMZ's introduction, for instance, improved therapeutic outcomes; however, TMZ resistance in GBM-mediated by the overexpression of MGMT and alkylpurine-DNA-N-glycosylase (APNG), which repair TMZ-induced DNA lesions and thereby prevent apoptosis-is now well documented [69]. Cisplatin resistance via hypoxia-inducible factors 1 and 2 (HIF-1/2) and cluster of differentiation 133 (CD133) has also been reported in GBM cell lines [70]. Moreover, an etoposide-resistant glioma cell line has been established [71]. Flavonoids hold promise in overcoming these types of resistance, as they downregulate key factors such as MGMT and P-gp and can therefore serve as chemosensitizers. Taken together, the criteria of efficacy, nontoxicity, BBB permeability, and bioavailability suggest that (1) rutin and TMZ and (2) EGCG and TMZ are auspicious combinations. Rutin and EGCG are nontoxic, have favorable side effect profiles, and can cross the BBB. However, further preclinical experiments and eventually clinical trials are necessary to substantiate the efficacy and safety of these and other flavonoid-chemotherapeutic combinations.

Conclusions and Outlook
Despite recent medical advances, GBM's prognosis remains poor. Extensive tumor invasiveness and therapeutic resistance hinder conventional drug, radiation, and surgical therapies. In this regard, flavonoids hold potential as supportive agents that can mitigate the numerous challenges posed by GBM. The flavonoids chrysin, EGCG, formononetin, hispidulin, icariin, quercetin, rutin, and silibinin demonstrate synergistic anti-GBM effects in conjunction with TMZ, cisplatin, chloroquine, etoposide, NaB, and ATO. These beneficial effects are mediated by the enhancement of apoptosis and the reduction of proliferation, migration, and chemoresistance. As such, flavonoids could enhance individual outcomes of GBM therapy, especially by overcoming therapeutic resistance. While these findings are promising, supportive evidence for flavonoid-chemotherapeutic synergy is currently limited to the preclinical literature. It is additionally worth noting that although many flavonoids exert anti-GBM effects, only some have been evaluated as potential synergistic agents. As such, forward-looking studies should clarify the synergistic effects of promising yet underinvestigated flavonoids. Furthermore, rigorous evaluation of the physiological properties of flavonoids-including toxicity, side effects, bioavailability, and BBB permeability-is necessary on the path toward clinical implementation. If and when appropriate, clinical trials should investigate and confirm the safety and therapeutic efficacy of flavonoid-chemotherapeutic combinations.
Goat's milk-derived bioactive components - a review

A well-balanced diet of the modern population includes increased consumption of products made from goat's milk, which has a composition different from that of the commonly used cow's milk. Goat's milk is characterized by better digestibility and a higher buffer capacity than cow's milk, and by a lower content of αs1-casein, which is responsible for causing allergic reactions. Goat's milk also contains more free amino acids than cow's milk. The advantage of goat's milk is its approximately 30 % higher magnesium content, its high selenium content and the enzyme glutathione peroxidase, which means that goat's milk has greater antioxidant properties than cow's milk.

Introduction
Goat's milk production accounts for 2 % of world milk production. It should be kept in mind, however, that these are official statistics which do not reflect the individual production and consumption of goat's milk by the people of developing countries (Park and Guo, 2006; Riberio and Riberio, 2010). The largest producers of goat's milk are India (21.6 % of world production) and the Mediterranean countries (18.4 %) (Silanikove et al., 2010). Among the European countries, the largest producers of this milk are Greece (4.5 %), Spain (4.2 %), France (4.1 %) and Italy (4 %). European countries produce 26 % of the world's goat's milk (Danków and Pikul, 2011; Lasik and Pikul, 2012).

General characteristics of goat's milk
Goat's milk is becoming more and more popular due to its better digestibility and its high protein, phosphorus and calcium levels, as well as the fact that more people are experiencing intestinal milk intolerance. The chemical composition of goat's milk is similar to that of cow's milk. The proportions of individual nutrients in goat's and sheep's milk are shown in Table 1. Casein proteins constitute 80 % of all milk proteins (Jaworski, 1997; Mohanty et al., 2016). Casein micelles are produced in milk cells from polypeptide chains. Calcium ions, which form bonds with the phosphate residues of the polypeptide chains, play an important role in their formation. A colloidal solution with casein micelles is stable due to the presence of negative electrical charges in the protein chains. This creates a hydration layer around the micelles (Jaworski, 1997). The nutritional value of casein proteins is similar to that of meat protein, but it is deficient in methionine and cysteine. Besides carbon, hydrogen, oxygen and nitrogen, it contains organically bound sulphur and phosphorus. There are some differences between the caseins of goat's and cow's milk. They concern not only the absolute content of this fraction, but also its composition and properties. Important differences in protein polymorphisms, and their significance for human health, such as allergies, are not discussed in detail here.
Casein micelles contain 94 % protein, and the remaining 6 % consists of calcium, phosphorus, magnesium and citrate, depending on the animal species. The casein micelles found in goat's milk are characterized by a higher calcium and phosphorus content and a greater diameter than cow's milk micelles. They also have greater susceptibility to solvation and lower thermal stability, which makes clotting difficult (Park et al., 2007). In cow's milk, αs1-casein constitutes the largest fraction of casein proteins (Table 3) and is responsible for triggering allergic reactions (Litwińczuk, 2004; Nongonierma and FitzGerald, 2015). Its share in the total nitrogen of goat's milk is 25 %, while in cow's milk it is 38 % (Tziboula-Clarke, 2003; Mituniewicz-Małek et al., 2011). In goat's milk the level of this protein fraction is often much lower, and in some breeds it is completely absent. This makes goat's milk a good substitute for cow's milk in some allergies or intolerances. Goat's milk is dominated by the β-casein fraction. Its level in goat's milk is 50-64 % relative to total casein protein, compared to cow's milk, where it accounts for 33-39 % (Tziboula-Clarke, 2003; Mituniewicz-Małek et al., 2011). The content of αs2-casein in goat's milk varies between 10-30 % of total casein, while κ-casein accounts for 10-20 %. For comparison, the levels of these fractions in cow's milk are 10 % and 11-13 %, respectively (Wszołek, 2006). The whey proteins in goat's milk include β-lactoglobulin, α-lactalbumin, immunoglobulins IgG, IgA and IgM, serum albumin, lactoferrin and lysozyme. β-lactoglobulin is composed of 162 amino acids, and its structure differs distinctly between animal species. The level of β-lactoglobulin in goat's milk is lower than in sheep's and cow's milk. On the other hand, the α-lactalbumin content is higher in goat's milk compared to sheep's and cow's milk (Morgan et al., 2001; Park, 2006; Herrero and Requena, 2006). β-lactoglobulin has anti-carcinogenic properties. It is responsible for binding many chemicals, vitamin A, mercuric chloride and long-chain fatty acids, and it supports the activity of lipolytic enzymes. It does not dissolve in water and reduces the oxidation of fats in dairy products. α-lactalbumin is a carrier of calcium and other elements and binds zinc, cobalt and magnesium ions. It also has antitumor and antimicrobial properties, improves mood, helps to overcome stress, helps to fall asleep, and has a protective effect against gastric ulcers (Milewski and Kędzior, 2010). Through the induction of apoptosis, it acts as an anti-tumor and immunological agent. It is involved in the synthesis of lactose. Whey proteins contain significant amounts of exogenous lysine and sulphur amino acids, i.e. cystine and cysteine (Molina et al., 2003; Cabiddu et al., 2005; Haenlein and Wendorff, 2006). Peptides formed during digestion in the digestive tract have beneficial effects on human health and prevent many diseases (Kuczyńska et al., 2009). The amount of non-protein nitrogen compounds, i.e.
urea, uric acid, free amino acids, creatine and creatinine, in goat's milk is on average 8.7 %, in sheep's milk 13 %, and in cow's milk 5.2 % (Rashida et al., 2004; Mituniewicz-Małek et al., 2011). The high content of easily digestible non-protein nitrogen in goat's milk results in faster growth of lactic acid bacteria and a faster rate of acidification. Goat's milk also contains more free amino acids than cow's milk. The free amino acid content in goat's milk varies from 16.02 to 20.7 mg/100 g of amino acids. The percentage of free amino acids in the total amount of non-protein nitrogen compounds is about 17 %. The proportions of individual free amino acids in goat's milk are varied and comprise threonine (5.1-16.1 %), glutamic acid (17.1-27.5 %), glycine (9.0-28.5 %) and valine (1.4-25.4 %). There is no cystine among the free amino acids, and tryptophan is present in very small amounts (Table 4). Another valuable non-protein amino acid worth mentioning is taurine, whose content is about 20 times higher than in cow's milk. Taurine is involved in the stabilization of cell membranes, has antioxidant properties and also stimulates glycolysis and glycogenesis (Redmond et al., 1998; Ahmed et al., 2015). Goat's milk contains an average of 6.62 mg/100 g of taurine as a free amino acid, while in cow's milk it is 0.16-1.00 mg/100 g.

Lipid fraction characteristics
Milk fat is synthesized in the mammary gland from blood plasma components: acetate, β-hydroxybutyrate, triacylglycerols and chylomicrons, and in smaller amounts from lipoproteins, sterols, phospholipids, free glycerol and free fatty acids (Szulc et al., 2010). In terms of quantity, this is the least stable component of milk. The fat in goat's milk is in the form of an emulsion consisting of fat globules of a smaller diameter than those of sheep's and cow's milk. It does not contain the enzyme agglutinin, which causes fat globules to stick together when milk is cooled. Among the acylglycerols of goat's milk lipids, triacylglycerols have the highest proportion, 97.8 % (Table 5). Di- and monoacylglycerols constitute 2.2 % and 0.9 % of goat's milk lipids, respectively. In goat's milk lipids, triacylglycerols dominate, with very small amounts of mono- and diacylglycerols, phospholipids and cholesterol. In goat's milk the content of phospholipids is much higher than in sheep's milk (Jandal, 1996; Bonczar and Paciorek, 1999). As with the other components of milk fat, the content depends on a number of factors (environmental, physiological and genetic). The cholesterol content in the milk fat of selected mammalian species is presented in Table 5 (Strzałkowska et al., 2012). Cholesterol comprises a small fraction of the total lipid content of goat's milk. It is an indispensable component of cellular membranes, the myelin sheath, plasma lipoproteins and neural tissue. It also participates in the synthesis of vitamin D and bile acids (Bonczar et al., 2002; Strzałkowska et al., 2009). It has been argued that milk fat clearly affects the risk of cardiovascular disease in humans. Cholesterol levels in raw milk and dairy products depend on a number of factors. The animal's individual characteristics, diet, health and lactation contribute directly to the cholesterol level in milk. In dairy products, the cholesterol content is determined by the technology used (homogenization, heat treatment, storage), the starter culture type and the initial fat content of the milk (Rao and Reddy, 1984; Bonczar et al., 2011; Atti et al., 2006). The cholesterol level in goat's milk is 10 to 20 mg/100 mL (for comparison, cow's
milk contains 10 mg/100 mL) (Park, 2000). Table 6 gives the percentages of the six fractions of phospholipids present in goat's and sheep's milk. Three of them, phosphatidylcholine, phosphatidylethanolamine and sphingomyelin, are present in the highest amounts. Phosphatidylserine, phosphatidylinositol and lysophospholipids constitute a small percentage of milk fat. Compared to cow's milk, goat's milk fat contains more mono- and polyunsaturated fatty acids (Table 7), which gives the milk a more beneficial nutritional value (Ryniewicz et al., 2000; Pieniak-Lendzion and Niedziółka, 2004). In comparison to sheep's milk, goat's milk has a higher content of cephalin, which is a source of easily absorbable phosphorus (Szczepanik-Wiatr and Libudzisz, 1996). The content and composition of fatty acids are also largely dependent on the composition of the feed. In milk fat, attention should be paid to the content of conjugated linoleic acid (CLA) dienes, which have the capacity to inhibit carcinogenesis, as well as to counteract atherosclerosis and osteoporosis. Their proportion in goat's milk is 0.84 % and in cow's milk 0.55 % (Jandal, 1996). The richest sources of CLA are animal products, including meat and ruminant milk, whereas it is found in significantly lower amounts in foods of plant origin (Jahreis et al., 1999; Serafeimidou et al., 2012; Albenzio et al., 2016). Dairy products contain approximately 2.9-6.1 mg CLA/g fat, whereas in vegetable oils it ranges from 0.2-0.7 mg/g fat (Prandini et al., 2011). The data in Table 7 show that sheep's milk has a higher CLA content (1.1-3.0) than goat's milk (0.58-1.1) and cow's milk (0.41-2.5). There is a higher amount of CLA in fermented beverages compared to unfermented milk (Prandini et al., 2007). This is due to the ability of the microorganisms used in the fermentation of milk products to produce CLA. Studies on CLA concentrations in fermented dairy products have shown that strains such as Bifidobacterium, Lactococcus, Lactobacillus, Streptococcus and Propionibacterium have this property (Prandini et al., 2007). In addition to the type of strains used, their number, the parameters of the fermentation process and the type of feed used are also important (Prandini et al., 2007). Conjugated linoleic acid and, more importantly, its isomers exhibit a number of physiological and biological functions that have a beneficial effect on the human body. The general effect of CLA contributes to reducing the risk of heart disease, atherosclerosis, cancer and obesity. In addition to its anti-atherogenic and anticancer activity, CLA has antioxidant activity that is significantly higher than that of α-tocopherol. It also supports the immune system, contributes to the reduction of body fat, and demonstrates bacteriostatic properties against Listeria monocytogenes (Prandini et al., 2007; Tsiplakou et al., 2006; Szumacher-Strabel et al., 2011). Vaccenic acid (C18:1 trans-11) is an oleic acid isomer with the double bond in the Δ11 position. Regardless of the cis or trans configuration, the sources of the isomers of this acid are primarily the lipids of meat and ruminant milk (Przybojewska and Rafalski, 2003). It is the second intermediate in the biohydrogenation of unsaturated fatty acids into stearic acid, making it a major substrate for the synthesis of cis9, trans11 CLA (Meluchowa et al., 2008; Castro et al., 2009). The biological activity of vaccenic acid isomers (VA) is associated with their anti-carcinogenic and anti-atherosclerotic properties. Both the cis and trans isomers of vaccenic acid slow down the progressive growth of tumor cells, but the trans form is characterized by greater inhibitory potency than the cis form (Przybojewska and Rafalski, 2003; Ciołkowska et al., 2012).
The average diameter of goat's milk fat globules is 2.76 μm (range 0.73 μm to 8.58 μm), while for cow's milk fat globules the average is 3.51 μm (range 0.92 μm to 15.75 μm). Approximately 90 % of goat's milk fat globules have a diameter of less than 5.21 μm, and 90 % of the fat globules in curd have a diameter of less than 6.42 μm. Thanks to this, goat's milk is characterized by high nutritional value and digestibility, which also results from the higher level of short- and medium-chain fatty acids in goat's milk and their better distribution in triacylglycerols (Ziarno and Truszkowska, 2005). Goat's milk contains more short-chain fatty acids than cow's milk. In terms of the general profile, fatty acids such as C4:0, C6:0, C8:0, C10:0, C12:0, C14:0 and C18:2 are present in greater amounts in goat's milk than in sheep's milk (Table 8). C18:0 and C18:1 fatty acids are present in smaller amounts (Ziarno and Truszkowska, 2005). The content of C6:0, C8:0 and C10:0 in goat's milk is about 15 % relative to the total fat content of this milk. For comparison, cow's milk contains about 6 % of the listed fatty acids in relation to the total fat content (Danków and Pikul, 2011). It should be noted that goat's milk has a specific aroma due to its high content of free fatty acids (5.65 mg/dm3). The 'goat aroma' is also related to a goat's milk enzyme, lipoprotein lipase, which is located on the surface of fat globules (46 %), in milk serum (46 %) and on the surface of casein micelles (8 %). Hence, goat's milk is more liable to lipolysis and more susceptible to spontaneous lipolysis induced by milk cooling.

Carbohydrate fraction characteristics
80 % of lactose is produced from glucose in the Golgi apparatus and 20 % from acetate. This disaccharide supports the absorption of calcium in the lower sections of the small intestine, facilitates the transport of calcium ions into erythrocytes and improves the absorption of magnesium, phosphorus and other elements. In addition, it has a positive effect on the body's utilization of vitamin D and is a natural source of glucose, which is involved in the synthesis of important structural compounds of the nervous system. Many people suffer from lactose intolerance. This condition is due to a decrease in the activity, or the absence, of the β-galactosidase enzyme. Some symptoms of lactose intolerance are bloating, abdominal pain, diarrhoea, nausea and vomiting (Ziarno and Truszkowska, 2005). Lactose intolerance usually occurs after ingestion of 7-15 g. Intolerant people are advised to consume milk in the form of fermented beverages containing hydrolysed lactose (up to 50 % of its original content) and active β-galactosidase produced by lactic acid bacteria (Ziarno, 2006). The proportion of lactose in goat's milk is 0.2-0.5 % smaller than in cow's milk (Pandya and Ghodke, 2007). Goat's milk contains between 250 and 300 mg/L of oligosaccharides, four or five times more than cow's milk, but much less than breast milk (5-8 g/L). The oligosaccharides present in goat's milk have a complex structure, whose profile is similar to that of human milk oligosaccharides. For this reason, they can be used successfully in the production of infant formula for newborns.

Minerals and vitamins
Mineral components include mineral salts as well as salts of organic acids. They affect the physical properties of milk, mainly non-protein stability. The main mineral components found in goat's milk are calcium, phosphorus, potassium and chlorine (Table 9).
Goat's milk is characterized by iron and copper deficiency. This can lead to anemia in children who are given only this type of milk (Pełczyńska, 1995; Szczepaniak and Libudzisz, 2000). The levels of potassium and chlorine in goat's milk, which are high in relation to cow's milk, can contribute to an excess of these elements in the diet and to potential intestinal disturbances. Hence, for infant feeding goat's milk should be diluted 2:1 (Danków and Pikul, 2011). An advantage of goat's milk is its approximately 30 % higher magnesium content (15-18 mg/100 g); magnesium is responsible for many enzymatic reactions in living organisms. In addition, it reduces tension in the nervous system, protects against lead accumulation and improves the body's resistance to the influence of biometeorological factors (Borek-Wojciechowska, 1994). The high selenium (0.013 mg/kg) and glutathione peroxidase (57.3 mU/mL) content gives goat's milk strong antioxidant properties (Haenlein and Wendorf, 2006; Park et al., 2007). Goat's milk is considered a good source of retinol, B vitamins (especially B1 and B2), vitamin C and niacin (Table 10).

The ability to increase the content of bioactive ingredients in processed milk
The nutritional value of goat's milk is high. It is used as an alternative to cow's milk in the diet of children and adults. As a result of species specificity, goat's milk lipids are characterized by a higher content of short- and medium-chain fatty acids, which are faster to digest (Blasi et al., 2008). The goat's fatty acid profile is specific due to a unique cholesterol metabolism, which facilitates the dissolution of cholesterol in bile acids. Goat's milk is used in the diet of people suffering from cardiovascular disease and epilepsy, and in premature babies (Park, 1994; Jandal, 1996; Park, 2006). Allergy to goat's milk is about 72-73 % lower than to cow's milk with respect to α-lactalbumin and about 96 % lower with respect to β-lactoglobulin (Mituniewicz-Małek et al., 2011). It has been shown that in children aged 6-13 years who received approximately 1 L of raw goat's milk every day for 5 months, mineralization and bone density improved and the vitamin A content in blood plasma increased. In addition, studies in Western countries show that goat's milk provides relief for rheumatism (Piendziak-Lendzion and Niedziółka, 2004). Thanks to its nutritional and dietary value, goat's milk is especially recommended for people suffering from allergies, for convalescents and for children. Under natural conditions, goats eat nearly 450 plant species, many of which contain medicinal substances and important micronutrients (Milewski and Kędzior, 2010). The nutritional value of milk is closely related to its composition. It is believed that differences in the basic composition of milk are related to the different needs of the young of individual mammalian species. Therefore, the level of the most important components of milk fluctuates and is linked to genetic and non-genetic factors, i.e. environmental and nutritional factors (Pijanowski, 1980; Litwińczuk, 2004; Danków and Pikul, 2011).
The most important genetic factor is the breed, which is the main cause of changes in the content of individual milk components but also determines the amount of raw material. The milk composition can be modified by crossbreeding or by breeding selection (Krzyżewska, 2011). Genetic factors directly affect physiological factors. Among the physiological factors determining the chemical composition of milk is the lactation phase. The most important differences are observed during the initial lactation period, when the glands secrete colostrum; its composition deviates almost completely from the milk secreted in later periods. Moreover, the age of the animals, the interval between feeding the offspring and milking, as well as the health status of the female, are also important. The most important environmental factors include climatic conditions, the way of feeding and the season. The greatest variations concern the content and composition of milk fat (Litwińczuk, 2004; Danków and Pikul, 2011). Recently, there has been a growing interest in research aimed at increasing the bioactive ingredients in milk and dairy products, not only from cow's milk. One of the ways this can be achieved is by supplementing ruminant diets under controlled conditions to alter the fatty acid composition, and another is by using membrane processes to alter the composition of milk proteins.

Controlled animal nutrition
The way animals are fed has a significant influence on the formation of biologically active ingredients in milk. There are three ways of feeding animals: pasture, ecological and barn. Increasing the proportion of fodder feed in the animal diet reduces the milk fat content while increasing the protein level. A common method for increasing the amount of unsaturated fatty acids in milk is the addition of oilseeds, sea algae or fish oil to the feed (Krzyżewska, 2011). To protect the added fatty acids from excessive biohydrogenation, fats are added to the feed in the form of calcium salts of fatty acids. There are different nutritional strategies affecting the quantitative and qualitative composition of lipids in ruminant milk. One of them is the intensification of processes occurring naturally in the animal's body, i.e. the creation of optimal conditions for the development of the rumen microflora (Szumacher-Strabel, 2010). The ruminal lipolysis process depends on the presence of Butyrivibrio sp. bacteria. Ruminal protozoa, mainly Epidinium spp., also play an important role; they constitute approximately 30 % of the yield. The source of biologically active compounds in ruminant milk is mainly unsaturated fatty acids, which are the substrate for biohydrogenation and for de novo synthesis of fatty acids in the mammary gland. Feeds that increase the pool of unsaturated fats include fresh green fodder. Using flaxseed, rapeseed and soy in the animal feed reduces the content of lauric, myristic and palmitic acids, and rapeseed or flaxseed oil can also facilitate the biohydrogenation process (Cieślak et al., 2009; Cais-Sokolińska et al., 2011). A reduction of polyunsaturated n-6 acids relative to n-3 acids in milk can be achieved by the presence of saturated short-chain fatty acids in the feed, as they inhibit the conversion of n-6 acids, thus favourably affecting the n-3 to n-6 ratio.
Traditional summer grazing on pastures or fortification of fodder with sunflower, flaxseed or corn oil contributes to an increase of CLA in milk (Kuczyńska and Puppel, 2009). The main pathways of CLA formation are the biohydrogenation of unsaturated (linoleic, linolenic, oleic) acids in the rumen and the endogenous synthesis from vaccenic acid in the mammary gland (Castro et al., 2009; Szumacher-Strabel et al., 2011). The biohydrogenation process occurs in the rumen of ruminants in the presence of bacteria of the genus Butyrivibrio fibrisolvens or Megasphaera elsdenii. Mono- and polyunsaturated fatty acids are isomerized by bacterial enzymes. The first stage of the biohydrogenation process ends with the formation of vaccenic acid, which is the second intermediate. The resulting vaccenic acid is hydrogenated by microorganisms into stearic acid (Szumacher-Strabel, 2005; Ciołkowska et al., 2006). The second way to produce significant amounts of CLA, as much as 65 % of the cis9, trans11 C18:2 isomer, is through endogenous synthesis in the mammary gland. This process occurs with the participation of Δ9-desaturase (Meluchova et al., 2008; Castro et al., 2009).

Membrane separation as a method to change the composition of processed milk
One technique for standardizing milk which has been greatly appreciated in recent years is membrane separation. It is known that the composition of milk fluctuates depending on the season, lactation period, breed and other factors, and membrane techniques enable effective normalization of milk components without the need for additives. The most commonly used membrane filtration methods in the dairy industry include microfiltration, ultrafiltration, nanofiltration and reverse osmosis (Kurkowska, 2001). In membrane separation processes, mainly liquids containing many components with different levels of dispersion in solution are processed; therefore these methods are widely used in dairy technology. As a result of the flow of raw material through the membrane unit, two streams are formed: a permeate, consisting of water and the substances permeating through the membrane, and a retentate, i.e. a stream enriched with the components retained on the membrane. The concentration of dry substance in the permeate is always lower than in the feed stream, and the concentration of the retentate components is always greater than in the feed. The retentate is often called the concentrate (Saboya and Maubois, 2000). Transport through the membrane is achieved by using an appropriate driving force. The driving force of mass transport through the membrane is the difference in chemical potential on both sides of the membrane. This difference can be caused by differences in pressure, concentration, temperature or electrical potential. In membrane techniques, the transport of molecules is caused by a difference in potential on both sides of the membrane, and the separation is due to the difference in the transport rates of the solution's components (Saboya and Maubois, 2000; Coutinho et al., 2009). Industrial production of fermented milk drinks requires a specific dry matter content in the processed milk. It is recommended that its content in milk for yoghurt production is 14-18 %. The chemical composition of purchased milk is not constant and is subject to seasonal variations, so it is necessary to standardize the dry matter content of milk. The most commonly used method for increasing it is the addition of skimmed milk powder. Other means of normalizing processed milk include evaporation of the milk and the addition of milk concentrate or milk proteins. Alternative methods for increasing the dry matter content of milk are membrane techniques. Currently, the technique most commonly used for standardizing milk is ultrafiltration (Kowalska et al., 2000; Domagała and Wszołek, 2008). This allows for a high concentration of milk components, standardizing their contents and changing their proportions.
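As a rough illustration of the mass balance behind such dry-matter standardization by membrane concentration, the sketch below assumes, purely for the sake of arithmetic, that all solids are fully retained by the membrane and that the permeate is pure water; in real ultrafiltration this is not strictly true, since lactose and minerals partly pass into the permeate. The function name and the numbers are illustrative, not taken from the cited studies.

```python
def permeate_to_remove(feed_mass_kg, feed_solids_pct, target_solids_pct):
    """Mass of (assumed solids-free) permeate to remove so that the
    retentate reaches the target dry-matter content.

    Simplification: all solids stay in the retentate, so the solids mass
    is constant:  feed_mass * feed_solids = retentate_mass * target_solids.
    """
    solids_mass = feed_mass_kg * feed_solids_pct / 100.0
    retentate_mass = solids_mass / (target_solids_pct / 100.0)
    return feed_mass_kg - retentate_mass

# Example: raising 1000 kg of milk from 12 % to 15 % dry matter would,
# under this simplification, require removing about 200 kg of permeate.
print(permeate_to_remove(1000, 12, 15))  # -> 200.0
```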
Another membrane technique commonly used in the dairy industry is microfiltration. The wide range of microfiltration membrane pore sizes (0.1-10 μm) allows the separation and fractionation of milk components. At present, microfiltration is used to remove microorganisms from milk, allowing the production of microbiologically pure milk. This process removes 99.91 % of the total bacteria from the milk, completely eliminating sulphite-reducing spores, dead cells and other microbial contaminants. It avoids more drastic heat treatment and thus allows valuable nutrients to be retained in the milk. Milk produced on the basis of a microfiltration technique, which has a prolonged shelf life, is called Extended Shelf Life (ESL) milk (Śmietana et al., 2004). In addition, microfiltration can be used to separate casein micelles from whey proteins using a suitably selected membrane (pore size 0.1-0.2 μm). This creates the opportunity to separate or concentrate casein proteins for cheese production and to modify the proportion of whey proteins to casein in processed milk. Membranes used in microfiltration can be prepared from inorganic materials, i.e. ceramics, metals and glass, and from organic polymers. Depending on the membrane material, the following can be used (Żulewska, 2010): cellulose acetate, with hydrophilic properties, so that it is characterized by low susceptibility to the formation of deposits on the membrane; polysulphones, with high thermal resistance; polyamides, with greater tolerance to the pH of the microfiltered material; and ceramics, with high chemical and thermal stability (up to 130 °C). The greatest benefit of using membrane techniques for the standardization of milk and the removal of microorganisms is the absence of high temperatures and of chemical or biological agents that could contribute to the degradation of valuable components. The use of membrane processes in dairying reduces production costs through lower energy and raw material consumption. Modules for the process are not expensive, are easy to use and guarantee high performance. This kind of filtration can be applied at every production scale thanks to the modular construction of the process. The use of membrane separation methods enables the production of new products with increased bioactive ingredients and beneficial functional, sensory and nutritional properties, as well as the management of by-products (Tziboula et al., 1998; Kurkowska, 2001; Debon et al., 2010).

Design of fermented dairy beverages with an increased share of bioactive ingredients
In recent years the food market has been under increased pressure to expand its range of functional and convenience foods. Increasing public awareness of healthy diets generates changes in dietary habits, in turn leading to changes in the production of all foods. This also applies to milk processing companies, which seek to increase their range of fermented milk drinks. Products come not only from cow's milk, but also from small-ruminant milk. These drinks contain a properly selected microflora, both technologically and nutritionally (Minervini et al., 2009).
The content of biologically active ingredients in a dairy product is the result of several factors. The feeding of the animals plays a very important role in this case. Modification of the feed composition directly influences the chemical composition of milk. The component most susceptible to change is milk fat. Biologically active compounds present in milk fat are produced by the biohydrogenation of unsaturated fatty acids in the rumen. In the case of ruminants, by using a suitably composed feed we determine the composition of the ruminal microflora, which in turn determines the direction of fermentation and the composition of the milk. By utilizing milk rich in bioactive ingredients in processing, we can design a product with a higher nutritional value (Cieślak et al., 2009; Szumacher-Strabel, 2010). Nutritional methods for changing milk constituents can also be applied to the regulation of protein content. They involve the modification of animal feeds in such a way that the amino acid composition of the milk produced corresponds to the body's needs for all essential amino acids. Cais-Sokolińska et al. (2011), in their study of the effect of oilcake on goat's milk, found that this supplement reduced the cholesterol content. Significant differences were observed in the composition of sheep's and goat's milk obtained under winter and summer (pasture) feeding conditions. Szumacher-Strabel et al. (2011) found that ruminant milk was characterized by a higher content of C18:0 and C18:1 acids during summer and a lower content of C4:0 to C16:0 acids. Moreover, Jahreis et al. (1999) showed that goat's and sheep's milk in spring and summer exhibited a significantly higher content of conjugated linoleic acid dienes compared to the autumn and winter months. Over the past 20 years a number of new whey protein products have been researched. These proteins have a high biological value, higher than that of soy proteins, eggs or milk casein (Smithers, 2008; Bhat and Bhat, 2011). Their properties and biological value are mainly related to the high content of branched-chain exogenous amino acids such as isoleucine, leucine and valine, which stimulate the synthesis of muscle proteins. Whey proteins have many health-promoting properties. However, they contain β-lactoglobulin, which is a milk allergen. Inoculation of milk with lactic acid bacteria leads to protein hydrolysis.
Some strains also exhibit the ability to decompose β-lactoglobulin in milk. It has also been found that some probiotic strains are capable of increasing tolerance to this allergen and affecting the distribution of proteins in the intestinal mucosa (Pescuma et al., 2010). Kirjavainen et al. (2003) have shown that supplementation of food for children with Lactobacillus GG causes a significant reduction or even elimination of milk allergy. Fermented milk is a metabolically active product and exhibits changes during storage (acidification, loss of viability of starter cultures). An increased whey protein content improves the viability of starter cultures, which is important in the design of functional foods with a high nutritional value (Smithers, 2008). A larger whey protein buffer capacity is likely to protect against secondary acidification of products during refrigerated storage. Research by Pescuma et al. (2010) on whey protein-fortified milk drinks indicates that an increase in whey protein content improved the growth and proteolytic activity of Lactobacillus delbrueckii subsp. bulgaricus. As a result of protein fermentation, essential amino acids such as leucine, valine, isoleucine, lysine and threonine were released. Leucine, isoleucine and valine provide energy to the muscles and accelerate the synthesis of alanine and glutamic acid during stress caused by trauma, infection or intense exercise. Leucine, lysine, tryptophan, isoleucine and threonine also act as metabolic glucose regulators and affect protein metabolism, which supports weight control (Pescuma et al., 2010). Whey proteins are widely used in the production of dietary foods, especially high-protein supplements for children, athletes and convalescents. They are also used for the production of thin edible coatings suitable for storing fruits and vegetables. Functional properties of whey proteins include water retention capacity, foaming ability, emulsifying properties and gel formation, as well as viscosity improvement. These proteins are used in the production of yoghurts, cheeses, creams, sausages and bakery products (Glibowski, 2004; Livney, 2010). In yoghurt production they improve flavour, texture and nutritional value, reduce syneresis, prolong shelf life, extend probiotic viability, reduce costs and reduce the addition of non-dairy ingredients such as starch, gelatine and pectin (Glibowski, 2004; Onwulata and Tomasula, 2006). Fortification of yoghurts with whey proteins increases the water-binding capacity. A higher content of this fraction of milk proteins results in a more homogeneous microstructure. The smaller pore diameter makes it more difficult for the solution to migrate from the depth of the yoghurt to the surface. Whey proteins are often defined as ideal proteins from the nutritional point of view, because they contain the essential amino acids needed for the proper functioning of the human body as well as bioactive peptides. The amino acid composition of whey protein is almost identical to the essential amino acid profile of a correct diet, so their use in the design of new dairy products has increased (Gad and Sayed, 2009; Fluegel et al., 2010).
Increasing the functional properties of fermented milk drinks can also be achieved by using probiotic bacteria in their production. To manufacture a product with the desired organoleptic qualities and to preserve the pro-health properties of probiotic cultures, the cultures must be carefully selected. The characteristics they should have are: moderate acidogenic activity, the ability to grow in milk, antagonism towards food spoilage bacteria, and good survival during refrigerated storage. Fermented milk beverages made with probiotic microflora are a functional food due to their documented properties, i.e. alleviation of lactose intolerance, inhibition of the development of pathogenic bacteria, a hypocholesterolemic effect, normalization of intestinal motility disorders and inhibition of bacterial nitroreductase, which catalyses nitrosamine synthesis (Lacroix and Yildirim, 2007; Reid, 2008; Thirabunyanon et al., 2009; Aureli et al., 2011).

Conclusion
Global production of goat's milk stands at about 16 million tons per year. Outside the cheese industry, goat's milk is increasingly used for the production of fermented milk drinks. These drinks are currently the fastest growing branch of the dairy industry. This is due to the growing consumption of such products, estimated to increase by about 0.7 % per year on a global scale. Conscious and rational diets among the modern population have resulted in an increased interest in products made from goat's milk, which has a composition different from that of the commonly used cow's milk. Goat's milk is characterized by easier digestibility and a higher buffer capacity than cow's milk. In the latter, the largest part of the casein is αs1-casein, which is responsible for causing allergic reactions. Its contribution to the total nitrogen of goat's milk casein is 25 %, while in cow's milk it is 38 %. The αs1-casein content in the goat's milk protein fraction is much smaller in certain breeds, or even absent. As a result, goat's milk can be a good alternative in some cases of cow's milk allergy. Moreover, goat's milk fat is present in the form of an emulsion consisting of fat globules of a smaller diameter than is the case in cow's milk. It does not contain the enzyme agglutinin, which causes clumping of the fat globules when the milk is cooled. Compared to cow's milk, goat's milk fat contains more mono- and polyunsaturated fatty acids, which results in this milk having beneficial nutritional characteristics. Goat's milk also contains more short-chain fatty acids than cow's milk. In conclusion, because of its characteristic and functional properties, goat's milk products should be a significant part of a healthy and balanced daily diet.
The effect of CPP-ACP containing fluoride on Streptococcus mutans adhesion and enamel roughness

Background: Direct contact between the bleaching agent and the enamel surface results in demineralization, alteration of surface roughness and bacterial adhesion. Many studies try to minimize these side effects in different ways. Purpose: The aim of this study was to determine the effect of applying casein phosphopeptide-amorphous calcium phosphate (CPP-ACP) containing fluoride before and after the bleaching procedure on the adhesion of S. mutans and on enamel roughness. Methods: The samples were 6 teeth, each cut into four pieces, and the pieces were divided into 4 groups. Groups A and C were treated with CPP-ACP after bleaching, while groups B and D were treated with CPP-ACP before and after bleaching. The CPP-ACP used in groups C and D was the one containing fluoride. After treatment, all samples were sterilized, immersed in sterile human saliva for one hour, then immersed in an S. mutans suspension of 10^8 CFU. Samples were incubated overnight. On the next day the samples were put into sterile BHI and vortexed for one minute to detach the bacteria. Fifteen ml of BHI containing bacteria was poured onto TYS agar and incubated at 37 °C for 48 hours. Bacterial colonies were counted with a colony counter. SEM examination was performed on all samples. Results: Application of the desensitizing agent significantly reduced S. mutans adhesion among the groups (p<0.05), except between groups A and C. SEM evaluation revealed significant differences among groups. Conclusion: The application of CPP-ACP containing fluoride before and after bleaching was effective in reducing the accumulation of S. mutans colonies and enamel roughness.

Correspondence: Yulita Kristanti, c/o Bagian Ilmu Konservasi Gigi, Fakultas Kedokteran Gigi Universitas Gadjah Mada, Jl. Denta, Sekip Utara, Yogyakarta 55281, Indonesia. E-mail: litaugm11@gmail.com

Introduction
Information concerning esthetic dentistry that can be accessed easily makes people more concerned about improving their appearance. Brighter teeth are something that encourages people to see a dentist to bleach their teeth. Bleaching can be categorized into 2 groups: intracoronal bleaching and extracoronal bleaching. Intracoronal bleaching is performed to bleach non-vital teeth, while extracoronal bleaching is indicated for vital teeth. Bleaching of vital teeth can be classified into at-home bleaching (professionally dispensed) and in-office bleaching (professionally administered).1 People prefer their teeth to be bleached by a dentist (in-office bleaching technique) rather than doing it themselves because it does not take a long time to see the bleaching result. Besides that, they also feel more secure if the bleaching process is done by the dentist because of better gingival protection and better sensitivity control. This procedure is suitable for patients with bleaching tray intolerance, although it results in higher tooth sensitivity than at-home bleaching because of the higher concentrations used in this technique.2
The mechanism of bleaching discolored teeth is still unclear. It differs according to the type of discoloration involved and the chemical and physical conditions at the time of the reaction. Bleaching agents are mainly oxidizers, slowly degrading organic structures into chemical products, such as carbon dioxide, that are lighter in color. The oxidation-reduction reaction that occurs during bleaching is known as a redox reaction. The mechanism of bleaching involves the degradation of the extracellular matrix and oxidation of chromophores located within enamel and dentin.3,4 Beside tooth sensitivity, there are several bleaching side effects that must be anticipated, such as mineral loss, surface roughness and increased adhesion of bacteria such as Streptococcus mutans (S. mutans). S. mutans has the ability to metabolize many sugars and produces various organic acids. S. mutans is a strong acid producer and hence creates an acidic environment. Consequently, it lowers the pH of the surrounding environment, which leads to tooth demineralization.5,6 According to Mithra and Moeny, remineralization using the desensitizing agent casein phosphopeptide-amorphous calcium phosphate (CPP-ACP), applied for 3 minutes twice a day, was achieved after 35 days.7 Meanwhile, the bleaching effect results in enamel porosity, which enhances trans-enamel diffusion to reach deep areas of dentin and the pulp chamber.8 In vitro studies have demonstrated that the high concentration of toxic components released from the 35% hydrogen peroxide bleaching gels used for in-office treatment is capable of diffusing through enamel and dentin and significantly decreases the metabolism of pulp cells.9,10 However, little has been reported about the adhesion of S. mutans and its relationship with the different morphological alterations of the outer enamel surface. The aim of this study was to determine the effect of fluoride and non-fluoride desensitizing agents applied prior to and after the bleaching procedure on the adhesion of S. mutans to enamel and on enamel roughness.

Materials and methods
Six extracted maxillary premolars were used as samples in this study. Each sample was cut into 4 pieces, and the 24 pieces were classified into 4 groups of 6 specimens each. Group A was treated with non-fluoride CPP-ACP after the bleaching procedure. Group B was treated with non-fluoride CPP-ACP before and after in-office bleaching. Group C was treated with CPP-ACP containing fluoride after bleaching, and group D was treated with CPP-ACP containing fluoride before and after bleaching. Group A was bleached using 40% hydrogen peroxide for one hour, washed and dried, followed by the application of non-fluoride CPP-ACP for 30 minutes, immersion in human saliva for one hour, immersion in an S. mutans suspension of 10^8 CFU, and incubation at 37 °C for 24 hours. The day after, the teeth were put into 3 ml of sterilized Brain Heart Infusion (BHI) and vortexed for 1 minute.11 After a 10^3 dilution, 0.1 ml of BHI containing S. mutans was cultured on Trypticase-Soy-Sucrose-Bacitracin (TYS 20B) agar and incubated at 37 °C for 48 hours. S. mutans colonies were counted using a colony counter. In group B, before the in-office bleaching procedure was performed, non-fluoride CPP-ACP was applied for 30 minutes, then the samples were washed and dried. After bleaching, the non-fluoride CPP-ACP was applied again, followed by washing, immersion in human saliva for one hour, immersion in an S. mutans suspension of 10^8 CFU, and incubation at 37 °C for 24 hours.
mutans suspension of 10⁸ CFU, with incubation at 37°C for 24 hours. The day after, the teeth were put into 3 ml of sterilized BHI and vortexed for 1 minute. 12 After a 10³-fold dilution, 0.1 ml of the BHI containing S. mutans was cultured on TYS 20B and incubated at 37°C for 48 hours. 12 S. mutans colonies were counted using a colony counter. Group C was treated with the same procedure as group A, but in group C the CPP-ACP used was the fluoride-containing one. CPP-ACP (F) was also used in group D, where it was applied before and after the bleaching procedure.
results The average S. mutans colony accumulation is shown in Figure 1. The results showed that the accumulation of S. mutans colonies decreased when CPP-ACP was used before and after bleaching. CPP-ACP (F) also reduced the number of S. mutans colonies (Figure 1). One-way ANOVA showed a significant difference (p<0.005) in the accumulation of S. mutans colonies among groups with single and double CPP-ACP application, with and without fluoride (Table 1). There was a significant difference between groups (p<0.005), except between groups A and C (p>0.005). This may be caused by the low fluoride dose when the desensitizing agent (DA) was applied only once. Scanning electron microscope (SEM) evaluation at 2000× magnification was performed to detect enamel surface roughness. In group A (CPP-ACP after bleaching only), the dissolved periphery of the enamel prisms and porosities could be seen. Areas of remineralization were noted, although in some places dissolved prism cores and dissolved interprismatic substance were still evident (Figure 2). In group B (CPP-ACP before and after bleaching), the configuration of the enamel was apparent with few porous defects. Areas of remineralization were clearly evident (Figure 3). In group C (CPP-ACP (F) after bleaching only), the porosities could still be seen clearly, but not as many as in group A. Certain areas of remineralization were evident (Figure 4). In group D (CPP-ACP (F) before and after bleaching), the areas of mineralized deposits were more evident. The different patterns among groups A, B, C and D may be due to variation in crystallite orientation in the enamel prisms (Figure 5). From the point of view of these different morphological alterations, the SEM evaluation supports the results of this study: the greater the topographic irregularity (roughness) of a material, the more S. mutans colonies could be counted.
discussion The results showed that significant differences occurred in the accumulation of S. mutans colonies between groups A and B, A and D, and B and C. This phenomenon suggests that there is a relationship between the accumulation of S.
mutans and the different surface roughness of groups A and B, A and D, and B and C as a result of treatment with a tooth bleaching agent containing 40% H₂O₂ (Figures 2-5). Tooth bleaching releases reactive free radicals that act on the reducing agent, so the yellowish pigment (xanthopterin) is oxidized into a whiter pigment (leucopterin). 13 Bleaching agents can produce undesirable effects such as hypersensitivity, gingival irritation, micromorphological defects due to demineralization, and effects on restorative materials. 1 The results of this study show that the highest mean S. mutans colony count occurred in group A, followed by groups C, B and D. This means that applying CPP-ACP (either with or without fluoride) both before and after bleaching was effective in reducing S. mutans colonies. The mean S. mutans colony count in group C was below that in group A, which means that the fluoride-containing DA gave a better result in reducing S. mutans colonies. SEM evaluation showed that group A displayed the highest roughness, followed by groups C, B and D. Surface roughness can be measured by qualitative methods such as SEM or by quantitative methods such as profilometry. SEM is a powerful magnification tool that uses focused electron beams to obtain information, while a profilometer is a simple tool that measures a surface's profile in order to quantify its roughness. It determines line roughness, in either the vertical or the horizontal direction, but it cannot penetrate certain micro-irregularities. This tool is easy to operate and gives measurement results rapidly. Both SEM and profilometry have limitations in defining surface topography: the electron beam technique in SEM does not allow visualization of three-dimensional surface texture. 14,15 According to Katsikogianni, the first stage of bacterial adhesion consists of the initial attraction of the cells to the surface, followed by adsorption and attachment. At the second stage, molecular-specific reactions between bacterial surface structures and the substratum surface become predominant. 16 There was no significant difference between groups A and C. This fact can be examined from three perspectives. First, it has been suggested that S. mutans can adapt to fluoride because of either widespread or long-term fluoride use. A few alterations can be detected in fluoride-resistant S. mutans; one of them concerns the membrane fatty acids. Membrane fatty acids play an important role in maintaining normal physiological cell function and tolerance of physiological stressors, including oxidative stress. 17,18 Cell membranes, which are structurally made of large amounts of polyunsaturated fatty acids, are highly susceptible to oxidative attack. Oxidative attack results in oxidative stress when the balance between reactive oxygen species and antioxidant defences is lost. Free radical-mediated oxidative stress results in oxidation of membrane lipoproteins, glycoxidation and oxidation of DNA, and subsequently in cell death. 19 In fluoride-resistant S. mutans, the unsaturated/saturated ratio during the stationary phase was higher than in the wild-type strain. A significant difference in the amount of long-chain monounsaturated fatty acids between the fluoride-resistant strain and the wild-type strain was detected under acidic conditions. A previous study conducted by Zhu et al.
17 showed that the RNA level of the gene responsible for fatty acid biosynthesis (fabM) in the fluoride-resistant strain was also significantly higher than that of the wild-type strain under acidic conditions, although the sequence of the fabM gene was the same in the fluoride-resistant strain as that reported for the wild-type strain; the fabM gene sequence from the fluoride-resistant strain was 100% homologous to the wild-type strain. The single gene product of fabM in S. mutans is responsible for the synthesis of monounsaturated fatty acids and is necessary for survival in an acidic environment. Alteration of the lipid content of an organism's membranes is of major importance in the response to environmental stress. These findings are consistent with Zhu et al., 17 who found that fluoride-resistant S. mutans exhibits a greater ability to resist acid stress compared with wild-type S. mutans. Another possibility that may influence this result is that the fluoride concentration of the fluoride-containing CPP-ACP used in group C is too low (900 ppm). When the CPP-ACP is applied only once, after bleaching, the fluoride delivered remains below that of traditional treatments such as mouth rinses or toothpastes (1500 ppm). Researchers are still uncertain how to formulate the precise concentration of fluoride, so the debate around fluoride concentration persists. Treatment with 5000 ppm fluoride significantly enhanced remineralization and inhibited demineralization when compared with treatment with 1500 ppm. 19 In group D, CPP-ACP containing fluoride was delivered twice, before and after bleaching, meaning that the total fluoride delivered was 1800 ppm. The present findings showed a lower surface roughness value and lower S. mutans colony accumulation (data not shown). Our results indicated that fluoride combined with CPP-ACP and delivered before and after bleaching (group D) showed lower S. mutans colony accumulation than CPP-ACP without fluoride delivered in the same manner (group B). This is because fluoride can inhibit the glucosyltransferase produced by S. mutans by inhibiting enolase, which has an important role in glycolysis. Besides that, fluoride is involved in forming metal-fluoride complexes, usually aluminium fluoride (AlF4), which account for inhibiting glucan formation from glucose 6-phosphatase and thus inhibit glucosyltransferase activity. 20 In the colony counting method, the difference between live and dead S. mutans could not be seen, so false positives are possible. The surface roughness value of group A appears to be the same as that of group C. This could be the result of contaminants, such as residues of the desensitizing agent, that cannot be rinsed off because of retentive topography on the enamel surface. Enamel topography itself is highly variable: every part of the enamel surface has a different surface roughness value, which can influence its capacity to retain the agent. In this study, the surface roughness values of human enamel before treatment were almost all above 1 µm, which means that the starting surface roughness was above the critical value for bacterial colonization (0.2 µm). 21 The study suggests that the application of CPP-ACP containing fluoride before and after bleaching was effective in reducing S. mutans accumulation and enamel roughness.
Figure 4. Group C (bleaching → CPP-ACP (F)): porosities could be seen and certain areas of remineralization were evident. Table 1. One-way ANOVA of the accumulation of S.
mutans colonies after single and double desensitizing agent application, with and without fluoride.
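As a rough illustration of the statistical comparison reported in Table 1, the short Python sketch below runs a one-way ANOVA and pairwise tests on colony counts from the four treatment groups. The counts, the use of plain t-tests for the pairwise comparisons, and the scipy routines are illustrative assumptions; they are not the study's actual data or analysis software.

```python
# Illustrative sketch: one-way ANOVA on S. mutans colony counts (CFU) per group.
# The numbers below are made-up placeholders, not the study's data.
from itertools import combinations
from scipy import stats

groups = {
    "A": [152, 160, 148, 155, 158, 150],   # CPP-ACP after bleaching only
    "B": [118, 110, 115, 121, 112, 117],   # CPP-ACP before and after bleaching
    "C": [145, 150, 142, 148, 151, 144],   # CPP-ACP(F) after bleaching only
    "D": [95, 100, 92, 98, 96, 94],        # CPP-ACP(F) before and after bleaching
}

# Overall one-way ANOVA across the four treatment groups.
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Pairwise comparisons (e.g. A vs C), here with simple t-tests for illustration.
for g1, g2 in combinations(groups, 2):
    t, p = stats.ttest_ind(groups[g1], groups[g2])
    print(f"{g1} vs {g2}: t = {t:.2f}, p = {p:.4f}")
```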
2018-12-05T18:37:43.319Z
2013-12-01T00:00:00.000
{ "year": 2013, "sha1": "3244fb9a874d38297db74c33efdb1cedc5496eb4", "oa_license": "CCBYSA", "oa_url": "https://e-journal.unair.ac.id/MKG/article/download/758/558", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "3244fb9a874d38297db74c33efdb1cedc5496eb4", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Chemistry" ] }
121086081
pes2o/s2orc
v3-fos-license
Radiatively Generated Leptogenesis in S4 Flavor Symmetry Models We study how leptogenesis can be implemented in seesaw models with S4 flavor symmetry, which lead to the tri-bimaximal neutrino mixing matrix. By considering renormalization group evolution from a high-energy scale of flavor symmetry breaking (assumed to be the GUT scale) to the low-energy scale of the relevant phenomena, the off-diagonal terms in a combination of the Dirac Yukawa-coupling matrix can be generated and the degeneracy of the heavy right-handed neutrino Majorana masses can be lifted. As a result, flavored leptogenesis is successfully realized. We also investigate how the effective light neutrino mass |⟨m_ee⟩| associated with neutrinoless double beta decay can be predicted, along with the neutrino mass hierarchies, by imposing the experimental data on the low-energy observables. We find a link between the leptogenesis and the neutrinoless double beta decay characterized by |⟨m_ee⟩| through a high-energy CP phase φ, which is correlated with the low-energy Majorana CP phases. It is shown that the predictions of |⟨m_ee⟩| for some fixed parameters of the high-energy physics can be constrained by the current observation of the baryon asymmetry.
Introduction The neutrino experimental data can provide an important clue for elucidating the origin of the observed hierarchies in the mass matrices of quarks and leptons. The recent neutrino oscillation experiments have entered a new phase of precise determination of the mixing angles and squared mass differences [1, 2], which indicate that the tri-bimaximal mixing (TBM) pattern for the three lepton flavors (1.1) can be regarded as the PMNS matrix, U_PMNS ≡ U_TB P_ν [3-6], where P_ν is a diagonal matrix of CP phases. However, properties related to leptonic CP violation are not yet completely known. The large mixing angles, which may be suggestive of a flavor symmetry, are completely different from the quark mixing angles. Therefore, it is very important to find a model that naturally leads to those mixing patterns of quarks and leptons with good accuracy. In recent years there have been many efforts to search for models which yield the TBM pattern naturally, and a fascinating approach seems to be the use of discrete non-Abelian flavor groups added to the gauge groups of the standard model. There is a series of proposals based on the groups A4 [7-16], T′ [17-21], and S4 [22-36]. The common feature of these models is that they are naturally realized at a very high energy scale Λ and the groups are spontaneously broken by a set of scalar multiplets, the flavons. In addition to explaining the smallness of the observed neutrino masses, the seesaw mechanism [37-39] has another appealing feature, the so-called leptogenesis mechanism for the generation of the observed baryon asymmetry of the Universe (BAU) through the decay of heavy right-handed (RH) Majorana neutrinos [40-44]. If the BAU was generated via leptogenesis, then CP violation in the leptonic sector is required. For Majorana neutrinos of three flavors there are one Dirac-type phase and two Majorana-type phases, one of which, or a combination of which, can in principle be measured through neutrinoless double beta (0ν2β) decays [45-48]. The exact TBM pattern forbids, at low energy, CP violation in neutrino oscillations, due to U_e3 = 0.
Therefore, any observation of leptonic CP violation, for instance in the 0ν2β decay, can strengthen our belief in the leptogenesis by demonstrating that CP is not a symmetry of leptons. It is interesting to explore the existence of CP violation due to the Majorana CP-violating phases by measuring |⟨m_ee⟩| and to examine a link between the observable low-energy 0ν2β decay and the BAU. The authors in [35, 36] have shown that the TBM pattern can be generated naturally in the framework of the seesaw mechanism with SU(2)_L × U(1)_Y × S4 symmetry. The textures of the mass matrices given in [35, 36], however, could not generate a lepton asymmetry, which is essential for baryogenesis. In this paper, we investigate the possibility of radiative leptogenesis when renormalization group (RG) effects are taken into account. We will show that the leptogenesis can be linked to the 0ν2β decay through the seesaw mechanism. The rest of this work is organized as follows. In Section 2, we present the low-energy observables in two variants of the supersymmetric seesaw model based on the flavor symmetry S4. We especially focus on the effective neutrino mass governing the 0ν2β decay. In Section 3, we study RG effects on the Yukawa couplings and the heavy Majorana neutrino mass matrices, so that leptogenesis becomes available. The leptogenesis in the two models due to the RG effects is studied in detail in Section 4. Finally, Section 5 is devoted to our conclusions.
Two S4 Models In this section we review the main features of the Bazzocchi-Merlo-Morisi (BMM) model [35] and the Ding model [36]. We simultaneously discuss the 0ν2β decay, leptogenesis, and the phenomenological difficulties associated with the models that remain to be solved.
Bazzocchi-Merlo-Morisi Model In this model the flavor symmetry is S4, accompanied by the cyclic group Z5 and a Froggatt-Nielsen symmetry. The matter fields and flavons are given in Table 1. The superpotential for the lepton sector is built from the combinations X_i = ψψη, ψηη, ΔΔξ, Δϕξ, with the dots denoting higher-order contributions. The VEV alignment of the flavons leads to a heavy Majorana mass matrix M_R parameterized by B = 2|x_d|υ_ϕ, C = 2|x_t|υ_Δ, and r = C/B, which are real and positive. The phases α_1 and α_2 are the arguments of x_d and x_t, and φ ≡ α_2 − α_1 is the only physical phase remaining in M_R. Notice that M_R can be exactly diagonalized by the TBM matrix (2.5). Integrating out the heavy degrees of freedom, we get the effective light neutrino mass matrix, which is given by the seesaw relation m_ν = −(m_ν^d)^T M_R^{-1} m_ν^d [37-39] and diagonalized by the TBM matrix (2.6). In order to find the lepton mixing matrix we also need to diagonalize the charged-lepton mass matrix, for which U_l is the unit matrix. Therefore we get (2.8), where β_1 = γ_1/2 and β_2 = (γ_1 − γ_2)/2 are the Majorana CP-violating phases. The phase factored out to the left has no physical meaning, since it can be eliminated by a redefinition of the charged-lepton fields. The light neutrino mass eigenvalues are simply the inverses of the heavy neutrino ones, apart from a minus sign and the global factor from m_ν^d, as can be seen in (2.6). In general there are nine physical parameters, consisting of the three light neutrino masses, three mixing angles, and three CP-violating phases. The mixing angles are entirely fixed by the G_f symmetry group, predicting TBM and in turn no Dirac CP-violating phase. The remaining five physical parameters, β_1, β_2, m_1, m_2, and m_3, are determined by the five real parameters B, C, υ_u, x, and φ.
The light neutrino mass spectrum can have either normal or inverted hierarchy depending on the sign of cos φ. If cos φ < 0, one has the normal hierarchy (NH), whereas if cos φ > 0, one has the inverted hierarchy (IH). In order to see how this correlation in the allowed parameter space is constrained, we consider the experimental data at the 1σ level [1, 2] given in (2.9). Hereafter, we always use the experimental data at 1σ for our numerical calculations of low-energy observables. The correlations between r and cos φ for the NH spectrum (red/light plot) and the IH one (blue/dark plot) are presented in Figure 1. Because there is no Dirac CP-violating phase, as mentioned, the only contribution from the Majorana phases to the 0ν2β decay comes from β_1. The effective neutrino mass governing the 0ν2β decay follows from this structure. In a basis where the charged current is flavor diagonal and the heavy neutrino mass matrix M_R is diagonal and real, the Dirac mass matrix m_ν^d gets modified, with υ_u = υ sin β, υ = 176 GeV, and the coupling Y_ν of N_i with the leptons and the scalar is given by (2.13). Concerning CP violation, we notice that the CP phase φ originating from m_ν^d obviously takes part in the low-energy CP violation through the Majorana phases β_1 and β_2. On the other hand, leptogenesis is associated with both the Yukawa coupling Y_ν and its combination H = Y_ν Y_ν† (2.14). This directly indicates that all off-diagonal H_ij vanish, so the CP asymmetry could not be generated and neither could leptogenesis. For leptogenesis to be viable, the off-diagonal H_ij have to be generated.
Ding Model The Ding model, proposed in [36], possesses the flavor symmetry group G_f = S4 × Z3 × Z4, where the three factors play different roles. The S4 controls the mixing angles, the Z3 guarantees the misalignment in flavor space between the neutrino and charged-lepton eigenstates, and the Z4 is crucial for eliminating unwanted couplings and reproducing the observed mass hierarchies. In this framework the mass hierarchies are controlled by spontaneous breaking of the flavor symmetry instead of the Froggatt-Nielsen mechanism [49]. The matter fields of the lepton sector and the flavons are assigned under G_f as in Table 2. The superpotential for the lepton sector reads similarly, with the dots denoting higher-order contributions. The VEV alignment of the flavons is assumed accordingly. The charged-lepton mass matrix is then obtained, with all of its components assumed to be real. The neutrino sector gives rise to Dirac and RH Majorana mass matrices in which the quantity M is also taken to be real and positive. The phase φ ≡ α_2 − α_1, where α_1 and α_2 denote the arguments of y_ν1 and y_ν2, respectively, is the only physical phase surviving, because the global phase α_1 can be rotated away. The real and positive components a and b are defined from these couplings. After seesawing, the effective light neutrino mass matrix is obtained; it can be diagonalized by the TBM matrix, with m_0 = υ_u² a²/M and r = b/a. The lepton mixing matrix is given accordingly (2.23). It is clear that the phase factored out to the left has no physical meaning. Moreover, the mixing angles are entirely fixed by the G_f symmetry, predicting TBM and in turn no Dirac CP-violating phase. There remain only five physical quantities, β_1, β_2, m_1, m_2, and m_3, completely determined by the five parameters M, υ_u, a, b, and φ.
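Before turning to the detailed predictions of each model, the dependence of the 0ν2β observable on a single Majorana phase can be illustrated with a minimal numerical sketch. It assumes only the exact TBM limit (U_e3 = 0) and a standard phase convention in which m_ee = (2 m_1 + m_2 e^{2iβ_1})/3; the mass-squared splittings and the lightest mass below are placeholder inputs, not the models' predictions in terms of r and φ.

```python
# Minimal numerical sketch of |<m_ee>| under exact tri-bimaximal mixing (U_e3 = 0),
# where only the Majorana phase beta_1 enters: m_ee = (2*m1 + m2*exp(2i*beta_1))/3.
# Mass-squared differences and the lightest mass are illustrative inputs,
# not the fit values used in the paper.
import numpy as np

dm21_sq = 7.6e-5   # eV^2, solar splitting (assumed input)
dm31_sq = 2.4e-3   # eV^2, atmospheric splitting (assumed input, unused for m_ee here)

def m_ee_normal_hierarchy(m1, beta1):
    """|<m_ee>| for a normal-hierarchy spectrum with lightest mass m1 (eV)."""
    m2 = np.sqrt(m1**2 + dm21_sq)
    # m3 does not contribute because U_e3 = 0 in the TBM limit.
    return np.abs((2.0 * m1 + m2 * np.exp(2j * beta1)) / 3.0)

# Scan the Majorana phase for a fixed lightest mass.
m1 = 0.005  # eV, illustrative
for beta1 in np.linspace(0.0, np.pi, 5):
    mee = m_ee_normal_hierarchy(m1, beta1)
    print(f"beta_1 = {beta1:.2f} rad  ->  |<m_ee>| = {mee * 1e3:.3f} meV")
```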
There are two possible orderings of the effective light neutrino masses depending on the sign of cos φ: the NH corresponding to cos φ > 0 and the IH to cos φ < 0, in contrast with the previous model. The relation between r and cos φ for the NH spectrum (red plot) and the IH one (blue plot) is shown in Figure 4. Similarly to the previous model, the contribution to the 0ν2β decay comes entirely from the Majorana phase β_1. The relevant effective neutrino mass is given in terms of m_0 = a² υ_u²/M. The behavior of |⟨m_ee⟩| as a function of φ is plotted in Figure 5, where the horizontal line and the dashed line are the current lower bound and the future one, as mentioned. Moreover, the relation between φ and β_1 can be obtained from (2.23). In a basis where the charged current is flavor diagonal, we diagonalize M_R in order to go into the physical mass basis of the RH neutrinos (2.27). In this basis, the Dirac mass matrix m_ν^d takes the corresponding form, where υ_u = υ sin β, υ = 176 GeV, and the coupling Y_ν of N_i with the leptons and the scalar is given by (2.29). Again, the CP phase φ, which comes from m_ν^d, takes part in the low-energy CP violation through the Majorana phases β_1 and β_2. On the other hand, leptogenesis is associated with both the Yukawa coupling Y_ν and its combination H = Y_ν Y_ν† (2.30), which directly indicates that all Im H_ij vanish and in turn unflavored leptogenesis could not take place. However, flavored leptogenesis can work if the degeneracy of the heavy Majorana neutrino masses is lifted.
Relevant RG Equations In both models, the CP asymmetries due to the decay of the heavy RH Majorana neutrinos vanish at leading order; therefore leptogenesis could not take place. The radiative effects due to RG running from a high scale to a low scale can naturally lead not only to a splitting of the degenerate heavy Majorana masses (for the Ding model) but also to an enhancement of the vanishing off-diagonal terms of H = Y_ν Y_ν† (for the BMM model), which are the necessary ingredients for a successful leptogenesis mechanism. The radiative behavior of the heavy RH Majorana mass matrix M_R is dictated by the RG equation (3.1) [56-60], where t = (1/16π²) ln(M/Λ) and M is an arbitrary renormalization scale. The cutoff scale Λ can be regarded as the G_f breaking scale and is assumed to be of the order of the GUT scale, Λ ∼ 10¹⁶ GeV. The RG equation for the Dirac neutrino Yukawa coupling can be written in terms of Y_u and Y_l, the Yukawa couplings of the up-type quarks and the charged leptons, and g_2 and g_1, the SU(2)_L and U(1)_Y gauge coupling constants, respectively.
Let us first reformulate (3.1) in the basis where M_R is diagonal. Since M_R is symmetric, it can be diagonalized by a unitary matrix V_R as mentioned (3.3). As the structure of M_R changes with the evolution of the scale, V_R depends on the scale too. The RG evolution of V_R(t) can be written in terms of a matrix A that is anti-Hermitian, A† = −A, due to the unitarity of V_R. Differentiating (3.3) we obtain (3.5). Absorbing the unitary factor into the Dirac Yukawa coupling, Y_ν ≡ V_R^T Y_ν, the real diagonal part of (3.5) becomes (3.6). The RG equation for Y_ν in the basis of diagonal M_R follows, and finally we obtain the RG equation for H, responsible for the leptogenesis (3.8). The heavy Majorana mass splitting generated through the relevant RG evolution is thus given by (3.9), where H is defined in (2.30). Neglecting the RG evolution of Y_ν and of its combination H = Y_ν Y_ν†, all the necessary components for the flavored leptogenesis in the Ding model are available. The flavored CP asymmetries ε_i^α can be obtained from (2.29), (2.30), (3.9), and (4.3). Notice, however, that in the BMM model a nonvanishing CP asymmetry requires Im[H_ij (Y_ν)_iα (Y_ν)*_jα] ≠ 0 with Y_ν defined in (2.13). Therefore, to have a viable radiative leptogenesis we need to induce nonvanishing H_ij (i ≠ j) at the leptogenesis scale. Indeed, this is possible since the RG effects due to the τ-Yukawa coupling contribution induce such terms at leading order (3.10) [56-60].
Radiatively Induced Flavored Leptogenesis As already noticed, leptogenesis cannot be realized in the S4 models at leading order, so this section is devoted to the study of flavored leptogenesis with the effects of RG evolution. The lepton asymmetries, which are produced by out-of-equilibrium decays of heavy RH neutrinos in the early Universe at temperatures above T ∼ (1 + tan²β) × 10¹² GeV, do not distinguish among lepton flavors; this is called conventional or unflavored leptogenesis. However, if the scale of the heavy RH neutrino masses is about M ≤ (1 + tan²β) × 10¹² GeV, we need to take lepton flavor effects into account; this is called flavored leptogenesis. In this case, the CP asymmetry generated by the decay of the ith heavy RH neutrino, far from the almost degenerate limit, is given by (4.1) [61-71], where Y_ν and H = Y_ν Y_ν† are in the basis where M_R is real and diagonal. Here the corresponding loop function enters; it depends strongly on the hierarchy of the light neutrino masses.
For an almost degenerate heavy Majorana mass spectrum, the leptogenesis can be naturally implemented through resonant leptogenesis [72, 73]. In this case, the CP asymmetry is generated by the ith heavy RH neutrino N_i when decaying into a lepton flavor α = e, μ, τ and is dominated by the one-loop self-energy contributions [74], where Γ_j = H_jj M_j/(8π) is the decay width of the jth RH neutrino and δ_ij^N is the mass splitting parameter. As noted in the previous section, by properly taking the RG effects into account, the nonzero flavored CP asymmetries ε_i^α given above can be obtained. Once the initial values of ε_i^α are fixed, the final result for the BAU, η_B, can be obtained by solving a set of flavor-dependent Boltzmann equations including the decay, inverse decay, and scattering processes as well as the nonperturbative sphaleron interaction. In order to estimate the washout effects, we introduce parameters K_i^α, which are the washout factors due to the inverse decay of the Majorana neutrino N_i into the lepton flavor α. The explicit form of K_i^α is given in (4.5), where Γ_i^α is the partial decay width of N_i into the lepton flavors and Higgs scalars, H(M_i) = (4π³ g*/45)^{1/2} M_i²/M_Pl, with the Planck mass M_Pl = 1.22 × 10¹⁹ GeV and the effective number of degrees of freedom g* = 228.75, is the Hubble parameter at temperature T = M_i, and m* ≃ 10⁻³ eV is the equilibrium neutrino mass. From (2.13), (2.29), and (4.5) we can obtain the washout parameters corresponding to the two models. Each lepton asymmetry for a single flavor ε_i^α is weighted differently by the corresponding washout parameter K_i^α, appearing with a different weight in the final formula for the baryon asymmetry (4.8) [75, 76].
Bazzocchi-Merlo-Morisi Model In this model, the RH neutrino masses are strongly hierarchical. For the NH case, the lightest RH neutrino mass is M_3, and the leptogenesis is then governed by the decay of the M_3 neutrino. The explicit form of the flavored CP asymmetries ε_3^α follows from (2.13), (2.14), (3.10), and (4.1). The corresponding washout parameters are given in (4.10). For the IH case, the lightest RH neutrino is M_1, and the leptogenesis is then governed by the decay of the M_1 neutrino. The flavored CP asymmetries ε_1^α are obtained with the corresponding washout parameters (4.12). Applying (4.6), (4.7), and (4.8), the BAU for the two cases is then obtained. Notice also that in the NH case the leptogenesis has no contribution from the electron flavor decay channel, which makes the scale of the heavy RH neutrino mass required for successful leptogenesis higher than that of the IH case. The prediction for η_B as a function of |⟨m_ee⟩| is shown in Figure 7, where we have used B = 10¹³ GeV for the NH case, B = 10¹² GeV for the IH case, and tan β = 30 as inputs. The horizontal solid and dashed lines correspond to the central value of the BAU experimental data, η_B^CMB = 6.1 × 10⁻¹⁰ [77-79], and the phenomenologically allowed region 2 × 10⁻¹⁰ ≤ η_B ≤ 10⁻⁹, respectively. As shown in Figure 7, the current observation of η_B^CMB can narrowly constrain the value of |⟨m_ee⟩| for the NH and IH spectra, respectively. Combining the results presented in Figures 2 and 3 with those from the leptogenesis, we can pin down the Majorana CP phase β_1 via the parameter φ.
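A minimal sketch of the washout estimate described above is given below, comparing the tree-level decay width Γ_i = H_ii M_i/(8π) with the radiation-era Hubble rate H(M_i) = (4π³ g*/45)^{1/2} M_i²/M_Pl quoted in the text. The Yukawa entries and the heavy-neutrino mass are placeholder numbers, not the Y_ν of either model.

```python
# Rough sketch of a leptogenesis washout estimate: K_i = Gamma_i / H(M_i),
# with Gamma_i = (Y Y^dagger)_ii * M_i / (8 pi) and the radiation-era Hubble rate.
# The Yukawa entries and the heavy mass below are placeholders, not model values.
import numpy as np

M_PLANCK = 1.22e19          # GeV
G_STAR = 228.75             # effective relativistic degrees of freedom (MSSM-like)

def hubble_rate(temperature_gev):
    """H(T) = sqrt(4 pi^3 g*/45) T^2 / M_Pl, in GeV."""
    return np.sqrt(4.0 * np.pi**3 * G_STAR / 45.0) * temperature_gev**2 / M_PLANCK

def washout_parameter(yukawa_row, heavy_mass_gev):
    """K_i = Gamma_i / H(M_i) for one right-handed neutrino N_i."""
    h_ii = np.sum(np.abs(yukawa_row)**2)          # (Y Y^dagger)_ii
    gamma_i = h_ii * heavy_mass_gev / (8.0 * np.pi)
    return gamma_i / hubble_rate(heavy_mass_gev)

y_row = np.array([1e-3, 2e-3, 5e-4])   # placeholder couplings of N_i to (e, mu, tau)
M_i = 1e12                             # GeV, placeholder heavy-neutrino mass
print(f"K_i ~ {washout_parameter(y_row, M_i):.2e}  (K >> 1 means strong washout)")
```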
Ding Model In this model, all the heavy RH neutrinos are exactly degenerate. By considering the RG effects, their masses acquire a tiny splitting (they become almost degenerate), which leads to a resonant leptogenesis with contributions from all of these heavy RH neutrinos. However, if we neglect the RG effects on the H matrix, the contribution of N_3 to the lepton asymmetries ε_i^α is negligible due to H_13, and the dominant flavored asymmetry ε_2^τ is proportional to a² r sin φ (3 − 2r cos φ + 3r²) t/(32π) (4.13). Here the mass splitting parameter δ_12^N can be calculated from (2.30) and (3.9), as in (4.15). With the help of (4.6), the BAU is then obtained. The prediction for η_B as a function of |⟨m_ee⟩| is shown in Figure 8, where we have used M = 10³ GeV and tan β = 1. The horizontal solid and dashed lines correspond to the central value of the BAU experimental result, η_B^CMB = 6.1 × 10⁻¹⁰ [77-79], and the phenomenologically allowed region 2 × 10⁻¹⁰ ≤ η_B ≤ 10⁻⁹, respectively. As seen in Figure 8, the current observation of η_B^CMB can narrowly constrain the value of |⟨m_ee⟩| for the NH and IH spectra, respectively. Again, combining the results in Figures 5 and 6 with those from the leptogenesis, we can pin down the Majorana CP phase β_1 via the parameter φ.
Conclusions We have studied the S4 models in the context of a supersymmetric seesaw model which naturally leads to the TBM form for the lepton mixing matrix. In the BMM model, the combination Y_ν Y_ν† is proportional to unity, whereas in the Ding model the heavy RH Majorana masses are exactly degenerate. This would forbid the desired leptogenesis from occurring in either model. Therefore, for a viable leptogenesis the off-diagonal terms of Y_ν Y_ν† in the BMM model have to be generated, while in the Ding model the degeneracy of the heavy RH Majorana masses has to be lifted. We have shown that both can be achieved by the RG effects from the high-energy scale to the low-energy scale, which result in successful leptogenesis.
Figure 1: Allowed parameter region under the 1σ experimental constraints (2.9) for the ratio r = C/B as a function of cos φ. The blue (dark) and red (light) curves correspond to the IH and NH spectra.
Figure 2: Prediction of the effective neutrino mass |⟨m_ee⟩| responsible for 0ν2β decay as a function of φ under the 1σ experimental constraints (2.9). The blue (dark) and red (light) curves correspond to the IH and NH spectra.
Figure 3: The Majorana CP phase β_1 as a function of φ, plotted under the 1σ experimental constraints (2.9). The blue (dark) and red (light) curves correspond to the IH and NH spectra.
Figure 4: Allowed parameter region under the 1σ experimental constraints (2.9) for the ratio r = b/a as a function of cos φ. The blue (dark) and red (light) curves correspond to the IH and NH ordering.
Figure 5: Prediction of the effective mass |⟨m_ee⟩| responsible for 0ν2β as a function of φ under the 1σ experimental constraints (2.9). The blue (dark) and red (light) curves correspond to the IH and NH ordering.
Figure 6: Relation between the phase β_1 and φ as given by the 1σ experimental constraints (2.9). The blue (dark) and red (light) curves correspond to the IH and NH ordering.
Figure 7: Prediction of η_B as a function of |⟨m_ee⟩| for the NH case (a) and the IH case (b). The horizontal solid and dashed lines correspond to the experimental central value and the phenomenologically allowed region.
Table 1: Transformation properties of the lepton sector and all the flavons of the BMM model, where ω = e^{i2π/3}.
Table 2: Representations of the matter fields of the lepton sector and the flavons under S4 × Z3 × Z4.
The fully flavored treatment applies provided that the scale of the heavy RH neutrino masses is about M ≤ (1 + tan²β) × 10⁹ GeV, where the μ and τ Yukawa couplings are in equilibrium and all the flavors are to be treated separately, and for (1 + tan²β) × 10⁹ GeV ≤ M_i ≤ (1 + tan²β) × 10¹² GeV only the τ Yukawa coupling is in equilibrium and treated separately, while the e and μ flavors are indistinguishable. Actually, this is also correct if we take into account the RG effects on the H matrix, since the radiative generation of H_13, H_31, H_23, and H_32 is very small. Combined with (2.29), (2.30), (3.9), and (4.3), the flavor-dependent CP asymmetries ε_i^α are obtained.
2018-12-21T15:00:40.660Z
2012-02-27T00:00:00.000
{ "year": 2012, "sha1": "7fa30905c9357ff63b9f23698cb1779ec5e3df86", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/ahep/2012/254093.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "7fa30905c9357ff63b9f23698cb1779ec5e3df86", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
54002835
pes2o/s2orc
v3-fos-license
Innovation, competition and technical efficiency Contradictory empirical and theoretical evidence on the relationship between innovation and competition has been reconciled in a model that yields an inverted U-shaped curve. I test whether the predictions of the model are supported by the data with an unbalanced panel of firms for 1990–2003 in a high-productivity-growth, high-tech industry, Finnish ICT manufacturing. In particular, I investigate how well alternative, yet rigorous, measures of innovation and the technology gap, such as R&D intensity, R&D elasticity, technical change, technical efficiency and total factor productivity, fare with respect to competition measured by the Lerner index. The results prove sensitive to the choice of variable. Overall, the model is not supported by the empirical evidence of the industry. Subjects: Industrial Economics; Microeconomics; Production Research & Economics
ABOUT THE AUTHOR Elina Berghäll is a senior research fellow with the VATT Institute for Economic Research (VATT) in Helsinki. The article is a part of her research on the Finnish ICT industry. Her research fields cover a range of topics, from innovation policy, R&D and physical investment, ICT, the technology frontier, productivity, technical efficiency, the environment and sustainable development, economic growth, infrastructures, tax competition, regional policy and urban economics to international capital flows. In addition to VATT, she has worked among others for the OECD, the Association of Finnish Local and Regional Authorities and the UNU/WIDER. Her research results have been presented and published in various international fora and publications. She has taken part in a number of scientific and policy-oriented working groups and acted as a referee for numerous international journals.
PUBLIC INTEREST STATEMENT The assumption that competition reduces slack while innovation depends on it is fairly common. By definition, efficiency improvements narrow the gap with the technology frontier. On the other hand, the technology frontier shifts with innovation, which can generate at least temporary monopoly power. This somewhat contradictory relationship between innovation and competition has been reconciled by Aghion, Bloom, Blundell, Griffith, and Howitt (2005) into a model that yields an inverted U-shaped curve. I test the predictions of the model with a panel of Finnish ICT firms for 1990-2003, when radical innovation in mobile phones accelerated technological change and productivity growth. Since the model is not supported by the empirical evidence of the industry, I conclude that the model predictions do not represent stylized facts of the relationship between competition and innovation.
Introduction By definition, fully efficient firms form the global technology frontier. Since the frontier represents the state of the art in technology, it responds to innovation only. Meanwhile, neoclassical theory postulates that competition reduces productive inefficiency. Yet the Schumpeterian paradigm (Schumpeter, 1934) recognized the monopoly rent prospects of the innovator as the central innovation incentive. Hence, firms at the frontier are frequently sheltered from intense competition. The theoretical model of Aghion, Bloom, Blundell, Griffith, and Howitt (2005), postulating an inverted U-curve-shaped relationship between innovation and competition, has proposed a way to reconcile controversial empirical findings between innovation and growth and the contradictions of industrial organization (e.g. Dixit & Stiglitz, 1977) and endogenous growth models (e.g.
Aghion & Howitt, 1992; Grossman & Helpman, 1991; Romer, 1990). The purpose of this paper is to explore the robustness of the predictions of the model against empirical evidence from an innovative industry subject to intense competition from abroad. According to the model, new technological breakthroughs can establish significant leads in competition and increasing returns for a while, until laggards copy and catch up with the innovators. Innovation increases at low levels of competition, reaches an optimum and thereafter declines as competition intensifies and begins to discourage innovation as monopoly rents from innovation decline. The inverted U-shaped interrelationship emerges from the escape-competition effect at low levels of competition, which turns into a Schumpeterian effect as higher levels of competition begin to discourage R&D investment. As empirical predictions of the inverted U-model, Aghion and Griffith (2005, p. 57) list that (1) additional product market competition (PMC) in frontier industries increases innovation; (2) vice versa, additional PMC in lagging industries reduces innovation or increases it only weakly; (3) an increase in PMC in the economy on average reduces the share of frontier industries and raises the average technological gap. "The average fraction of frontier sectors decreases, namely, the average technological gap between incumbent firms and the frontier in their respective sectors increases, when competition increases"; (4) as a result, the general effect of additional PMC on the economy follows an inverted U-shape. Additional PMC encourages innovation at low levels of competition, but discourages it at high levels of PMC. In efficiency terms, this means that additional PMC in efficient industries increases innovation, and vice versa additional PMC in inefficient industries reduces or increases innovation less, and that additional PMC in the economy on average reduces the share of frontier firms or raises average inefficiency. In other words, there is a positive relationship between inefficiency and (additional) competition at the industry or economy-wide level. The more efficient the economy on average, the more inefficiency increases as a result of competition. Aghion and Griffith (2005) also refer to productivity in the context of the innovation-competition dichotomy. They (p. 49) state that while competition appears to be effective at improving productivity levels in satisficing firms (those plagued with agency and managerial slack problems), this does not automatically translate into higher rates of productivity growth in such firms relative to more profit-maximizing ones. In other words, competition can be effective in raising productivity in inefficient firms relative to more efficient firms, but not its growth rate. Innovation has also been proxied by the level and growth rate of TFP by Nickell (1996), who found evidence that more intense PMC is reflected in more rapid TFP growth. In practice, while innovation may raise productivity, high innovation input or output does not automatically translate into TFP growth. In addition, the use of productivity for innovation confuses the relationship of innovation with the frontier, also referred to in the Aghion et al. model. In economic theory, inefficiency is a relative term that measures the gap between the production possibilities frontier and the realized output. Productivity improvements, in contrast, reduce inefficiency in lagging firms, but increase it if the improver is a frontier firm.
If firms are inefficient, competition is more likely to raise efficiency. Moreover, although all may be pushed to seek ways to improve their efficiency, the gap between successful firms and the rest may widen and average inefficiency increases. In addition to productivity, applied patents as a measure of innovation and the Lerner index as a measure of competition. Subsequent literature, surveyed in the next section, has applied further measures of innovation and technology gaps. Tingvall and Poldahl (2006) test the predictions of the model on firm-level data, and find that the inverted U-shaped relation is supported by a Herfindahl index measure of competition, but not by a price-cost margin. I contribute to the literature by testing whether results correspond to the theory if more rigorous determinants of innovation and technology gaps are applied, such as R&D intensity and technical efficiency. In particular, I apply frontier methodology to estimate the technology gap, and R&D intensity, R&D elasticity and technical change as proxies for innovation. The data-set is from an innovative and competitive industry, the Finnish ICT manufacturing industry during a period of rapid technological change, 1990-2003. At the time, the industry attracted technology adopters from abroad in the form of FDI. Hence, one can assume it to have been close to the frontier, while being open to intense competition from abroad. According to the predictions of the theory, if the industry was indeed at the frontier at the time, average technical efficiency in Finnish ICT manufacturing should be high, competition neck-andneck on the upward sloping part of the inverted U-curve. That is, additional competition should increase innovation. I seek answers to questions, such as is average technical efficiency high in the industry? Has competition increased technical inefficiency or innovation? Does technical inefficiency or total factor productivity (TFP) provide a good measure of innovation? The results prove sensitive to the choice of variable. Overall, the model is not supported by the empirical evidence of the industry. The next section reviews empirical applications of the theory. I present the data and variables in the Section 3, and methodology in the Section 4. The Section 5 summarizes the results and their implications are briefly discussed in Section 6. Related theories and empirical findings The literature on firm performance, competition, sources of innovation and industrial organization extends beyond the Schumpeterian paradigm (Schumpeter, 1934), which recognized monopoly rent prospects of the innovator as the driving force of innovation. Arrow (1962) identified the profit appropriation opportunity of the new comer to arise from the public good properties of knowledge (spillovers). Bain (1951) found that rates of return of firms in relatively more concentrated industries were significantly higher than those in un-concentrated ones, interpreting it as evidence in favour of the now so-called structure conduct performance paradigm in industrial organization theory. Demsetz (1973) and Demsetz (1974) challenged this view by arguing that abnormal profits reflect higher efficiency levels rather than monopoly profits, and that researchers need to distinguish between the impacts of efficiency on performance from those of market power. To test the cause, if collusion is present, then smaller firms should earn similar (if not higher) rates of return than large firms. 
If, in contrast, efficiency is driving the rates of return, then a positive correlation with the industry rate of return should only emerge for large firms. Similarly, Carlsson (1972) found productive efficiency to increase with producer concentration, and explained it by the small size of the Swedish market relative to economies of scale in manufacturing. Caves (2007) has argued that efficiency rents and monopolistic profits (due to the dominance of one large buyer firm over many suppliers) may also coexist. The resource-based view (RBV) of the firm holds that if short-run competitive advantages are heterogeneous in nature and not perfectly mobile, they can be transformed into a sustained competitive advantage generating abnormal returns (Peteraf, 1993, p. 180). Most traditional models of PMC and innovation predicted a detrimental impact of competition on innovation and growth. These include, e.g., the Hotelling linear model and the monopolistic competition model by Dixit and Stiglitz (1977). Dasgupta and Stiglitz (1980) propose that the anticipation of future competition deters entry and hence competition today. In the mid-1990s, empirical findings began to contradict these theories, but the models applied so far suffered from linearity. The only exception was Scherer (1965), who showed how patenting activity increases with firm size, but with diminishing patenting relative to size. He questioned the role of large monopolistic conglomerates in technological progress, i.e. the Schumpeterian (Mark II) model of competition, innovation and growth. His view received support from subsequent empirical research. Empirical findings of a positive relationship between PMC and productivity growth have generated new models and theories of gradual technological progress that evolves step by step. That is, first, lagging firms need to catch up with market/technology frontier leaders by means of imitation, before they attempt to escape competition by means of innovation (see e.g. Aghion, Harris, & Vickers, 1997; or Aghion, Harris, Howitt, & Vickers, 2001). Also in line with the theory, Aghion et al. (2005) evaluate the predictions listed above with patent data on UK firms, and find that the inverted U-curve is steeper in more neck-and-neck (efficient) industries. For example, the curve is steeper in the food and beverages sector than in electronics and electrical products. Other positive evidence for the inverted U-curve has been found by Kilponen and Santavirta (2007), who found R&D subsidies to have adverse effects on competition only in extreme cases, but generally positive influences on innovation. In contrast, Gorodnichenko, Svejnar, and Terrell (2010) claim to find no evidence of an inverted U-curve for emerging-market firms, although what they find is in accordance with the downward-sloping part of the inverted U-curve. That is, competition has a negative effect on innovation, especially for firms further from the frontier, but they do not rule out an inverted U-relationship in more pro-business environments. Bos, Kolari, and Van Lamoen (2013) proxy innovation with input-based (cost minimization) technical efficiency, and estimate the presence of an inverted U-curve between competition and technology gaps in the US banking industry. They find consolidation to have reduced innovation. Similarly, Badunenko, Fritsch, and Stephan (2006) consider efficiency an overall measure of innovativeness, resulting from high productivity in the production and sale of highly priced innovative goods and services.
It is unexpected that the technology gap has been used to proxy innovation, since it is a rather common presumption that efficiency and innovativeness are contradictory, because innovation requires some degree of slack. In particular, industrial organization theories typically expect innovation to decline with efficiency-enhancing competition. Well before them, research by Hanusch and Hierl (1992) suggests that the relationship between profit margins and technical efficiency is not linear. Hanusch and Hierl analysed the relationship between profitability and technical efficiency in the German electronics and machinery industries and found it to be convex, i.e. enterprises enjoy increasing returns to their attempts to raise efficiency. They concluded suggestively that leading enterprises may be subject to strong efficiency pressures to maintain profitability relative to the competition. Their data on R&D expenditures were sufficient only for the machinery industry. Since deviations from the production frontier were small, they concluded that the sample firms' best strategy is innovation as opposed to imitation, in order to ensure technological leadership and above-average profitability.
The data and variables Empirical evidence is sought from a fairly homogenous, innovative high-tech industry in a small country that is subject to intense competition from abroad. Asset-seeking FDI into the industry during the sample period suggests it to have been close to a technology frontier, characterized by intense and rapidly evolving innovation and competition (Berghäll, 2015). It therefore offers potential to test the predictions of the model in a concise setting with few disrupting unknowns. While external validity requires more extensive evidence on other industries and countries, the empirical research needs to be carried out separately by industry to avoid unrealistic production function assumptions with respect to the underlying technologies. The present exercise therefore contributes to the literature with an example of an innovative industry, which may have counterparts in other countries and high-tech industries.
The ICT industry data The variables are described in Table 1. In the production function, real value added measures output (Y), the dependent variable. There are three main independent variables: non-R&D labour (L), the physical capital stock (K) and the R&D stock (R). Labour input is proxied by total firm personnel due to data shortages on hours worked. As R&D was included as an input, R&D employees were deducted from total labour input to avoid double-counting (see e.g. Hall & Mairesse, 1995). The LDPM database provides proxies for physical capital, built from machine and equipment investments using the perpetual inventory method with a 10% depreciation rate, i.e. K_t = (1 − δ)K_{t−1} + I_t, where δ is the depreciation rate. Similarly, R&D capital stocks were built from total intramural R&D investments, available in the R&D panel, based on the perpetual inventory method. The initial R&D stock was based on data from 1985 to 1989 when available, and estimated with a 30% depreciation rate, in line with rapid technological development (confirmed by the results) and a prior finding for electrical products by Bernstein and Mamuneas (2006). 3 Firm-level data on capital and labour were obtained by summing up plant levels in the LDPM database by firm. Analysis at the firm level avoids the questionable division of R&D capital plant-wise, as well as the comparison of units within the same firm as if they competed.
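A minimal sketch of the perpetual inventory calculation used above, K_t = (1 − δ)K_{t−1} + I_t, is shown below; the deflated investment series and initial stocks are illustrative placeholders rather than LDPM data.

```python
# Perpetual inventory method: K_t = (1 - delta) * K_{t-1} + I_t.
# Illustrative deflated investment series; delta = 0.10 for machinery and
# equipment, 0.30 for R&D capital, as stated in the text.
def perpetual_inventory(investments, delta, initial_stock=0.0):
    """Return the capital stock series implied by an investment series."""
    stocks = []
    k = initial_stock
    for inv in investments:
        k = (1.0 - delta) * k + inv
        stocks.append(k)
    return stocks

machine_investment = [120.0, 135.0, 90.0, 150.0, 160.0]   # placeholder, deflated
rd_investment = [40.0, 55.0, 60.0, 58.0, 70.0]            # placeholder, deflated

print(perpetual_inventory(machine_investment, delta=0.10, initial_stock=500.0))
print(perpetual_inventory(rd_investment, delta=0.30, initial_stock=100.0))
```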
Due to data shortages, as well as for homogeneity of the sample, the analysis concerns only innovative firms with at least 20 employees. Large firms dominate the industry in terms of sales and R&D. Though small and micro firms are large in number, 89% in 1993 and 86% in 2004, their share of total employees was only 12 and 7%, respectively, and their shares of total turnover (6 and 2%) and total wage costs (9 and 5%) were even smaller, for 1993 and 2003, respectively. Their exclusion therefore cuts out only about 10% of total economic activity in the industry. In 2003, the true number of firms operating in the industry rose to almost 1,700, or 233 if only firms with over 20 employees are considered. Larger firms cover over 90% of the private R&D carried out in the industry, which in turn represents over half of total corporate R&D in Finland. The exclusion of smaller firms does not confuse the analysis, because the estimation results showed most results to be (strictly) increasing in size, and consequently the potential direction of micro firms' impact is rather obvious. To avoid the selectivity bias caused by the exclusion of loss-making and indebted firms when logarithms are taken, Lerner values and debt ratios were adjusted by adding the maximum loss or maximum debt plus one to all observations, as is recommended in the literature. About 3-4% of observations were removed due to negative or missing values when logarithms were taken, extreme annual variation or impossible value-added figures. Due to data secrecy requirements, there was no basis for the exclusion of outliers other than their extremeness in value. In frontier analysis, such a criterion is even more suspect than usual, as it could lead to the removal of frontier firms, defeating the purpose of the exercise. Nominal variables were deflated with sectoral producer price indices at the 2- and 3-digit levels (1995 = 100), with the exception of R&D prior to 1995, for which the general earnings level index was used (due to the unavailability of alternatives). Panel firms can further be assumed to be subject to similar (minimal) regulation and to demonstrate similar, i.e. profit- or revenue-maximizing, behaviour, allowing me to apply an output distance function, i.e. an output-oriented efficiency measure. In addition, firms can be assumed to fit the same functional form of the production function, so that their relative efficiencies are comparable.
Innovation variables Aghion et al. use patents, i.e. an innovation output measure, for innovation. Innovation output can also be approximated by technical change and R&D elasticity. Innovation, however, may refer to innovation inputs, for which R&D provides more accurate estimates. Innovation input is measured by R&D intensity (R&D capital/number of personnel), which does not vary by the model applied, though it is not constant over time and firm size (Figure 1). In contrast, the innovation output measures (implemented innovative activity), R&D elasticity and technical change, were estimated. Various other firm characteristics are listed in Table 2, such as firm size (number of employees, six categories), firm age (four categories) and firm leverage (debt ratio). Firm leverage has been argued to be positively related to innovation, as firms innovate to escape the risk of bankruptcy. Also, firm size and age are expected to have a significant impact on innovation, with the direction varying by technological regime.
The Schumpeterian hypothesis deems firm size to be conducive to R&D, while the so-called Schumpeterian Mark I regime characterizes a situation in which technological progress emerges from new technology-based firms through a process of creative destruction (see e.g. Nelson and Winter, 1982). Although most formal R&D is concentrated in large corporations, Acs and Audretsch (1991) argue that small firms account for a disproportionate share of new product innovation, given their low formal R&D expenditures. Audretsch (1995) confirms that the empirical evidence on their role as engines of innovative activity in certain industries is robust, and yet the link between R&D and innovation disappears as the unit of observation is reduced to the firm level, particularly with small firms. Small firms are typically new, but since it proved impossible to establish an exact age for each firm, firms were merely grouped into four age categories.
Competition variables The primary competition measure was specified as a firm-specific Lerner index based on firm operating profit divided by the value of gross output (turnover), i.e. the profit margin, price-cost margin or mark-up. Operating profit was derived as firm value added minus factor input costs, i.e. expenses including payroll taxes and social security payments incurred by the firm, as well as capital costs as indicated by financing expenses in firm profit and loss statements. The Lerner index is common in the literature due to its significant advantages in measurement (Aghion & Griffith, 2005, p. 22) and is in accordance with the microeconomic principle that high profit margins equal imperfect competition; increased competition is therefore expected to reduce profit margins. It overcomes the difficulties inherent in input and output price measurement and quality change, since the measure of competition is defined by the profits of firms, and since all firms operate in the same country, the general price level and foreign exchange conditions are the same for all firms. In contrast, domestic market shares and concentration measures, such as the Herfindahl index, are rather deceptive measures of competition when most of it originates from abroad. Moreover, according to survey results by Gilbert (2006), empirical research using market concentration to proxy competition has not reached definite conclusions on the relationship between market structure and R&D once industry characteristics, technological opportunities and appropriability are controlled for. In 1990-1993, the economy plunged into a deep recession. Yet profitability was rapidly regained in the industry (Figure 2) with radical innovation. Net entry into the industry was high until the mid-1990s, falling subsequently to exit levels by the end of the decade. Entry-based competition revived only after 2005. Thereafter, profitability gradually declined and competition intensified.
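The construction of the firm-specific Lerner index, and the adjustment applied before taking logarithms (adding the maximum loss plus one to all observations, as described in the sample section), can be sketched as follows; the firm figures and field layout are illustrative assumptions.

```python
# Firm-specific Lerner index: operating profit / gross output (turnover),
# where operating profit = value added - labour costs - financing expenses.
# Before taking logs, the maximum loss plus one is added to all observations,
# as described in the sample section. All figures are placeholders.
import math

firms = [
    # (value_added, labour_costs, financing_costs, turnover)
    (800.0, 500.0, 50.0, 2000.0),
    (300.0, 350.0, 30.0, 1200.0),   # loss-making firm
    (950.0, 600.0, 40.0, 2500.0),
]

lerner = [(va - lc - fc) / y for va, lc, fc, y in firms]
max_loss = -min(min(lerner), 0.0)   # size of the largest negative margin

# Shift so that every observation is strictly positive before taking logs.
log_lerner = [math.log(l + max_loss + 1.0) for l in lerner]

for raw, adj in zip(lerner, log_lerner):
    print(f"Lerner = {raw:+.3f}   log(adjusted) = {adj:.3f}")
```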
While average competition (1-Log Lerner) has intensified very gradually, competition actually declined for the largest firms as Figure 3 above shows over the sample period. On average, production growth (gy) correlates significantly (0.30) with the Lerner index, showing boom times to raise profitability. In contrast, recessions intensify competition, as one would expect. Low profitability is expected to signal intense competition. Another determinant of inefficiency related to global competition is foreign ownership. Data on foreign firms is available for 1993-2002, with an emphasis observable for 1997-2002, but since the entire industry is subject to global competition, foreign ownership is of little relevance. Related research has found inward FDI into the industry to have been most likely asset-seeking (see Berghäll, 2015). Determinants of the technology gap Technological gaps between leaders and followers are measured by Battese-Coelli (1995) technical inefficiencies following the inverted U-curve-shaped theoretical predictions of the relationship between innovation and competition. Thus, technical inefficiency also estimates innovation impacts. Technical efficiency results were compared with other reasonable indicators of innovation, such as technical change (implemented innovative activity), R&D intensity, an input measure and R&D elasticity, an innovation output measure. The analysis concerns only innovative firms. Since the industry is highly R&D intensive, the R&D requirement does not introduce a selectivity bias. It has the beneficial corollary that panel firms can be assumed to be subject to similar (minimal) regulation, demonstrate similar behaviour. In addition, I can assume the firms to fit into the same functional form of the production function for their relative efficiencies to be comparable. Methodology The inverted U-curve model does not argue causality. I am, therefore, only interested in correlations of competition and innovation in this context. Technology gaps, in contrast, are estimated with parametric and non-parametric methodologies. Otherwise estimation methods depend on the estimator. Technical change and R&D elasticity are estimated with maximum likelihood. The impact of competition on the technology gap is estimated with true fixed and Battese-Coelli efficiency. Firm-level estimates The key insight Farrell (1957) proposed was to extract information from extreme observations of the data to determine the best practice production frontier, rather than having to rely on some hypothetical production possibilities curve. A flexible translog functional form was assumed to approximate the production technology, following Heshmati, Kumbhakar, and Hjalmarsson (1995): where Y it is the output of the i-th firm observed in period t, f(.) represents the production technology, K is the physical capital, L is the non-R&D labour, R is the R&D capital input and θ is a vector of parameters to be estimated. The following flexible translog (transcendental) production function was assumed to approximate production technology: where the β's denote parameter estimates of the production function, i is the company, j and h denote inputs (i.e. logarithms of physical capital (k), non-R&D labour (l) and R&D capital (r)), and t is the time period (i.e. the year concerned). Also, Cobb-Douglas forms of the model were tested and found to apply only for the international data comparison. R&D elasticity, i.e. 
the percentage change of output divided by the percentage change of R&D, was obtained from the first derivative of the production function with respect to R&D: where E ijt is firm-, input-and time-varying, respectively. The rate of exogenous technical change was obtained as follows: (1) where TC it is neutral, if β jt = 0 for all inputs j. In other words, technical change merely represents the change in the production function with respect to time. Data envelopment analysis for technical efficiency measures Several estimation methods were used to confirm and check results. State-of-the-art true fixed and random effects estimate of technical efficiency proved unreasonable. Instead, for consistency and comparability of results, as well as to abstain from potentially distorting assumptions, non-parametric data envelopment analysis (DEA) was applied to estimate technical efficiency. DEA applies linear programming to compare relative performance when the production process involves multiple inputs and outputs. In contrast to stochastic frontier modelling, there is no need to specify a mathematical form for the production function beforehand, since the method simply seeks the points that maximize output given inputs (output-oriented measure) or minimize inputs given output (inputoriented measure). Hence, DEA efficiency results do not depend on the above formulation of the production function. Several programmes are available to carry out the linear programming problem. Hence, its complexity in terms of the number of inputs and outputs causes no constraint. Most efficient firms receive a score of one, and less efficient a score somewhere below one, but above zero. At the same time, the major drawback of the method is the fact that there is no adjustment for outliers. Yet, it is simple to check visually how the efficiency estimates are distributed and how "unreasonable" outliers are. The original constant returns DEA methodology was developed by Charnes, Cooper and Rhodes (1978). In 1984, Banker, Charnes and Cooper developed it further into a variable returns to scale (VRS) version. The differences in the input and output-oriented measures reveal whether returns to scale are not constant, decreasing or increasing. When input-based efficiency is smaller than the output-based, returns to scale are decreasing. 4 If returns to scale appear to be increasing, output-based efficiency measures are generally higher, but there is no clear rule on which measure should be selected. As a robustness check, so-called order-m efficiencies were also estimated. The methodologies are described in more detail, for instance, in Daraio and Simar (2007). Results According to the predictions of the model, if the industry is indeed at the frontier, average technical efficiency should be high, competition neck-and-neck and firms located on the upward sloping part of the inverted U-curve. That is, additional competition should increase innovation. Whether this is the case, is inspected by seeking answers to the following questions: Is average technical efficiency high in the industry? Has competition increased technical inefficiency or innovation? Does technical inefficiency or TFP provide a good measure of innovation? Is average technical efficiency high in the industry? Parametric efficiency estimates vary greatly by the methodology chosen. Hence, the assumptions underlying them appear to influence results significantly. 
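The displayed equations referred to in the two preceding paragraphs did not survive extraction. A reconstruction consistent with the surrounding definitions, assuming the standard translog-with-time specification (the exact parameterisation in the original may differ), is

\[ \ln Y_{it} = \beta_0 + \sum_j \beta_j x_{jit} + \tfrac{1}{2}\sum_j\sum_h \beta_{jh}\, x_{jit} x_{hit} + \beta_t t + \tfrac{1}{2}\beta_{tt} t^2 + \sum_j \beta_{jt}\, t\, x_{jit}, \]

with \(x_{jit} \in \{k_{it}, l_{it}, r_{it}\}\), so that the R&D elasticity and the rate of technical change follow as

\[ E_{irt} = \frac{\partial \ln Y_{it}}{\partial r_{it}} = \beta_r + \sum_h \beta_{rh} x_{hit} + \beta_{rt} t, \qquad TC_{it} = \frac{\partial \ln Y_{it}}{\partial t} = \beta_t + \beta_{tt} t + \sum_j \beta_{jt} x_{jit}, \]

where technical change is Hicks-neutral if \(\beta_{jt} = 0\) for all inputs \(j\).

The output-oriented DEA scores can be obtained by solving one small linear program per firm. The sketch below is a minimal SciPy implementation under constant or variable returns to scale; the function name and the toy data are my own, not the software used in the paper.

```python
import numpy as np
from scipy.optimize import linprog

def dea_output_efficiency(X, Y, vrs=True):
    """Output-oriented DEA: frontier firms score 1, others between 0 and 1.
    X: (n_firms, n_inputs) inputs; Y: (n_firms, n_outputs) outputs."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n, m = X.shape
    s = Y.shape[1]
    scores = np.empty(n)
    for f in range(n):
        # Decision variables: [phi, lambda_1, ..., lambda_n]; maximise phi.
        c = np.r_[-1.0, np.zeros(n)]
        # Outputs can be expanded:  phi * y_f <= sum_j lambda_j y_j
        A_out = np.hstack([Y[f].reshape(s, 1), -Y.T])
        # Inputs cannot increase:   sum_j lambda_j x_j <= x_f
        A_in = np.hstack([np.zeros((m, 1)), X.T])
        A_ub = np.vstack([A_out, A_in])
        b_ub = np.r_[np.zeros(s), X[f]]
        A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1) if vrs else None
        b_eq = np.array([1.0]) if vrs else None
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * (n + 1), method="highs")
        scores[f] = 1.0 / res.x[0]          # Farrell output efficiency
    return scores

# Toy example: three inputs (capital, non-R&D labour, R&D capital), one output.
rng = np.random.default_rng(1)
X = rng.uniform(1.0, 10.0, size=(50, 3))
Y = (X.prod(axis=1) ** 0.3)[:, None] * rng.uniform(0.7, 1.0, size=(50, 1))
print(dea_output_efficiency(X, Y).round(3))
```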
Therefore, after checking for outliers, results for non-parametric DEA measures are presented (Figure 4). Both input-and output-based DEA measures are high. The input-based measure showed the smallest firms as most efficient, while the output measure showed the largest firms to huddle closest to the frontier. Since their difference suggests increasing returns to scale, and firm size clearly contributes to efficiency, the largest firms appear to be the most efficient, and the output-based measure more reliable. Has competition increased technical inefficiency on average? Determinants of inefficiency show competition to contribute significantly to inefficiency in the Battese-Coelli inefficiency model estimated with maximum likelihood. Efficiency is an increasing function of profit margins. In contrast, foreign ownership and exporter status did not prove to be (4) significant determinants of inefficiency (Table 3). Exposure to global competition does not seem to affect technical efficiency. Moreover, the correlation between competition and technical efficiency was significant and negative: input-based DEA −0.15 and output-based DEA −0.40 (Table 4). Competition increases technical inefficiency, but the relationship is not linear, as Hanusch and Hierl (1992) have suggested. These results run counter to the neoclassical assumption that efficiency increases with competition, but are in line with the inverted U-curve pp. 71-72), i.e. increasing the threat of competition advances innovation in the more efficient firms, but dampens it in inefficient firms. Yet, Figures 5-7 provide a better fit than inverted U-curves. Table 3. Maximum likelihood estimates on panel data and determinants of Battese-Coelli and DEA technical inefficiency 1 (N = 928; δ = 30% R&D depreciation rates) Notes: The positive delta, e.g. for debt ratio indicates that the more indebted firms in the sample tend to be less efficient. A negative delta for a dummy variable like size and age imply an opposite relationship with inefficiency. A negative delta for the Lerner Index indicates that the higher the firm's profitability, the lower its inefficiency. Can inverted U-curves be found between competition and innovation? Results with respect to innovation are somewhat contradictory. As Table 4 shows, the correlation is contradictory with respect to innovation and competition. While competition is associated with significantly increased R&D intensity (1% level), and R&D elasticity (at the 5% significance level), competition is associated with significantly decelerated technical change (1% level). It may be that product related innovation is conducive to intense competition, but when it comes to process innovation (technical change), competition decelerates it. Yet, as Figures 8-10 below show, an inverted U-curve relationship between innovation and competition could only be found for technical change and competition. For R&D intensity and elasticity, the relationship was more of a U-curve. In all cases, however, the fit was not convincing. Does technical inefficiency provide a good measure of innovation? The relationship between technical efficiency and technical change Results do not support technical efficiency as an appropriate measure of innovation. As Table 4 shows, input-based DEA correlates significantly at the 1% level, but negatively with R&D elasticity (−0.14) and technical change (−0.28). Its correlation with R&D intensity is insignificant and almost zero. 
Hence, input-based DEA technical efficiency is not a proxy to innovation. As for output-based DEA, it correlates positively and significantly at the 1% level with technical change (0.22), and positively and significantly, at the 5% level with R&D intensity (0.08). Its correlations with R&D elasticity is insignificant and almost zero. Hence, output-based DEA may proxy technical change and perhaps R&D intensity, but the correlations are fairly small. The finding that technical efficiency is not a good measure of innovation questions the external validity of estimations of Bos et al. (2013), which used input-based (cost minimization) technical efficiency to proxy innovation to estimate the presence of an inverted U-curve between competition and technology gaps. Even in this respect, the relationship resembles that of a (non-inverted) U-curve. As Figure 11 shows, competition is minimized at a higher efficiency level. 5 In other words, most efficient firms have indeed escaped competition. Even if technical efficiency could proxy innovation, the relationship between technical efficiency and competition is far from a robust inverted U-curve. An important factor that distinguishes efficiencies is firm size. As Figure 12 shows, the most (output-based DEA) efficient largest and smallest firms enjoy actually the most rapid technical change. This is the result also on average for the sample. The positive relationship is pronounced only for the largest firms with respect to input-based DEA. Even with a quadratic function the relationship is straightforward, more efficiency is good for innovation ( Figure 13). For R&D elasticity, a vaguely inverted U-curve could be traced only for input-based DEA (Figure 14). There seems to be an efficiency optimum that maximizes innovation below full efficiency. Outputbased DEA shows inefficient firms as typically small. Input-based DEA, however, showed the smallest firms as most efficient. Profit margins increased with efficiency across the board regardless of firm size, and small firms have been more R&D intensive on average. Thus, contrary to the predictions of the inverted U-curve theory, small firms that have on average been furthest from the frontier have also been most keen to escape competition by means of innovation. In sum, the predictions of the inverted U-curve theory are controversial in relating innovation and the concept of efficiency. TFP and technical efficiency do not appear to provide adequate proxies of innovation. Output-based DEA may proxy technical change, and perhaps R&D intensity, but the correlations are small though significant. Output-based TFP correlates significantly (at the 1% level) and positively with technical change (0.20), but not with R&D intensity or R&D elasticity. TFP may proxy technical change, but the correlation is rather small. Does TFP provide a good measure of innovation? Innovation has also been proxied by the level and growth rate of TFP. For example, Nickell (1996) found evidence in line with the neoclassical postulation that more intense PMC is reflected in more rapid TFP growth. In my sample, in contrast, there is little correlation between TFP and innovation. Output-based TFP correlates positively significantly at the 1% level and with technical change (0.20), but not with R&D intensity or R&D elasticity (Table 4). As Figures 15-18 below show, a slight positive correlation could be detected only for technical change. In conclusion, TFP would not appear to provide a good measure of innovation. 
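The inverted-U diagnostics used above amount to regressing an innovation proxy on competition and its square and checking the sign and location of the quadratic term. The sketch below does this with statsmodels on simulated placeholder data; the variable names and the data-generating process are illustrative only, not the estimates reported in the tables.

```python
import numpy as np
import statsmodels.api as sm

def inverted_u_check(competition, innovation):
    """Fit innovation = a + b*comp + c*comp**2.  An inverted U requires c < 0
    with the turning point -b/(2c) lying inside the observed competition range."""
    X = sm.add_constant(np.column_stack([competition, competition ** 2]))
    fit = sm.OLS(innovation, X).fit()
    b, c = fit.params[1], fit.params[2]
    turning_point = -b / (2.0 * c) if c != 0 else np.nan
    return fit, turning_point

# Placeholder proxies: competition = 1 - Lerner, innovation = technical change.
rng = np.random.default_rng(0)
comp = rng.uniform(0.5, 1.0, 300)
tc = 0.02 + 0.05 * comp - 0.04 * comp ** 2 + rng.normal(0.0, 0.01, 300)
fit, peak = inverted_u_check(comp, tc)
print(fit.params.round(4), round(peak, 3))
```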
Discussion and conclusions

Average technical efficiency is high in the industry, and competition increases technical inefficiency on average. In these respects the evidence that the industry is on the technology frontier is clear, but overall the evidence in support of the inverted U-curve relationship is weak and contradictory, and results are sensitive to the proxies and methodologies applied. In addition to Schumpeterian models and the inverted U-curve, the finding that the most profitable firms and plants are also the most efficient, combined with the finding that profit margins increase with efficiency across the board regardless of firm size, is in line with so-called resource-based views (RBVs) of the firm, in contrast to traditional structure-conduct-performance or contemporary industrial organization views. Efficient small firms are also profitable, although large firms are generally the most efficient. Hence, the causality may run from efficiency (and innovativeness) to profit margins and firm growth; that is, there are efficiency rents that firms may be able to transform into long-term competitive advantages that generate abnormal returns. Overall, the industry seems to reflect the Schumpeterian Mark II hypothesis of creative accumulation rather than creative destruction. Competitive and innovation conditions can, at least to some extent, be tampered with, for example through generous R&D support to bridge the disincentive gap between private and social returns, hence their appeal. This evidence suggests, however, that tampering with competitive conditions to raise innovation is futile. Innovation within smaller firms is already relatively high in terms of R&D intensity, while technical change is R&D saving and the two correlate negatively. First, one should not confuse productivity with efficiency when discussing the beneficial effects of competition: competition may, for example, increase productivity, but not necessarily average efficiency. Second, when there are large differences in technical efficiencies that are due to factors other than innovation and competition, such as simple scale efficiencies, technological gaps may provide insufficient guidance on the impact of competition on innovation. Efficiency measures distance from the technology frontier, while it is technological progress that expands the production possibilities frontier through innovation. The most efficient firms are likely to be highly innovative, but for the rest, efficiency change merely measures imitation-based catch-up with frontier firms. Some level of slack may even be necessary in highly innovative industries, and efficiency-raising may even be counterproductive to innovation.
The Structure of the Milky Way's Bar Outside the Bulge While it is incontrovertible that the inner Galaxy contains a bar, its structure near the Galactic plane has remained uncertain, where extinction from intervening dust is greatest. We investigate here the Galactic bar outside the bulge, the long bar, using red clump giant (RCG) stars from UKIDSS, 2MASS, VVV, and GLIMPSE. We match and combine these surveys to investigate a wide area in latitude and longitude, |b|<9deg and |l|<40deg. We find: (1) The bar extends to l~25deg at |b|~5deg from the Galactic plane, and to l~30deg at lower latitudes. (2) The long bar has an angle to the line-of-sight in the range (28-33)deg, consistent with studies of the bulge at |l|<10deg. (3) The scale-height of RCG stars smoothly transitions from the bulge to the thinner long bar. (4) There is evidence for two scale heights in the long bar. We find a ~180pc thin bar component reminiscent of the old thin disk near the sun, and a ~45pc super-thin bar component which exists predominantly towards the bar end. (5) Constructing parametric models for the RC magnitude distributions, we find a bar half length of 5.0+-0.2kpc for the 2-component bar, and 4.6+-0.3kpc for the thin bar component alone. We conclude that the Milky Way contains a central box/peanut bulge which is the vertical extension of a longer, flatter bar, similar as seen in both external galaxies and N-body models. INTRODUCTION Gas kinematics and near-Infrared (NIR) photometry (e.g. Binney et al. 1991;Weiland et al. 1994) have shown that the Galactic bulge at longitudes |l| < 10 • contains a bar-like structure. These results have since been confirmed, and the barred bulge has been characterised and mapped with increasing accuracy. Using star counts of red clump giants (RCGs), Stanek et al. (1994Stanek et al. ( , 1997 reached a similar conclusion that the bulge was barred, a result since confirmed with increasing confidence and accuracy (Babusiaux & Gilmore 2005;Rattenbury et al. 2007). The studies using RCGs agree with results from brighter giant star counts (Lopez-Corredoira et al. 2005) and the COBE integrated NIR emission (Dwek et al. 1995;Binney et al. 1997;Freudenreich 1998;Bissantz & Gerhard 2002). Most authors have found a bar angle, defined as the angle of the major axis of the bar to the sun line-of-sight, in the range 20 − 30 • . Recent interest was stimulated by the discovery that close to the bulge minor axis at |b| 5 • the RCGs have two distinct magnitudes (McWilliam & Zoccali 2010;Nataf et al. 2010). This is because these lines-of-sight pass through both arms of an X-shaped structure which is characteristic of boxy/peanut (B/P) bulges in barred galaxies (McWilliam & Zoccali 2010;Ness et al. 2012). These B/P bulges arise naturally in N-body simulations of disk galaxies (e.g. Athanassoula 2005;Martinez-Valpuesta et al. 2006) and are common in external galaxies (Bureau et al. 2006;Laurikainen et al. 2014). This has stimulated further investigation of the full three dimensional structure of the E-mail: wegg@mpe.mpg.de bulge beginning with Saito et al. (2011) using 2MASS data. Vista Variables in the Via Lactea (VVV) has provided significantly deeper and more complete data and this was recently used to map the B/P bulge non-parametrically in three dimensions by Wegg & Gerhard (2013). It has also been suggested in NIR star counts using in-plane data at 10 • < l < 30 • that there is a less vertically extended ≈ 4 kpc bar at ≈ 45 • to the line of sight (Hammersley et al. 1994). 
This component has been termed the long bar. The existence of the long bar was confirmed using RCGs by Hammersley et al. (2000) and subsequently with increasingly more powerful NIR (Lopez-Corredoira et al. 2006;Cabrera-Lavers et al. 2007 and longer wavelength GLIMPSE data (Benjamin et al. 2005;Zasowski 2012). Understanding the nature of the long bar and its structure, amplitude, length and pattern speed is of great importance for many areas of Milky Way study. It influences for example the disk outside the bar (Minchev & Famaey 2010), the kinematics in the solar neighbourhood (Dehnen 2000), and the observed non-circular gas flow (Bissantz et al. 2003). One important unresolved issue is the relationship between the bar in the Galactic bulge at |l| < 10 • , and the long bar outside it. Studies of the long bar at l 10 • have often found a larger bar angle (typically ≈ 45 • , beginning with Hammersley et al. 1994) in comparison to the three-dimensional bulge (typically ≈ 25 − 30 • e.g. (27 ± 2) • : Wegg & Gerhard 2013). It has therefore been suggested that the Galaxy contains two bars, with the central three-dimensional bar not aligned to the long bar. However an in-plane long bar misaligned with a triaxial three-dimensional bar is difficult to reconcile In the top figure we show the surveys used in this study. We use, in order of preference, VVV in red, UKIDSS in green, and 2MASS in blue. Grey regions are those without data of sufficient depth i.e. close to the plane without VVV or UKIDSS data where 2MASS is insufficient. In the lower figure we show the surface density of stars in the K s -band in the extinction-corrected magnitude range 12.25 < K 0 < 12.75. Asymmetric number counts in l close to the plane are a result of non-axisymmetry due to the long bar. The star counts are smoothed with a Gaussian kernel of σ = 0.1 • . Extinction is corrected using the H − K colour excess as in equation (1) (i.e. K 0 ≡ µ K + M K,RC ) and data outside the colour bar range are plotted at its limit. dynamically; the suggested length ratio is low and therefore the mutual torques are strong. It has instead been suggested that rather than two distinct bars, the long bar is the in-plane extension of the central three-dimensional boxy/peanut structure structure (Martinez-Valpuesta & Gerhard 2011;Romero-Gómez et al. 2011). One of the motivations for this study is to help resolve this controversy. Throughout we use the terminology that the bar outside the bulge region at |l| > 10 • is the long bar, regardless of the details of thickness, bar angle, or alignment with the barred bulge. Our primary indicator of bar structure are RCGs which are core helium burning stars and provide an approximate standard candle (Stanek et al. 1994). We combine several surveys to have the widest view and the greatest possible scope on the bar density distribution. In the K s -band we use data from (i) the United Kingdom Infrared Deep Sky Survey (UKIDSS) Galactic Plane Survey (GPS, Lucas et al. 2008), (ii) the VVV survey (Saito et al. 2012) and, (iii) to extend the study further from the galactic plane than previous studies, we augment this with 2MASS data (Skrutskie et al. 2006). We homogenise the analysis of the surveys using a common photometric system and identify RCGs statistically in magnitude distributions rather than in colour-magnitude diagrams since this has worked well in the bulge (e.g. Nataf et al. 2013;Wegg & Gerhard 2013). 
We verify our results where possible using data at 3.6µm and 4.5µm, which is significantly less affected by extinction, taken from the Galactic Legacy Mid-Plane Survey Extraordinaire (GLIMPSE) survey on the Spitzer space telescope (Benjamin et al. 2005). Because this data only covers |b| 1 • we use the K s -band as our primary data, but the GLIMPSE data remains very important for cross checks, particularly of dust extinction. This work is organised as follows: In section 2 we describe the data and construction of magnitude distributions for the stars in bulge and bar fields. In section 3 we fit the red clump stars in these magnitude distributions and discuss the features of these fits. In section 4 we examine the vertical structure of the fitted red clump stars in longitude slices, and in section 5 we derive densities which fit and best fit the observed magnitude distributions. We discuss our results and place them in context in section 6, and finally conclude in section 7. MAGNITUDE DISTRIBUTION CONSTRUCTION A number of steps are required to combine the surveys (UKIDSS, VVV, 2MASS, GLIMPSE) and bands (H, K s , 3.6µm, 4.5µm) and construct consistent magnitude distributions: The surveys must be transformed to the same photometric system, extinction corrected, and to compare bands and convert to distances we require the characteristic magnitudes and colours of RCGs. H and K s -band data The first step in construction of the magnitude distributions is to transform all the surveys to the same photometric system. We choose to convert the UKIDSS and VVV surveys to the 2MASS system using the methods and transformations described in appendix A. where (H − K s ) RC is the intrinsic H − K s colour of RCGs, M K s ,RC is the absolute K s -band magnitude of RCGs, and A Ks E(H−K s ) is a constant which depends on the extinction law. The method of correcting for extinction on a star-by-star basis in equation (1) is similar to other studies with data close to the Galactic plane (e.g. Babusiaux & Gilmore 2005;Cabrera-Lavers et al. 2008). In the Galactic bulge most of the dust lies in a foreground screen making extinction correction using extinction maps possible. In-plane this is not true and necessitates the method of extinction correction in equation (1). In the near-infrared bands used in this work, the vast majority of the stars have colours close to that produced by a Rayleigh-Jeans spectrum, therefore this star-by-star extinction correction using the H − K s is relatively accurate (e.g. the NICE method, Lada et al. 1994). Moreover, we are concerned with RCGs in this work, which as well as being good standard candles naturally have very similar colours. For easy comparison between bands we work with the distance modulus, under the assumption that the stars are RCGs. Other stars form a background which we do not study. We use H − K s to correct rather than J − K s because the H-band has lower extinction and therefore remains deeper in high extinction regions. In addition H − K s is more constant across stellar types than J − K s improving the extinction correction (Majewski et al. 2011). To utilise equation (1) Nishiyama et al. (2009) using red clump stars close to the galactic plane towards the galactic center. This corresponds to A Ks E(H−K s ) = 1.37. In principle the H-band could be used as an additional confirmation of our results, however the adopted A H /A K s results in a H-band equivalent to equation (1) for µ H which is completely degenerate with µ K . 
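A minimal sketch of the extinction correction in equation (1), using the constants quoted in the text ((H − K_s)_RC = 0.09, A_Ks/E(H − K_s) = 1.37, M_Ks,RC = −1.72): every star is treated as a red clump giant, its H − K_s colour excess gives A_Ks, and the result is an extinction-corrected distance modulus µ_K. Non-RCG stars simply form the smooth background discussed below. The function names are our own.

```python
def rc_distance_modulus(H, Ks, r_k=1.37, hk_rc=0.09, M_Ks_rc=-1.72):
    """Extinction-corrected K_s distance modulus (equation (1)), assuming the
    star is a red clump giant: A_Ks = r_k * [(H - Ks) - (H - Ks)_RC]."""
    A_Ks = r_k * ((H - Ks) - hk_rc)
    return Ks - A_Ks - M_Ks_rc

def rc_distance_kpc(H, Ks, **kwargs):
    """Distance in kpc implied by the red clump assumption."""
    mu = rc_distance_modulus(H, Ks, **kwargs)
    return 10.0 ** (mu / 5.0 - 2.0)

# Example: a reddened clump candidate with H = 13.9, Ks = 13.5
print(rc_distance_kpc(13.9, 13.5))   # roughly 9 kpc
```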
Instead we use the GLIMPSE survey to confirm results where possible because the longer wavelengths are less susceptible to dust extinction. We estimate the absolute magnitudes of red clump stars in a two step process. First we estimate colours of red clump stars using the Padova isochrones. Constructing the luminosity function from the solar metallicity 10 Gyr isochrone with a Kroupa IMF and fitting a gaussian to the red clump results in red clump colour of H − K s = 0.09. This colour (and the GLIMPSE colours utilised later) are largely independent of age and metallicity, changing by less than 0.03 mag when varying age from 1 Gyr to 15 Gyr and metallicity from [Fe/H] = −0.7 to [Fe/H] = 0.17. Finally we use the calibration M K s ,RC = −1.72 which results in bulge red clump stars being approximately symmetric about R 0 ≈ 8.3 kpc (Wegg & Gerhard 2013). Catalogue Construction To utilise the star-by-star extinction correction methods we require band matched catalogs to calculate the reddening of each individual star. UKIDSS and 2MASS provide band matched catalogues while for the VVV survey we match between the H and K s -band DR2 catalogs using a radius of 1 . In combining the surveys we preferentially use VVV data when available, falling back to UKIDSS and finally 2MASS. The resultant footprint of the surveys is shown in the upper panel of Fig. 1. In the lower panel of Fig. 1 we show the surface density of stars in the extinction corrected K s -band of our combined data. The long bar The difference in K s -band extinction calculated using two methods. A Ks (EHK) is calculated using the H − K s reddening from equation (1). A Ks (RJCE) is the K s -band extinction calculated by converting the H −[4.5µ] reddening from equation (2) to A Ks . All stars with 12.5 < µ K < 14 are included. Contours are equally spaced in density at 10, 20... 90% of the peak density. Error bars represent the sigma clipped mean and standard deviation binned as a function of A Ks (RJCE). . Example histograms of distance modulus, µ, calculated using equations 1 and 2, assuming that all stars are red clump stars. We show K s -band (black), 3.6µm (red) and 4.5µm (blue). We also show the fits to the histograms made using equation (3) as the solid lines. For legibility the 3.6µm, and 4.5µm bands are offset by log N = 0.5 and 1 with respect to the axis. This example field has its center at l = 18.5 • b = 0.9 • and has size ∆l = 1 • , ∆b = 0.3 • . corresponds to the non-axisymetric in-plane enhancement of stars at positive longitudes. GLIMPSE data The longer wavelength GLIMPSE has lower dust extinction and we use it to cross check the K s -band data where possible. We use the H − [4.5µ] colour to correct for extinction corresponding the RJCE . Parameters of the Gaussian fits to red clump stars in the K s -band using equation (3). In the top panel we show the number of red clump stars N rc . In the middle panel we show the distance to the fitted peak of the red clump stars, d rc (calculated from µ rc ). In the lower panel we show the dispersion in magnitudes of the fit to the red clump, σ rc . Grey regions are those where no well localised fit to the RCG distribution was possible, either because the surveys where not deep enough in that field, or the fitted dispersion was too large to represent RCGs towards the Galactic centre. method (Majewski et al. 2011): To utilise the H − [4.5µ] colour for extinction correction we must band match the GLIMPSE data to surveys providing H-band measurements. 
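The band matching described here, and continued in the next paragraph, is a nearest-neighbour positional cross-match within 1 arcsec. The sketch below uses astropy's catalogue matching; the array names are placeholders for the GLIMPSE and VVV/UKIDSS H-band source lists, and this is a generic illustration rather than the exact pipeline used in the paper.

```python
import numpy as np
from astropy import units as u
from astropy.coordinates import SkyCoord

def band_match(ra1_deg, dec1_deg, ra2_deg, dec2_deg, radius_arcsec=1.0):
    """Match catalogue 1 (e.g. GLIMPSE) to catalogue 2 (e.g. VVV or UKIDSS H band)
    by nearest neighbour on the sky, keeping pairs closer than radius_arcsec."""
    c1 = SkyCoord(ra=np.asarray(ra1_deg) * u.deg, dec=np.asarray(dec1_deg) * u.deg)
    c2 = SkyCoord(ra=np.asarray(ra2_deg) * u.deg, dec=np.asarray(dec2_deg) * u.deg)
    idx, sep2d, _ = c1.match_to_catalog_sky(c2)
    good = sep2d < radius_arcsec * u.arcsec
    # idx[i] is the best match in catalogue 2 for source i; use only where good[i].
    return idx, good
```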
We have band matched to UKIDSS and VVV Hband data, using a radius of 1 , while GLIMPSE already provides matches to 2MASS. We utilise the deeper UKIDSS and VVV Hband data over 2MASS to correct for extinction in GLIMPSE when possible. This is particularly important close to the galactic plane where otherwise 2MASS would limit the depth of the GLIMPSE data to brighter than the clump. In Fig. 2 we compare the K s -band extinction calculated using the RJCE method (equation (2)) to the value determined from the H − K s reddening used by equation (1). We find good agreement between the measurements using the two methods. The mean difference is less than 0.05 mag at all extinctions. The standard deviation in the difference between the estimates is typically less than 0.1 mag at magnitudes corresponding to long bar red clump stars. The majority of this dispersion is due to the photometric error in the 4.5µm band which contributes 0.07 mag at low extinction. We have performed the same comparison as a function of galactic longitude and latitude and found no significant differences. This confirms that the extinction law does not vary significantly across the region considered, and that the different surveys are consistent. In Fig. 3 we show, for an example field, histograms calculated using equations 1 and 2. In this field RCG stars in the bar are visible as the 'bump' of stars at µ ≈ 13.5 above the smooth background of non-RCG bar stars. GAUSSIAN RED CLUMP FITS To the magnitude distribution in each field we attempt to fit a Gaussian representing the red clump together with a exponential background (Stanek 1995;Nataf et al. 2013): (3) We assume that the error is the Poisson error on the number in each magnitude bin and fit this equation using χ 2 . Each fit is performed using an Markov Chain Monte Carlo (MCMC) of 20,000 steps and flat priors, starting from the maximum likelihood position. In regions covered by UKDISS or 2MASS we fit over the range 11.5 < µ K < 15.5, in the VVV bulge region we use the range 12 < µ K < 16, and in the VVV disk region we use the range (1) with the fitted exponential representing non-red clump stars subtracted. The data is divided by the volume of each bin to approximately convert the histograms to densities in kpc −3 . In the top row of plots we show these densities on a log scale. The lower rows are the same data, but each line-of-sight is normalised to its peak. On the 0.75 b < 1.05 slice we also plot the red clump distances found by Nishiyama et al. (2009) in blue, and in black, using our adopted m K,RC = −1.72. To reduce noise the histograms are smoothed using boxcar smoothing with a width of 0.15 mag, corresponding to three histogram bins. Red lines show longitudes of l = −30, −20, −10, 10, 20, 30 • and lines at α = 27 • and 45 • to the sun line-of-sight with half length 4.7 kpc. 12 < µ K < 16.5. These limits were determined by visual inspection. The GLIMPSE µ 3.6µ and µ 4.5µ data is shallower and in this case we extend the fit to 10.5 at the bright end, while the faint end is empirically determined in each field: In each field we estimate the distance modulus at which the counts drops to 50% of the peak due to incompleteness and fit to a maximum µ of 0.75 mag brighter than this. Examples of these fits are shown in Fig. 3. The resultant fitted parameters, N RC , µ RC , and σ RC , for the K sband are shown in Fig. 4. In this plot we only consider the fit as good if the red clump is detected at > 3σ i.e. 
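A minimal sketch of the fit in equation (3): an exponential background of non-RC giants plus a Gaussian red clump, fitted to the binned µ_K distribution with Poisson errors. The parameterisation of the counts per bin and the starting values are our own choices; the MCMC refinement described in the text could be layered on top (for example with emcee), starting from this least-squares solution.

```python
import numpy as np
from scipy.optimize import curve_fit

def rc_model(mu, A, B, N_rc, mu_rc, sigma_rc, bin_width=0.1):
    """Counts per magnitude bin: exponential background plus Gaussian red clump
    (our reading of equation (3)); N_rc is the total number of clump stars."""
    background = A * np.exp(B * (mu - mu_rc))
    clump = (N_rc * bin_width / (np.sqrt(2.0 * np.pi) * sigma_rc)
             * np.exp(-0.5 * ((mu - mu_rc) / sigma_rc) ** 2))
    return background + clump

def fit_field(mu_bins, counts, bin_width=0.1):
    """Chi-square fit with Poisson errors sqrt(N), floored at 1 for empty bins."""
    sigma = np.sqrt(np.clip(counts, 1, None))
    p0 = [counts.mean(), 0.7, 0.1 * counts.sum(), mu_bins[np.argmax(counts)], 0.3]
    popt, pcov = curve_fit(lambda m, *p: rc_model(m, *p, bin_width=bin_width),
                           mu_bins, counts, p0=p0, sigma=sigma, absolute_sigma=True)
    return popt, np.sqrt(np.diag(pcov))
```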
N RC is at least three times larger than its error, and the background slope B > 0.55. The second criterion is equivalent to rejecting fields which are significantly incomplete at the faint magnitude limit since in this case the slope of the fitted background is reduced. We also require that the best fitting red clump magnitude lies within the range of magnitudes fitted, and that the dispersion of the fitted red clump, σ RC < 0.6 since visually fits that do not meet this criterion appear spurious. Finally we exclude all fields at l < −10 • without VVV data since 2MASS data is not deep enough the detect the clump in this region and the few fits which passed the previous criteria were visually rejected. We show in Fig. 6 the histogram of fitted values of B for fields that pass the other selection criteria. Several features of the fits in Fig. 4 are worth noting: (i) the red clump is detected and fitted up to |b| ≈ 5 • at l > 10 • out to l ≈ 20 • , outside the traditionally defined bulge region. In this region the fitted distance at the same l is close to the distance fitted in the Galactic plane, with an increased dispersion. (ii) Fits no longer pass our criteria for a well fitted red clump at l 30 • . This is similar in longitude to that identified as the end of the long bar in the literature (e.g. Lopez-Corredoira et al. 2006;Cabrera-Lavers et al. 2007). We address this in greater detail in subsection 5.5 and subsection 6.3. (iii) The feature visible from l = −20 • to l = −25 • in the surface density of stars in Fig. 1 is not visible here. This is either because it is composed exclusively of young stars which have not yet evolved to become clump stars, or is too extended along the line-of-sight to give a well localised red clump. (iv) Along the minor axis of the (3). The grey region are fields which are subsequently rejected for having B < 0.55 and therefore being significantly incomplete. The majority of these are crowded central bulge fields. If all stars are giants then because of the power law slope of the luminosity function on the giant branch we would expect this parameter to lie in the range 0.6 B 0.78 (Méndez et al. 2002). bulge at |b| > 5 • the fitted dispersion is larger than off-axis. This is a result of using one Gaussian to fit the wider split red clump (Nataf et al. 2010;McWilliam & Zoccali 2010) in this region. (v) In the Galactic plane at l > 10 • the scatter in fitted distance and dispersion are considerably smaller than in Cabrera-Lavers et al. (2008). In Fig. 5 we show the raw histograms but with the exponential background of non-red clump stars subtracted i.e. we subtract the fitted A exp B(µ − µ RC ) from each histogram. This background subtracted histogram for each line-of-sight is then plotted as if viewed from the north Galactic pole. In addition we divide the number in each histogram bin by its volume, where the volume of each equally spaced bin in magnitude increases with distance like ∝ d 3 . This is equivalent to plotting the density assuming that red clump stars are perfect standard candles. From these plots it is already evident that the bar more closely matches the bar angle shown in the figure of α = 27 • (e.g. Wegg & Gerhard 2013) than ≈ 45 • (e.g. Cabrera-Lavers et al. 2008), although the data at low latitudes shows a possible curvature towards the end of the bar in a leading sense similar to that suggested by Martinez-Valpuesta & Gerhard (2011). We have performed the same analysis with the 3.6µm and 4.5µm bands. The equivalent plots to Figs. 
4 and 5 are shown figures B1-B3 in Appendix B. The GLIMPSE data for b > 0.45 • are consistent with the K s -band data. In particular, they independently support a bar angle near 27 • . At b < 0.45 • GLIMPSE does not provide sufficient completeness for RCGs for a good comparison. In summary we find that the long bar matches a bar angle of 27 • more closely than 45 • in both the K s -band and the GLIMPSE data, and that the long bar is detected to |b| = 5 • outside the bulge at l < 20 • . LONG BAR VERTICAL PROFILE: LONGITUDE SLICE FITS We estimate the scale height by performing fits to the number of red clump stars as a function of latitude. We treat each longitude slice independently and fit the N rc found in from equation (3) . The vertical profile of the surface density of red clump stars identified in the K s -band in several longitude slices. The blue curve is the best fitting single exponential to the K s -band, while the best fitting double exponential is the red curve. Error bars show the statistical error resulting from fitting equation (3) to each field. exponential: where b 0 is an offset and b 1 the exponential scale height. In fields from 15 • l 25 • we find that a single exponential is a poor fit to our data, which extends to higher latitudes than previous investigations. We therefore additionally fit a double-exponential: (5) Examples of these fits for the long bar region, where we find evidence for two scale heights, are shown in figures 7 and 8. In Fig. 7 we show the K s -band data and the fits to these profiles. In Fig. 8 we show the GLIMPSE data but with the fits to the K s -band overplotted. The GLIMPSE data do not extend sufficiently far from the galactic plane to verify the double exponential fits, however the agreement in the region close to the galactic plane is an important check since this is the region where the extinction is greatest and the longer wavelength GLIMPSE data is affected less. The resultant scale heights of these fits are shown in Fig. 9. The black points show the exponential scale height of RCGs. The scale height increases from the bulge minor axis to a peak near the end of the Box/Peanut region, before continuously decreasing through the long bar region to < 1 • near the bar end. In the long bar region where we find that a double-exponential fits the data better the scale heights are typically ≈ 0.5 • and ≈ 2 • . At l = 20 • the bar lies at a distance of ≈ 5.2 kpc and these therefore correspond to scale heights of ≈ 45 pc and ≈ 180 pc. The ≈ 180 pc scale-height component is similar in scale height to the thin disk in the solar neighbourhood and we therefore analogously refer to it as the thin bar. This is appropriate since the scale height of thin disks in edge-on external 1981;de Grijs & Peletier 1997). We refer to the significantly thinner ≈ 50 pc component as the super-thin bar, both due to its small scale height and in analogy to super-thin components found in edge-on external galaxies (Schechtman-Rook & Bershady 2013). We find that the fitted profiles are not symmetric about b = 0 but instead require an offset. In Fig. 10 we show the fitted offset It slightly overestimates the offset at l 15 • although typically by < 0.05 • . This could be explained by the apparent bar density peak lying behind the geometrically thin prediction for these longitudes, as is also seen in Fig. 5. 
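To illustrate the profile fits in equations (4) and (5), the sketch below fits a single exponential and a thin-plus-super-thin double exponential to the N_RC(b) points of one longitude slice, with the two components sharing a common mid-plane offset. The starting values are arbitrary, and the conversion from angular to physical scale height at an assumed bar distance of 5.2 kpc reproduces the ~180 pc and ~45 pc figures quoted above.

```python
import numpy as np
from scipy.optimize import curve_fit

def single_exp(b, n0, b0, h):
    """Equation (4): exponential vertical profile, offset b0 and scale height h (deg)."""
    return n0 * np.exp(-np.abs(b - b0) / h)

def double_exp(b, n1, h1, n2, h2, b0):
    """Equation (5): thin + super-thin components sharing the same offset b0."""
    return (n1 * np.exp(-np.abs(b - b0) / h1)
            + n2 * np.exp(-np.abs(b - b0) / h2))

def fit_vertical_profiles(b, n_rc, n_rc_err):
    p_single, _ = curve_fit(single_exp, b, n_rc, p0=[n_rc.max(), 0.0, 1.0],
                            sigma=n_rc_err, absolute_sigma=True)
    p_double, _ = curve_fit(double_exp, b, n_rc,
                            p0=[n_rc.max(), 2.0, n_rc.max(), 0.5, 0.0],
                            sigma=n_rc_err, absolute_sigma=True)
    return p_single, p_double

def scale_height_pc(h_deg, distance_kpc=5.2):
    """Angular scale height to parsecs: 2 deg -> ~180 pc, 0.5 deg -> ~45 pc at 5.2 kpc."""
    return 1000.0 * distance_kpc * np.tan(np.radians(h_deg))
```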
We conclude that the long bar, and barred bulge is consistent with lying in the physical mid-plane of the galaxy to within approximately 0.05 • in latitude, or around 5 pc for these distances. This could equally be seen as evidence supporting the assumptions that Sgr A*lies in the Galactic mid-plane, and the sun lies 25 pc above the mid-plane. In Fig. 11 we show the number of stars integrated over b as a function of longitude and the integrated number of stars converted to a number of stars along the bar major axis, assuming a thin bar with angle α = 27 • . While the errors are large, the number of stars in the thin bar component in these plots decreases along the bar in a manor consistent with an exponential. Fitting the lower panel of Fig. 11, where we assume that the bar is thin and lies at 27 • to the sun line-of-sight gives the exponential scale length of this component to be 1.5 kpc, somewhat shorter than the thin disk scale length of 2 − 3 kpc (e.g. 2.5 kpc: Binney et al. 1997 (4) and (5). In black we show the result from fitting a single exponential (equation (4)). In red and orange we show the results from fitting a a double exponential (equation (5)) in the region where this is a significantly better fit. In the lower panel we show the same results with longitude converted to distance along the major axis of the bar assuming that it is geometrically thin along the line-of-sight and lies at bar angle α = 27 • . a peak of age ∼ 1 Gyr (Girardi & Salaris 2001;Salaris & Girardi 2002). The larger scale height component would then correspond to old red clump stars analogous to old thin disk stars in the solar neighbourhood. We discuss this scenario further in section 6 after our more detailed modelling. We also note that both the scale height and the total number of red clump stars transitions smoothly from the bulge region (|l| < 10 • ) to the long bar region (10 < l < 30 • ), providing further evidence that the two structures are not distinct. BEST FITTING PARAMETRIC DENSITY MODELS In this section we construct bar/bulge models that match the magnitude distributions and describe their key features such as the resultant bar angle, bar length and bar mass. Our approach is to model the stellar density, and convolve this with suitable luminosity functions to produce model magnitude distributions. The stellar density is then adjusted until the model magnitude distributions match the extinction corrected magnitude distributions. We again focus on the K s -band because of the wider and deeper coverage than GLIMPSE. We fit the magnitude distributions over the range 11 µ K 15; this range corresponds to 1.6 kpc < r < 10 kpc and therefore contains red clump stars in the bulge and long bar. Their nature as an approximate standard candle, together with their abundance, provides the statistical power in the method. We model only the density and do not attempt to build dynamical models. In the future we will use the observational constraints from this work together with the made-to-measure method to construct self-consistent models whereby the stellar density generates the potential in which the stars orbit, similar to Portail et al. (2015). In principle fitting a non-parametric model similar to Wegg & Gerhard (2013) would be preferable. Unfortunately the signal-to-noise of RCGs relative to the background of non-RCGs is significantly smaller in each field in the long bar than in the bulge region. The parametric models are fitted all fields simultaneously, thus improving the bar signal. 
The planned made-to-measure modelling would again be non-parametric, but would connect different fields by requiring that the model be dynamically self-consistent. We regard the parametric models constructed in this section as a useful step in uncovering the properties of the data even though they do not statistically match the data perfectly. Our model density consists of three distinct parts: (i) An N-body model taken from Portail et al. (2015). These Nbody models were adjusted to fit the density and kinematics in the galactic bulge at |l| < 10 • and therefore fit this region well. As in Portail et al. (2015) the model is placed so that the bar is at an angle of α = 27 • . We use an SPH cubic spline kernel to evaluate a smooth density field from the N-body model. For clarity we repeat the formulae here following the notation of Hunt & Kawata (2013). We use the cubic spline kernel otherwise. (6) The density at a point x x x i is then given by where the sum runs over all N-body particles, m j is the mass of particle j. The smoothing length of particle j is evaluated from the local density through where η is a parameter for which we have chosen η = 3. Equations 7 and 8 are solved iteratively until the difference between iterations is less than 10 −3 . The choice of both these parameters mirrors Hunt & Kawata (2013 where x, y, z are right-handed galactocentric coordinates so that x is orientated along the bar major-axis, y along the bar intermediate-axis, and z towards the north Galactic pole. R = x 2 + y 2 is the galactocentric radius. We use a Gaussian cutoff for the inner and outer edges of the bar: Note that because of the Gaussian cutoffs the bar mass is less than the parameter M bar . The inner cutoff is required because the N-body model already fits the magnitude distributions in the central region well as it was tailored to fit the data there using the made-to-measure method in Portail et al. (2015). (iii) The Galactic disk scale length of the N-body model is 1.1 − 1.2 kpc, significantly shorter than that of the Milky Way, ≈ 2 − 3 kpc (see section 4). This is true in many N-body bar models formed from initially unstable disks where the bar length is typically several disk scale lengths. That the long bar of the N-body model is less prominent than in the data is also related to the discrepancy in disk scale length, since moving outwards along the bar there are fewer stars than in the Milky Way. Here we do not attempt to model the disk to reconcile the discrepancy between the N-body disk and the Milky Way. Fitting the disk would be an involved task worthy of a separate work, while here we concentrate on features of the non-axisymmetric part of the density i.e. the bar. Instead we add a best fitting additional exponential component to each modelled field magnitude distribution independently: The result is that we are fitting the difference between the magnitude distribution and an exponential. We have verified that for the fields considered in this work, over the magnitude range considered, that the Besançon Model (Robin et al. 2003) predicts a nearly exponential magnitude distribution. Note that if the disk had a central hole that was significant for this work then the lack of RCGs in this region would result in a dip in the magnitude distributions in a similar manner to the peak caused by the bar. Since this is not seen in the fields fitted here we consider equation (11) adequate. 
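As an illustration of the kernel density estimate in equations (6)-(8), the sketch below evaluates an adaptive cubic spline (M4) density for an N-body particle set. The kernel normalisation is the common 3D form, which we assume matches the Hunt & Kawata (2013) expressions referenced above; for simplicity the gather form (the target particle's own smoothing length) is used rather than each neighbour's h_j, and the fixed point h_j = η (m_j/ρ_j)^(1/3) with η = 3 is iterated as in the text. The pure-Python loop is slow but keeps the logic explicit.

```python
import numpy as np
from scipy.spatial import cKDTree

def cubic_spline_w(r, h):
    """3D cubic spline (M4) kernel with the common 8/(pi h^3) normalisation."""
    q = np.asarray(r, float) / h
    w = np.zeros_like(q)
    inner = q <= 0.5
    outer = (q > 0.5) & (q <= 1.0)
    w[inner] = 1.0 - 6.0 * q[inner] ** 2 + 6.0 * q[inner] ** 3
    w[outer] = 2.0 * (1.0 - q[outer]) ** 3
    return 8.0 / (np.pi * h ** 3) * w

def adaptive_density(pos, mass, eta=3.0, n_iter=20, tol=1e-3):
    """Iterate rho_j = sum_i m_i W(|x_j - x_i|, h_j) with h_j = eta*(m_j/rho_j)**(1/3)."""
    pos, mass = np.asarray(pos, float), np.asarray(mass, float)
    tree = cKDTree(pos)
    n = len(mass)
    h = np.full(n, np.median(tree.query(pos, k=8)[0][:, -1]))  # rough initial guess
    rho = np.ones(n)
    for _ in range(n_iter):
        rho_new = np.empty(n)
        for j in range(n):
            idx = tree.query_ball_point(pos[j], h[j])
            r = np.linalg.norm(pos[idx] - pos[j], axis=1)
            rho_new[j] = np.sum(mass[idx] * cubic_spline_w(r, h[j]))
        if np.max(np.abs(rho_new - rho) / rho_new) < tol:
            rho = rho_new
            break
        rho = rho_new
        h = eta * (mass / rho) ** (1.0 / 3.0)
    return rho, h
```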
In order to match the magnitude distributions in each field we convolve the resultant density from (i) and (ii) with a luminosity function to predict the number of stars as a function of µ K before adding the disk from (iii) seperately. For clarity of notation we define the colour C ≡ H − K s , the corresponding reddening free colour which is a constant given by the extinction law. Consider Φ(M K , M C ) dM K dM C as the joint number of stars produced per unit mass between M K to M K + dM K and M C to M C + dM C where capital M denotes the absolute magnitude in the respective band. This two-dimensional colour-magnitude analogue to the luminosity function can be readily calculated by populating isochrones in a similar manner to the luminosity function. Define N(K s ,C) dK s dC as the number of stars observed in a pencil beam line of sight with solid angle ω jointly in the range K s to K s + dK s and C to C + dC. Then the colour-magnitude version of the equation of stellar statistics is where A K s (r) and E C (r) are the K s -band extinction and the reddening in C ≡ H − K s along the line-of-sight respectively. This can also be written in terms of distance modulus where ∆ is given by ∆(µ) ≡ (ln10/5)ωρr 3 expressed in terms of µ. Changing variables from (K s ,C) to (µ K , M C ) allows us to calculate N(µ K ) dµ K , the number of stars from µ K to µ K + dµ K . This is the quantity which we compare to the data. Using equation (1) then For an extinction law characterised by the constant R K ≡ A Ks E(H−K s ) the extinction can be calculated from the reddening, Therefore where This differs slightly from the luminosity function in M K because not all stars share the color of the red clump. Because we use the extinction free magnitudes (equation 1) extinction does not enter in the final equation provided the survey is complete and the extinction law is correct. This is true of all stars, not just RCGs. Practically the convolutions in equation (17) are performed as parallel FFTs for speed. The exponential in N(µ K ) used to represent the disk described in (iii) is then added to the result of equation (17). The luminosity function Φ µ K is calculated from the BASTI isochrones (Pietrinferni et al. 2004). We fiducially use the 10 Gyr, α-enhanced isochrones together with a Kroupa (2001) IMF and the metallicity distribution measured by Zoccali et al. (2008) in Baade's window to generate the luminosity function. To allow for the uncertainty in red clump magnitude, when fitted we allow a global shift in the luminosity function ∆K. We then fit by minimising the χ 2 between the model and observed magnitude distributions assuming that the error is the Poisson error in observed number. If we label magnitude bins i and fields j then we minimise: where N model i, j is the prediction of the model and N i, j the observed number. We use bins in magnitude of 0.05 which are sufficiently narrow to not artificially broaden the luminosity function. We fit only fields where the exponential slope fitted to the background in section 3 was greater than 0.55 since, as discussed in that section, smaller values indicate significant incompleteness at the faint end resulting in a smaller slope. Throughout we place the Sun a distance R 0 = 8.3 kpc from the Galactic center (Reid et al. 2014;Chatzopoulos et al. 2014) and 25 pc above the Galactic plane (Maíz-Apellániz 2001; Chen et al. 2001;Jurić et al. 2008), and Sgr A*in the physical Galactic mid-plane at b = −0.046 • . 
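A minimal sketch of the convolution step in equation (17): for a pencil beam of solid angle ω, the counts per unit distance modulus are the luminosity function convolved with ∆(µ) = (ln 10/5) ω ρ(r) r³. The FFT convolution below assumes both functions are sampled on the same uniform µ grid and that the luminosity-function array is centred on absolute magnitude zero; the toy red-clump-only luminosity function and the exponential density are placeholders, not the BASTI-based Φ_{µK} or the fitted model used in the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def predicted_counts(mu_grid, rho_of_r, lf, omega_sr):
    """Equation (17) sketch: N(mu) = integral Phi(mu - mu') Delta(mu') dmu',
    with Delta(mu) = (ln10/5) * omega * rho(r) * r^3 and r in kpc.
    lf must use the same step as mu_grid and be centred on M = 0."""
    dmu = mu_grid[1] - mu_grid[0]
    r_kpc = 10.0 ** (mu_grid / 5.0 - 2.0)
    delta = (np.log(10.0) / 5.0) * omega_sr * rho_of_r(r_kpc) * r_kpc ** 3
    return fftconvolve(delta, lf, mode="same") * dmu

# Toy example: exponential disk density and an RC-only Gaussian luminosity function.
mu = np.arange(8.0, 17.0, 0.05)
M_grid = np.linspace(-4.5, 4.5, 181)                 # centred on M = 0, step 0.05
lf = np.exp(-0.5 * ((M_grid + 1.72) / 0.2) ** 2)     # toy clump at M_Ks = -1.72
counts = predicted_counts(mu, lambda r: 1.0e6 * np.exp(-r / 2.5), lf,
                          omega_sr=0.3 * (np.pi / 180.0) ** 2)
```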
This results in a slight tilt of the b = 0 plane with respect to the physical Galactic mid-plane (Fig. 2 of Goodman et al. 2014), but well within the errors when the Galactic coordinate system was originally defined. To move from axes x, y, z aligned with the principle axes of the bar to Galactic coordinates we first rotate by bar angle α about the z-axis, then move the sun to R 0 = 8.3 kpc and z 0 = 25 pc from the Galactic centre, and finally tilt the Galactic coordinate system by 0.12 • towards Sgr A* (Goodman et al. 2014). We describe the fitting as a three stage process: We first fit the N-body model to the central fields in subsection 5.1, we then add an additional component to fit the insufficient long bar component in the N-body model in subsection 5.2, and finally we add an additional component required to adequately fit the super-thin component towards the bar end in subsection 5.3. The model we regard as our best model is that with two parametric long bar components. We give the fitted parameters in Table 1 and the resultant physical quantities such as the bar length and the mass of the barred components in Table 2. Bulge Fitting To demonstrate and check the method we first fit the central bulge region, |l| < 10 • . We take the model named M85 from Portail et al. (2015) as a fiducial fit. We fit for two global parameters: the normalisation of the model density G, and an offset in red clump magnitude ∆K from that assumed by the luminosity function constructed from the isochrones. We show in the top left panel of Fig. 12 the resultant χ 2 value in each field across the entire range of longitudes, although only the central region was fitted. We also show in the top right panel of Fig. 12 the mean absolute fractional error in each field over a magnitude range covering the red clump. Comparing these panels it can be seen although that the reduced χ 2 is formally poor in the central fields the fractional error is extremely small. The larger χ 2 here is due to the large numbers of stars and resultant small Poisson errors. The reverse is true at |b| 8 • : the small number of stars results in larger statistical errors and therefore larger mean absolute fractional deviations, however the χ 2 values demonstrate the fit is statistically good. It is evident that the model fits poorly in the fields close to the plane at l > 10 • . This is a result of the insufficient long bar component in the N-body model in comparison to the Milky Way. In the left hand column of Fig. 13 we show some example fields and the resultant fits. Again it is clear that the fit in the bulge region is excellent, however the long bar is insufficient outside this central region. The fact that the N-body model together with the disk model fits well in the bulge region shows that the disk model is reasonable, since the N-body model was constructed to fit bulge-only deconvolved data. We give our fitted parameters in Table 1. The best fitted shift in red clump magnitude, ∆K, is 0.17, suggesting that either R 0 is slightly less than our adopted 8.3 kpc, or bulge red clump stars are slightly brighter than assumed in the luminosity function. Since our assumed bulge luminosity function places the red clump at −1.59, this would correspond to a fitted red clump absolute magnitude of M K,RC = −1.75 − 5 log[R 0 /8.3 kpc]. 
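The sketch below implements the coordinate chain just described: rotate bar-aligned axes by the bar angle α about the z-axis, place the Sun at R₀ = 8.3 kpc and 25 pc above the mid-plane, and apply the small 0.12° tilt towards Sgr A*. The sign conventions (near end of the bar at positive longitude, tilt bringing Sgr A* to b ≈ −0.05°) are our reading of the text and should be checked against the definitions above.

```python
import numpy as np

def bar_to_galactic(xyz_bar_kpc, alpha_deg=28.0, R0_kpc=8.3,
                    z0_kpc=0.025, tilt_deg=0.12):
    """Map bar-frame coordinates (x along the bar major axis, z toward the NGP)
    to Galactic longitude, latitude (deg) and heliocentric distance (kpc)."""
    x, y, z = np.atleast_2d(np.asarray(xyz_bar_kpc, float)).T
    a = np.radians(alpha_deg)
    # 1. Rotate about z by the bar angle: the new X axis points from the
    #    Galactic centre towards the Sun's in-plane position.
    X = x * np.cos(a) - y * np.sin(a)
    Y = x * np.sin(a) + y * np.cos(a)
    # 2. Heliocentric frame: Sun at (R0, 0, z0); xh points towards the centre.
    xh, yh, zh = R0_kpc - X, Y, z - z0_kpc
    # 3. Tilt the b = 0 plane by 0.12 deg towards Sgr A* (rotation about yh).
    t = np.radians(tilt_deg)
    xt = xh * np.cos(t) - zh * np.sin(t)
    zt = xh * np.sin(t) + zh * np.cos(t)
    d = np.sqrt(xt ** 2 + yh ** 2 + zt ** 2)
    l = np.degrees(np.arctan2(yh, xt))
    b = np.degrees(np.arcsin(zt / d))
    return l, b, d

# With these conventions the Galactic centre lands at l ~ 0, b ~ -0.05 deg.
print(bar_to_galactic([0.0, 0.0, 0.0]))
```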
This is brighter than the M K,RC = −1.61 measured from nearby Hipparcos stars (Laney One Component Long Bar Fitting We now add an additional parametrised function to represent the deficient long bar in the N-body model clear from subsection 5.1. The functional form given in equation (9) is equivalent to the exponential shape function fitted in Robin et al. (2012). We choose this form based on the results of section 3 and section 4 that the vertical structure is approximately exponential. The resultant horizontal density profiles are elliptical in the case of c ⊥ = 2 and boxy for c ⊥ > 2. We have added an additional Gaussian inner cutoff function since the N-body model already fits well in the central region. We fit this functional form, together with the N-body model and minimise χ 2 in fields with l > −10 • . The best fitting parameters are given as the second row of Table 1. The reduced χ 2 and mean absolution deviation are plotted in the second row of Fig. 12 In the left-hand three panels we show the reduced χ 2 for the magnitude distribution in each field. In the right-hand three panels we show the mean absolute fraction error in each field over the range 13 K 15. In the upper panel we show the fit using just the N-body model. In the middle panel we add an additional parametric bar component. In the lower panel add a second parametric component to fit the super-thin component at the bar end. White regions are those which were not fitted because they did not pass our completeness test. Table 1. Parameters found when fitting model densities to the magnitude distribution. The parameters and the method of fitting is described in section 5. Note that M bar differs from the physical bar mass give in Table 2 because of the Gaussian cutoffs. The bar mass of the N-body model is calculated by integrating the face-on surface density over all radii with the minor axis profile in the surface density subtracted. The thin and super-thin bar masses are the total mass in each component. The bar half lengths are as defined in Athanassoula & Misiriotis (2002) and all are measurement on the face on density map: L drop is the radius at which the ellipticity drops fastest, L m=2 is the radius at which the m = 2 component of the face on image drops below 20% of its maximum, and L prof is the radius beyond which the major and minor axis density profiles agree within 30%. Our threshold for L prof is larger than the 5% used by Athanassoula & Misiriotis (2002) which we found gave spuriously large bar lengths for the densities considered here. L mod is the point at which the difference between the major and minor axis face-on surface densities falls to below 1/e of a fitted exponential long bar surface density profile. It has the advantage that it is independent of the axisymmetric disk. This measurement is spurious for the pure N-body model since it does not occur within the solar radius. which it is clear that the resultant fit is greatly improved close to the plane in the region 10 • < l < 30 • . Note that the region near 5 • < l < 15 • and |b| ≈ 5 • is not yet well fitted. The fitted bar angle is α = 28.4 • in agreement with the bar angle in the bulge region found by Wegg & Gerhard (2013) The additional mass associated with this component is 8.8 × 10 9 M for our luminosity function calculated from the BASTI isochrones for a 10 Gyr population with a Kroupa (2001) IMF . 
If we also include the non-axisymmetric mass from the N-body model, calculated by integrating the face-on surface density over all radii with the face-on minor axis profile subtracted, then the total bar mass is 1.99 × 10 10 M . Note though that the mass derived here, which is constrained by the number of red clump stars, is degenerate with the IMF and population age. This is because the mass per red clump star varies in a similar manner to the mass-to-light ratio. Refitting changing from our fiducial Kroupa IMF to a Salpeter IMF (Salpeter 1955) increase all masses by a factor of 1.43, while keeping all other parameters the same. Changing from a 10 Gyr to a constant star formation rate would reduce the mass by a factor of 2.0. In this case since the luminosity function changes slightly the best fitting parameters change marginally from our fiducial values, however the physical properties of bar angle and length do not change significantly.Uncertainty in IMF and population age dominate the bar mass measured in this work and are larger than the differences between models or bar parameterisations. Two Component Fitting The adopted parametric density in equation (9) has a single constant scale height, however the results of section 4 suggest that the scale height is not constant becoming thinner towards the bar end. For this reason we have added a second component parameterised with the same functional form to represent the super-thin component. We construct the luminosity function of the super-thin component assuming a constant star formation rate, as opposed to the 10 Gyr age of the bulge and 180 pc scale height thin bar. As discussed in the previous section this choice reduces the mass-per-clump star compared to a simple 10 Gyr population. We initially allowed all parameters of the fit to vary, however we found that this resulted in an unsatisfactory fit. The higher number of stars in the fields nearer to the bulge region result in smaller Poisson errors and therefore higher weight placed on fitting these well, so that outside this region the fit is less good. In particular the thin bar component became too thick in order to fit the fields near the bulge, and the resultant vertical structure in plots similar to Figs. 8-11 was poorly represented. Instead, motivated by Fig. 9 we fixed the vertical scale height of the two components, finding that exponential scale heights of 200 pc and 40 pc produced a reasonable representation of the vertical structure. The resultant fit is considerably improved, particularly near the bar end and between 10 • < l < 20 • , as can be seen in the lower panels of Fig. 12. We also observe this model similarly to the data, in slices from above, and plot the result as the lower half of Fig. 5, finding it does a remarkably good job of reproducing the features in the data. We show in Fig. 14 Bar Angle In our fiducial model the fitted bar angle of the parametric long bar ranges from α = (29 − 30) • depending on the number of components (Table 1). Fitting the other N-body models described in Portail et al. (2015) with the same procedure gives angles in the range α = (28 − 31) • . Altering the age and metallically of the bar components while holding the bulge luminosity function fixed changes the fitted bar angle slightly. This is because the RCG luminosity then changes between the inner galaxy and the long bar region. 
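Because the fitted masses are anchored to the number of red clump stars, swapping the assumed IMF or star formation history simply rescales them by the change in the mass per clump star. The helper below uses only the two factors quoted above; any other choice of IMF or age would require redoing the population synthesis.

```python
def rescale_bar_mass(mass, imf="kroupa", sfh="10gyr"):
    """Rescale a bar mass measured by counting red clump stars when the
    assumed IMF or star formation history (and hence the mass per clump
    star) is changed, using only the factors quoted in the text."""
    factor = 1.0
    if imf == "salpeter":
        factor *= 1.43          # Salpeter (1955) instead of Kroupa (2001)
    if sfh == "constant":
        factor /= 2.0           # constant SFR instead of a 10 Gyr population
    return mass * factor

print(rescale_bar_mass(8.8e9, imf="salpeter"))   # ~1.26e10 solar masses
```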
Making the super-thin component super-solar metallicity ([z/H] = 0.17) increases the fitted angle of this component to 32.4 • , while changing the 200 pc component to have a constant star formation history increases its angle to 32.7 • . We therefore increase our range of possible long bar angles to α = (28 − 33) • to encompass this. This is consistent with alignment with the angle found in the central |l| < 10 • in Wegg & Gerhard (2013) of α = (27 ± 2) • which is our assumed N-body bulge angle, and with α = (29 − 32) • found by Cao et al. (2013) for a simpler parametric Dwek et al. (1995) bulge model. We regard the one component long bar model described in subsection 5.2 as a useful confirmation that the fitted bar angle is insensitive to the parametric model. This is a significantly simpler model that fits the data less well than the two component long bar described in subsection 5.3 but still recovers a consistent bar angle. We have performed an MCMC to estimate errors on both the model parameters, and the physical properties such as bar mass and length. The resultant statistical errors are extremely small, significantly smaller than the difference between fits with different models. We therefore consider the statistical errors to be negligible in comparison to systematic errors quoted above. We have also refitted starting from different initial conditions to verify that the fitted parameters such as the bar angle is not a local minimum, or strongly dependent on initialisation position. While the same minimum in χ 2 was not always reached by the minimisation procedure the difference was smaller than between models. Bar Length In external galaxies and N-body simulations the bar length is dependent on definition with many possible (e.g. Athanassoula & Misiriotis 2002). Even in N-body simulations with strong bars, where the full 6-dimensional phase space of every particle as a function of time is available with negligible error, the different definitions produce different bar lengths, typically at the level of ≈ 10% for strong bars. We therefore do not expect an unambiguous and definitive bar length for the Milky Way, particularly with the more challenging viewing geometry. Instead we estimate the bar length of our best fitting model densities using several definitions. These bar length measurements are summarised in Table 2. Specifically we use three bar length definitions found in Athanassoula & Misiriotis (2002) to give reasonable bar lengths of N-body models: (i) L drop , the point where the ellipticity of the face on profile drops fastest. (ii) L m=2 , the point where the m = 2 Fourier mode of the face on image drops to 20% of its maximum. (iii) L prof the point where the major and minor axis profiles agree within 30% (this a larger threshold than the 5% used by Athanassoula & Misiriotis 2002, which we found gave spurious results for our densities). We add one additional bar length measurement, L mod . We take the difference between the face-on major and minor axis surface density profiles along the bar and fit an exponential to the long bar region. We then define the bar length as the point at which the density falls to 1/e of the exponential profile. In the case of an analytic exponential bar with a Gaussian cutoff, like our parametric long bar functions, this corresponds to defining the bar length as R o + σ o . All these methods were applied to face-on images of our densities to which we added the density of the disk in the Besançon model. We show these face-on images in Fig. 
14 and the resultant bar half lengths are given in Table 2. All our stated bar lengths are the half length, defined as the distance from the galactic centre to the bar end. Before considering the bar length measurement we first return to the data near the bar end. In Fig. 15 we show histograms stacked in Galactic latitude as a function of longitude. We show both the positive and negative longitude sides to demonstrate the peak at positive longitudes is non-axisymmetric. To make the plot clearer we also subtract an exponential in µ K which can be thought of as representing the background of non-RCGs. The bar is clear and well localised to l < 26 • . At l > 30 • while non-axisymmetry still appears it is much less significant and fainter than would be expected for the bar. In the region 26 • < l < 30 • the non-axisymmetric excess weakens, broadens and becomes fainter. We therefore presume that the bar ends in this region, possibly transitioning into the spiral arms. If we convert these longitudes of the bar end to a bar length assuming that the bar lies at α ≈ 27 • and that projection effects are negligible we would recover a bar half length between 4.4 and 4.8 kpc. We produced a similar plot to Fig. 15 showing just the N-body model and it is clear that the bar ceases to be significant at too low longitude. In contrast plotting the one component long bar model it is clear that the bar extends beyond the data in longitude to where there is no non-axisymmetry in the data. For this reason we disregard the bar length measurements of these models in Table 2. Instead the two component model is a significantly better fit to the stacked data near the bar end in Fig. 15. We show it compared to the data together with variations in which the bar was artificially lengthened and shorted by adjusting the outer cutoff by 0.5 kpc. The model with an artificially shortened bar is insufficient particularly beyond l = 25 • . In contrast the model with an artificially lengthened bar predicts excessive non-axisymmetry beyond l = 30 • when the positive and negative latitudes have similar counts at the distance of the bar. Because the two component model appears to reasonably fit the stacked data in Fig. 15 we consider the measurements of this model to be our fiducial bar half length. These measurements lie in the range 4.73 − 5.23 kpc. We have repeated this process on the other N-body models finding that the variation between models is smaller than the variation between methods of measuring bar length. The one component bar length appears longer, however it is evident from Fig. 12 that this model fits poorly in the region beyond l > 30 • . Therefore taking the average and standard deviation of these measurements we consider our fiducial estimated bar half length for the Milky Way to be (5.0 ± 0.2) kpc. Continuity Between Box/Peanut Bulge and Long Bar Two lines of evidence in this work support that bar and bulge appear to be naturally connected: the angle between the Box/Peanut (B/P) Bulge and Long Bar is small, and the scale height along the bar decreases smoothly. We find in this work that the long bar has bar angle in the range α = (28 − 33) • consistent with recent determinations of angle found at |l| < 10 • in the B/P Bulge (e.g. Wegg & Gerhard 2013). We find this angle by fitting the magnitude distributions through Figure 15. Histograms of stars near the bar end. In black we show histograms of stars with positive longitudes and 0.15 • |b| < 1.35 • in one degree longitude intervals. 
The histograms of µ K (equation (1)) were converted to distance assuming all stars are red clump stars and an exponential in µ K was subtracted for visualisation, which can be thought of as representing non-red clump stars. In orange we show the same procedure applied to the data at negative longitues to highlight the non-axisymmetric features. In red we show our best fitting density model from section 5, while in blue and green we show the same model but with the bar length reduced and increased by 0.5 kpc respectively. the parametric density functions in section 5. It was already clear however directly from the data in Fig. 5 that the bar angle between 10 • < l < 20 • was close to the α ≈ 27 • found in the B/P Bulge. The alignment of the B/P Bulge and long bar in this work is in contrast to some previous claims of misalignment (Lopez-Corredoira et al. 2006;Cabrera-Lavers et al. 2007. Benjamin et al. (2005) found a large angle of the long bar of α = (44 ± 10) • using GLIMPSE data. However the GLIMPSE long bar data was subsequently analysed finding α = (38 ± 6) • (Zasowski 2012, or α = 35 • , Zasowski et al. 2012 due to a fainter assumed RCG absolute magnitude. This results in the derived distances to the long bar being reduced and therefore the bar angle decreasing for the same Sun-Galactic center distance, R 0 . Note that because we use data that extends in longitude to the bulge region and allow the red clump absolute magnitude to vary in our fitting we are insensitive to this degeneracy in this work. Some misalignment claims appear to be a result of using the endpoints to derive the bar angle (e.g. Gonzalez-Fernandez et al. 2012;de Amôres et al. 2013), however the projection effects can be extreme especially for the far end of the bar. Large bar angles from fitting the distance of clump giants (e.g. Cabrera-Lavers et al. 2007 are more difficult to reconcile with this work and may be related to the very different selection criteria. In those works RCGs are selected from the colour-magnitude diagram together with an extinction model which predicts the reddening as a function of distance modulus, while in this work they are statistically identified as an excess above the smooth background of non-RCGs. In addition we find that the scale height shown in Fig. 9 appears to smoothly transition between the B/P bulge and the long bar which also suggests that both are part of the same connected structure. This transition is similar to N-body models. To demonstrate this we show in Fig. 16 the scale height of one of the initial barred N-body models. The transition in vertical structure from a short central scale height, to a large scale height through the 3D B/P region, to a short scale height in the long bar can be seen to arise naturally in the N-body model prior to fitting the bulge data. Given the near alignment it seems natural that the long bar is the Figure 16. The scale height of the N-body model M85 from Portail et al. (2015) as a function of distance along the major axis of the bar. We use a sech 2 scale height since this is a better fit to the N-body vertical structure than an exponential. The model shown is prior to fitting the bulge data to demonstrate that the transition from short central scale height, to large scale height in the B/P region, to short scale height in the long bar region arises naturally. in-plane extension of a vertically extended inner part of the bar, the Box/Peanut bulge. 
This is similar to the structures found in N-body simulations where the buckled three-dimensional boxy/peanut bulge is shorter than the bar (e.g. Athanassoula 2005;Martinez-Valpuesta et al. 2006), as well as in external galaxies (Erwin & Debattista 2013). This situation was previously suggested in the Milky Way by Martinez-Valpuesta & Gerhard (2011) and Romero-Gómez et al. (2011) despite the possible misalignment. In the lower panel of Fig. 14 we show the side-on projection of the best fitting model. These side projections where the three dimensional peanut has a lesser extent than the bar is similar to the side projections in Athanassoula (2005) and the observations of Bureau et al. (2006). Thin and Super-thin Bar Component From fits to the vertical profile of the RCGs we find evidence for two components associated with the long bar of the Milky Way. A component with a scale height of ≈ 180 pc which we term the 'thin bar' since it would appear to be formed from the counterpart of old solar neighbourhood stars, and a second 'super-thin bar' component with scale height ≈ 45 pc. The density better fitting a broken exponential alone would be weak evidence for the presence of two distinct components: As demonstrated by the controversial thin/thick disk discussion in the solar neighbourhood it is possible for a continuum stellar distribution to mimic a double exponential in vertical profile (e.g. Bovy et al. 2012). However the relative contributions of the thin and super-thin components change along the bar. The thin component decreases outwards approximately exponentially with a scale length similar to the Milky Way disk, while the super-thin component increases outwards towards the bar end. This spacial mismatch strengthens the argument for two distinct components. Additional abundance and age measurements would strengthen the argument further. The short scale height for the super-thin bar is similar to the 60-80 pc found in K s -band imaging of NGC 891 by Schechtman-Rook & Bershady (2013). It is slightly lower than the expected value in the solar neighbourhood. Young stars locally have a scale height of ∼ 60 pc (e.g. Joshi 2007, from measurements of OB stars and young open clusters) and the vertical disk heating in the last 2 Gyr is insignificant Holmberg et al. (2007). If we extrapolate the local thin disk properties inwards we can predict the resultant velocity dispersion of the components. Assuming that the thin bar's scale height is set by its self gravity then the vertical dispersion, σ z , will vary as σ 2 z ∝ Σh where Σ is the surface density and h the scale height. Assuming that the surface density Σ is exponential Σ ∝ exp (R/R d ) then the surface density at Galactocentric radius 4 kpc is a factor 7.4 higher than in the solar neighbourhood for R d = 2.15 kpc (Bovy & Rix 2013). Since the local thin disk scale height (e.g. 225 pc: Veltz et al. 2008, 300 pc Jurić et al. 2008 is slightly larger than the thin bar scale height the resultant dispersion would then be a factor ∼ √ 7.4 × 180/225 = 2.2 larger, or approximately σ z ≈ 46 km s −1 assuming a local vertical velocity dispersion of 21 km s −1 (Binney et al. 2014). Estimating the velocity dispersion of the super-thin component is less certain since it depends on the extent to which its vertical structure is governed by self-gravity. We consider the two extremes of completely self-gravitating and non-self-gravitating and expect these to bracket the true structure. 
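The inward extrapolation of the thin-disc vertical dispersion can be checked numerically. The sketch below assumes σ_z² ∝ Σh for a self-gravitating layer and an exponential disc with R_d = 2.15 kpc; because the adopted local thin-disc scale height (roughly 225-300 pc) enters directly, the scaling factor comes out between about 2.1 and 2.4, bracketing the factor of ≈ 2.2 and σ_z ≈ 46 km s⁻¹ quoted above for the thin bar.

```python
import numpy as np

def dispersion_scaling(R=4.0, R0=8.3, Rd=2.15, h_bar=0.180,
                       h_local=0.225, sigma_local=21.0):
    """Vertical dispersion of a self-gravitating layer scaled inwards,
    using sigma_z^2 proportional to Sigma * h with Sigma ~ exp(-R/Rd).
    Lengths in kpc, dispersions in km/s."""
    surface_density_ratio = np.exp((R0 - R) / Rd)     # ~7.4 for these values
    factor = np.sqrt(surface_density_ratio * h_bar / h_local)
    return factor, factor * sigma_local

for h_local in (0.225, 0.300):
    f, s = dispersion_scaling(h_local=h_local)
    print(h_local, round(f, 2), round(s, 1))
```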
Assuming that the super-thin component is self-gravitating, a similar extrapolation gives σ_z ≈ 30 km s⁻¹, assuming locally that stars younger than 2 Gyr have σ_z ≈ 10 km s⁻¹ (Holmberg et al. 2007) and a scale height of ∼ 60 pc (Joshi 2007). The dispersion of a non-self-gravitating structure would be lower. Assuming that the potential is near harmonic, Φ ∝ z², and that the vertical frequency is set by the thin component, then the vertical frequency will be ν_z ∝ σ_z/h and hence 2.7 times higher than in the solar neighbourhood. The resultant dispersion for the super-thin component would be σ_z ∝ ν_z h and therefore σ_z ∼ 2.7 × 50 pc/60 pc × 10 km s⁻¹ ≈ 22 km s⁻¹. The origin of the super-thin component is unclear. The small scale height means the stars cannot have experienced much vertical heating and therefore should be younger than the thin component. To have formed RCGs the stars must still be ≳ 500 Myr old, but galaxies with ongoing star formation have a strong bias to younger RCGs with a peak age of ∼ 1 Gyr (Girardi & Salaris 2001; Salaris & Girardi 2002). They could be related to star formation towards the bar end (Phillips 1996). Because their lifetime is longer than the orbital timescale, the interpretation would be that they are stars on bar-following orbits that formed towards the bar end and spend more time towards their apocenter near the bar end. Even without any star formation, since the bar grows with time (e.g. Martinez-Valpuesta et al. 2006), the bar will have grown into the star-forming disk. These recently captured bar stars would then spend more of their orbital period at their apocenter near the bar end and could therefore have a similar density profile to the super-thin component. Further dynamical and chemical information is needed to distinguish between these possible scenarios.

Figure 17. The line-of-nodes of the disk is horizontal. As described in Erwin & Debattista (2013), this is an orientation where both the boxy and barred zones are visible, with the angle of the boxy zone closer to the line-of-nodes than the projected angle of the long bar. The full length of the bar and boxy zone were measured from this image visually in a manner similar to Erwin & Debattista (2013) and are marked in blue and red respectively.

Bar Length

Our estimated bar half length of (5.0 ± 0.2) kpc is larger than most previous estimates (e.g. 4.5 kpc by Cabrera-Lavers et al. 2008). This is partly because our bar angle is less than most previous estimates. If one assumes that the super-thin component is not part of the bar, the same bar length estimators applied only to the thin component would lead to an only slightly shorter bar length of (4.6 ± 0.3) kpc. Without dynamical information the exact nature of this region near the bar end and the possible transition to spiral arms or ring structures is difficult to determine. A further uncertainty is that in estimating bar lengths we use the face-on view of our model together with the Besançon model disk. If the Besançon disk density is not accurate in this region it would slightly impact our estimated bar length. For example, increasing the density by a factor of 2 reduces the bar length from (5.0 ± 0.2) kpc to (4.6 ± 0.2) kpc. This is why L_mod, the point where the difference between the major and minor axis profiles of the face-on surface density drops to 1/e of the otherwise smooth and slower decline along the bar, is useful. Since it considers only the non-axisymmetric part of the density it is independent of details of the disk.
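Two of the bar-length definitions used in this work, L_prof and L_mod, are simple to state operationally. The sketch below assumes the face-on major- and minor-axis surface density profiles are available on a common radial grid; the radial range used to fit the exponential long-bar excess (here 2-4 kpc) is an illustrative assumption, and the ellipse fitting needed for L_drop and the Fourier analysis for L_m=2 are not shown.

```python
import numpy as np

def l_prof(r, sigma_major, sigma_minor, threshold=0.3):
    """L_prof: radius beyond which the major- and minor-axis surface
    density profiles agree to within `threshold` (30% in the text)."""
    frac_diff = np.abs(sigma_major - sigma_minor) / sigma_minor
    disagree = np.where(frac_diff >= threshold)[0]
    if len(disagree) == 0 or disagree[-1] + 1 >= len(r):
        return np.nan
    return r[disagree[-1] + 1]

def l_mod(r, sigma_major, sigma_minor, fit_range=(2.0, 4.0)):
    """L_mod: fit an exponential to the non-axisymmetric excess
    (major minus minor) over the long-bar region, then return the
    radius where the measured excess falls to 1/e of that fit."""
    excess = sigma_major - sigma_minor
    in_fit = (r > fit_range[0]) & (r < fit_range[1]) & (excess > 0)
    slope, intercept = np.polyfit(r[in_fit], np.log(excess[in_fit]), 1)
    fitted = np.exp(intercept + slope * r)
    below = np.where((r > fit_range[1]) & (excess < fitted / np.e))[0]
    return r[below[0]] if len(below) else np.nan
```

Because L_mod uses only the major-minus-minor difference, uncertainty in the axisymmetric Besançon disk largely drops out, which is the advantage noted above.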
A firm lower limit to the bar length can be determined visually from Fig. 5: it is clear from the slices at b > 2° that the bar extends to l ≳ 22°, and therefore given a bar angle α = 30° the bar cannot be shorter than ≈ 4.0 kpc. External galaxies show a strong correlation between the size of their boxy bulge and their in-plane long bar, which is seen as 'spurs' extending from the boxy region. Erwin & Debattista (2013) find the ratio of the size of the boxy 3D bulge to the thinner in-plane bar to be 0.42 ± 0.07. This was estimated by measuring a sample of moderately inclined external galaxies. We therefore inclined our model density and measured the boxy zone and spurs in a similar manner. In Fig. 17 we place an external observer of the Milky Way inclined at 60° with the bar at ∆PA = 45° to their line of sight. We find they would measure a boxy bulge size of 2R_box = 2.3 kpc and a bar length 2L_bar = 8.8 kpc, and therefore a ratio of boxy bulge to bar length of 0.26. This is towards the lower end of the external galaxies measured by Erwin & Debattista (2013), at the edge of the observed range. The bar length measured in this work has implications for the pattern speed of the bar. The dimensionless bar rotation parameter is R ≡ R_CR/L_bar, where R_CR is the corotation radius. Since the bar cannot extend beyond corotation, R ≥ 1, and our measurement of bar length implies corotation must lie outside (5.0 ± 0.2) kpc. In turn this limits the pattern speed to be lower than (45 ± 2) km s⁻¹ kpc⁻¹ for a flat rotation curve near the bar end with circular velocity v_c = 218 km s⁻¹. This is in some tension with the pattern speed derived from the interpretation of the Hercules stream as being due to the outer Lindblad resonance of the bar. Antoja et al. (2014) find a pattern speed of (48.2 ± 0.5) km s⁻¹ kpc⁻¹ if the bar lies at α = 29° with R_0 = 8.3 kpc and circular velocity in the solar neighbourhood of v_c = 220 km s⁻¹. Reconciling these results requires either an extremely fast bar with R ≈ 1, where the bar ends at corotation, or that the Hercules stream has a different origin than the outer Lindblad resonance. In contrast, by constructing dynamical models of the B/P bulge, Portail et al. (2015) found 25 − 30 km s⁻¹ kpc⁻¹. Combining this with our bar length and its error gives R in the range 1.4 − 1.9, and therefore the Milky Way would be a slow rotator (R > 1.4, Rautiainen et al. 2008). In the future we plan to combine the constraints from this work with kinematic data in the long bar region and perform made-to-measure modelling in a manner similar to Portail et al. (2015). Constructing self-consistent models of the long bar with added kinematic data will better indicate which stars are on bar-following orbits, and thus constrain the dynamical structure near the bar end and the bar length. In addition the Gaia mission (de Bruijne 2012) is capable of some elucidation of the long bar. Extinction will limit the effectiveness of its broadband visual measurements near the Galactic plane. However the result that the long bar extends out of the plane means that at |b| ≳ 1° it should be possible to measure reliable parallaxes and proper motions for a sufficient number of bright RGB stars. In addition Gaia will allow for better characterisation of RCGs as standard candles, and therefore further their use in studying Galactic structure in the higher extinction regions, where NIR measurements are required.
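The corotation argument above is a one-line calculation. A small sketch, assuming a flat rotation curve with the quoted v_c = 218 km s⁻¹; the printed numbers are only a consistency check of the limits discussed in the text.

```python
def pattern_speed_limit(v_c=218.0, l_bar=5.0):
    """Upper limit on the bar pattern speed (km/s/kpc) if corotation,
    R_CR = v_c / Omega_p for a flat rotation curve, lies beyond the bar."""
    return v_c / l_bar

def rotation_parameter(omega_p, v_c=218.0, l_bar=5.0):
    """Dimensionless R = R_CR / L_bar for a given pattern speed."""
    return (v_c / omega_p) / l_bar

print(pattern_speed_limit())                                     # ~43.6 km/s/kpc
print([round(rotation_parameter(o), 2) for o in (25.0, 30.0)])   # ~1.74 and ~1.45
```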
In particular, better characterisation of the population effects on RCGs will be of importance in the future since, for example, our largest uncertainty in bar angle is due to the uncertainty in the difference in RCG magnitude between bar components. CONCLUSION We have investigated the structure of the Milky Way's bar outside the bulge, termed the long bar, using red clump stars as a standard candle and tracer of the underlying density. We have combined data from UKIDSS, VVV, 2MASS and GLIMPSE and have constructed magnitude distributions in many fields covering the central |l| < 40 • , |b| < 9 • . We concentrated on the K s -band and corrected for extinction star-by-star. On the basis both of fitting the clump stars in each field individually, and of constructing parametric density models that simultaneously fit all fields we reach the following key conclusions: • The bar extends to l ∼ 25 • at |b| ∼ 5 • from the Galactic plane, and to l ∼ 30 • at lower latitudes. • The long bar of the Milky Way lies at an angle of α = (28 − 33) • . This is consistent with being aligned to the Milky Way's Bulge (e.g. Wegg & Gerhard 2013;Cao et al. 2013). We find this angle from the parametric density fits, but it is also visually evident from the raw data in Fig. 5. • The overall scale height of the bulge transitions from a short central scale height, to large scale height in the B/P region, to short scale height in the long bar region. This transition arrises naturally in N-body models of barred galaxies, and together with the alignment of the long bar and B/P region indicates that the bar and bulge are part of a single innately connected structure. • We find evidence for two scale height components. We find a ≈ 180 pc 'thin bar' component, which decreases in density outwards, and which we interpret as the barred counterpart of the solar neighbourhood thin disk. In addition there is a ≈ 45 pc 'super-thin bar' component whose density increases outwards along the bar. This component is likely to be related to younger stars (∼ 1 Gyr) towards the bar ends. • The offset in b of the bar is consistent with symmetry about the physical Galactic mid-plane under the assumption that the sun lies 25 pc above the Galactic plane and Sgr A*lies in the physical Galactic plane. The vertical structure is also consistent with the bar lying in the physical Galactic mid-plane to 5 pc. • We find a bar half length of (5.0 ± 0.2) kpc measured by applying commonly used bar length definitions to our best fitting parametric model. This agrees with simple estimates directly from the data near the bar end (Fig. 15). The bar length is still somewhat ambiguous since it depends on the interpretation of the nature of the super-thin bar, and excluding this results in a bar length of (4.6 ± 0.3) kpc. It is also visually clear from the data in Fig. 5 that the bar appears straight extending to at least l ≈ 22 • and therefore cannot be shorter than ≈ 4.0 kpc as a lower limit. • Our estimated total bar mass in our fiducial parametric model is 1.8 × 10 10 M . This arises from the number of clump stars in the bar together with a 'mass-to-clump' ratio arising from our assumed IMF and component ages. Varying these assumptions dominates the error. For example changing from our assumed Kroupa (2001) IMF to a Salpeter (1955) IMF increases the long bar mass by a factor 1.43, while changing from an old 10 Gyr stellar population to a constant star formation rate reduces the mass by a factor of 2.0. 
This paper has been typeset from a T E X/ L A T E X file prepared by the author. APPENDIX A: TRANSFORMATIONS TO 2MASS PHOTOMETRIC SYSTEM A1 Transformation from UKIDSS The UKIDSS survey is performed in the WFCAM photometric system. From DR2 onwards the zero points were calibrated from 2MASS but the photometric system is slightly different (equations 4-8 of Hodgkin et al. 2009). The most straightforward method of transforming WFCAM magnitudes to 2MASS would be to use the inverse of the calibration transformations. However these rely on the J-band magnitude and the J-band would then be the limiting band in high extinction regions, having almost twice as much extinction as the H-band. Instead for the H and K bands we measure the transformation from the H − K color for giants, which can then be applied directly in equation 1. To derive the transformation we cross match 2MASS and UKIDSS sources within 1 and 1 mag with 2MASS photometric quality classified as 'A' or 'B' in the J, H and K bands. A cut was applied in color-color space of UKIDSS to remove nearby dwarfs and select mostly giants: 0.25 < (J − K) − 2.5(H − K) < 0.55. In order to avoid saturation in UKIDSS and Eddington bias (over representation of the more numerous faint objects due to measurement error) in 2MASS we also select only sources with 11 < K < 13.5, 12.5 < H < 15 and 12 < J < 15.5. With this sample of cross-matched sources we then derive the photometric transformations. We divide the sample into 20 color bins each with equal numbers of stars and in each calculate the difference in magnitude between 2MASS and UKIDSS using an iterative sigma-clipped mean at the 2.5 sigma level. Our derived transformation is then the linear regression of these points, neglecting the first and last bins to reduce outliers. This process is shown applied to the H and K bands in figure A1, and the J band in A2. where subscript 2 refers to the 2MASS system and W the WFCAM system used in UKIDSS. A2 VVV Transformation The VVV survey is performed in the VISTA/VIRCAM photometric system. As for the WFCAM system this is tied to, but different from the 2MASS system 1 . In principle the same method as for the WFCAM system could also be used for VISTA. However the VVV system is significantly closer to the 2MASS system making this less necessary. There is however a field-to-field scatter in high extinction regions where there can be variations in zero point of ∼ 0.1 mag. We instead take a different approach and as in and Wegg & Gerhard (2013) we re-estimate the zero points for each field by cross matching bright but unsaturated VVV stars with 2MASS. APPENDIX B: GLIMPSE DATA In this section we show the results of GLIMPSE 3.6µm and 4.5µm data with a similar analysis as applied to the K-band in the main text. Although the GLIMPSE data does not have the wide coverage of the K s -band data, the reduced extinction in GLIMPSE makes it useful to corroborate the results of the NIR data. In Fig. B1 we plot the surface density of stars over a narrow range of extinction corrected magnitudes, the equivalent to Fig. 1. In Fig. B2 we show the fitted parameters of the statistically identified red clump stars: their number density N rc , the distance d rc calculated from µ rc , and their dispersion σ rc . This figure is the equivalent to Fig. 4 in the K sband. Completeness limits the reliable identification of RCG stars to l 8 • . In Fig. B3 we show the equivalent top down view of the raw data as was made for the K s -band in Fig. 5. 
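Returning to the WFCAM-to-2MASS transformation of Appendix A1, the procedure amounts to a binned, sigma-clipped straight-line fit of the 2MASS minus WFCAM magnitude difference against colour. The sketch below follows that recipe in outline (20 equal-count colour bins, an iterative 2.5σ clipped mean per bin, and a regression that ignores the first and last bins); the actual coefficients derived in the paper and shown in its figures A1 and A2 are not reproduced here.

```python
import numpy as np

def sigma_clipped_mean(values, nsigma=2.5, n_iter=5):
    """Iteratively reject points more than nsigma from the mean."""
    v = np.asarray(values, dtype=float)
    for _ in range(n_iter):
        m, s = v.mean(), v.std()
        if s == 0:
            break
        v = v[np.abs(v - m) < nsigma * s]
    return v.mean()

def colour_transformation(colour, delta_mag, n_bins=20):
    """Linear transformation delta_mag ~ intercept + slope * colour for
    giants, built from equal-count colour bins with sigma-clipped means.
    colour: e.g. (H - K) in the WFCAM system; delta_mag: m_2MASS - m_WFCAM."""
    order = np.argsort(colour)
    colour = np.asarray(colour, dtype=float)[order]
    delta_mag = np.asarray(delta_mag, dtype=float)[order]
    groups = np.array_split(np.arange(len(colour)), n_bins)
    bin_colour = np.array([colour[idx].mean() for idx in groups])
    bin_delta = np.array([sigma_clipped_mean(delta_mag[idx]) for idx in groups])
    # neglect the first and last bins to reduce the effect of outliers
    slope, intercept = np.polyfit(bin_colour[1:-1], bin_delta[1:-1], 1)
    return intercept, slope
```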
As for the K s -band a bar angle of α = 27 • appears close to the data while 45 • does not. Near the dispersion of the fitted RCG is larger than the K s -band. This is expected because the RCG fits lie close to the magnitude limit of GLIMPSE in this more crowded and higher extinction area. Figure B1. The surface density of stars in the GLIMPSE [3.6µ]-band over the extinction corrected magnitude range 10.5 < [3.6µ] 0 < 11. Asymmetric number counts in l close to the plane demonstrate non-axisymmetry. Extinction is corrected using the RJCE method i.e. the H − [4.5µ] colour excess as in equation 2 and data outside the colour bar range are plotted at its limit. Grey areas are those not covered by the GLIMPSE survey. The equivalent plot for the K-band is shown in Fig. 1. 3.6 µm 4.5 µm Figure B2. As Fig. 4 but for the GLIMPSE 3.6µm (left hand panels) and 4.5µm data (right hand panels). 4.5 µm Figure B3. As Fig. 5 for the long bar of the Milky Way viewed from above using the GLIMPSE data in the 3.6µm band (above) and 4.5µm band (below).
Comparison of the Fecal Microbiota of Healthy Horses and Horses with Colitis by High Throughput Sequencing of the V3-V5 Region of the 16S rRNA Gene The intestinal tract houses one of the richest and most complex microbial populations on the planet, and plays a critical role in health and a wide range of diseases. Limited studies using new sequencing technologies in horses are available. The objective of this study was to characterize the fecal microbiome of healthy horses and to compare the fecal microbiome of healthy horses to that of horses with undifferentiated colitis. A total of 195,748 sequences obtained from 6 healthy horses and 10 horses affected by undifferentiated colitis were analyzed. Firmicutes predominated (68%) among healthy horses followed by Bacteroidetes (14%) and Proteobacteria (10%). In contrast, Bacteroidetes (40%) was the most abundant phylum among horses with colitis, followed by Firmicutes (30%) and Proteobacteria (18%). Healthy horses had a significantly higher relative abundance of Actinobacteria and Spirochaetes while horses with colitis had significantly more Fusobacteria. Members of the Clostridia class were more abundant in healthy horses. Members of the Lachnospiraceae family were the most frequently shared among healthy individuals. The species richness reported here indicates the complexity of the equine intestinal microbiome. The predominance of Clostridia demonstrates the importance of this group of bacteria in healthy horses. The marked differences in the microbiome between healthy horses and horses with colitis indicate that colitis may be a disease of gut dysbiosis, rather than one that occurs simply through overgrowth of an individual pathogen. Introduction The intestinal tract contains one of the most dense, dynamic and complex bacterial populations (microbiomes) of any environment on the planet. It has been called the '2 nd genome' in testament to its size and complexity. In humans, it is believed that the intestinal microbiome contains up to 1000 different species and approximately 10 12 bacteria/gram of feces [1]. The large intestine of the horse is an anaerobic fermentative chamber where fibrolytic bacteria produce short chain fatty acids that account for most of the horse's energy requirements [2,3]. A properly functioning intestinal tract and microbiome is critical for maintenance of normal health. The homeostatic balance in the equine intestinal microbiome is very sensitive to factors like gastrointestinal disease and dietary change, which may lead to catastrophic consequences, even culminating in death [4,5]. Indeed, diseases affecting the gastro-intestinal system are the main cause of mortality in this species [4]. Yet, despite the clear importance of the intestinal microbiome, our understanding of what constitutes 'normal' and 'abnormal' is to date very limited. Colitis in horses can be associated with a variety of infectious agents such as Clostridium difficile, Salmonella spp., Clostridium perfringens and Neorickettsia risticii [5,6]. In most cases, the etiologic agent(s) remain(s) undetermined; however, disruption of the normal microbiome is likely a key factor in most cases of colitis. Accordingly, characterization of the equine intestinal microbiome is critical, since a good understanding of the 'normal' intestinal microbiome is needed for interpretation of 'abnormal'. Most investigations of the equine microbiome have typically involved bacterial culture of feces or intestinal contents. 
However, culture based methods only allow for superficial assessment of the components of the microbiome, which is a significant limitation, as a large component of the microbiome is thought to consist of unknown or unculturable microorganisms [7,8]. Therefore, molecular approaches are required in order to analyze bacterial diversity in fecal samples. The development of next generation sequencing has led to a revolution in characterization of complex microbial populations, and opened new doors to the understanding of disease pathophysiology and to the development of new treatment approaches. The objectives of this study were to characterize the fecal microbiome of healthy horses and compare to that of horses with undifferentiated colitis. Metrics The total number of reads, number of base pairs and the mean length of the reads obtained from the original fasta file of each fecal sample before and after quality control filters are presented in Table 1. The total number of reads after pyrosequencing noise, and chimera removal are shown in Table 2, along with kingdomlevel identification. A total of 135,803 reads were classified as Bacteria and therefore used for calculation of relative abundance. The rarefaction curves generated by MOTHUR plotting the number of reads by the number of operational taxonomic units (OTUs) indicates that using 4712 reads per sample (the minimum number of sequences passing all quality control measures across the samples) for the final analysis was adequate since increasing the number of reads beyond that value had minimal impact on number of OTUs ( Figure 1). Relative Abundances Bacteria phyla representing more than 1% of total reads are presented in Table 3. In healthy horses, Firmicutes predominated (68.1%) followed by Bacteroidetes (14.2%) and Proteobacteria (10.2%). Interestingly, the high overall abundance of Proteobacteria in healthy horses (Figure 2a) was due to predominantly to horses 1 and 2, who had very high relative abundances of this phylum. Acinetobacter was the most common genus in those horses, while Lysinibacillus, Carnobacterium (2 horses) and Bacillus were the predominant genera in the other healthy horses. Bacteroidetes was the most prevalent phylum among horses with colitis accounting for 40.0% of reads, followed by Firmicutes (30.3%) and Proteobacteria (18.7%). Clostridium, Enterococcus, Prevotella (2 horses) Coptotermes, Porphyromonas (2 horses), Pseudomonas (2 horses) and Fusobacterium were the most common genera. Data variation within each phylum in the two groups of horses is presented in Figure 2b. The abundance of Firmicutes or Bacteroidetes was not statistically different between the groups (P = 0.086 and P = 0.091, respectively). However, the abundance of Actinobacteria and Spirochaetes was significantly higher among healthy horses (P = 0.002) and abundance of Fusobacteria was higher among horses with colitis (P = 0.009). Fusobacterium necrophorum, F. nucleatum and F. equinum were the most common Fusobacteria in horses with colitis, with significantly greater abundance of F. necrophorum and F. nucleatum compared to normal horses (P = 0.015 and, P = 0.028, respectively). Clostridia was the only class significantly different between groups (P = 0.019), with greater abundance in healthy horses. Similarly, the abundance of the order Clostridiales was significantly higher in healthy horses (P = 0.018). 
Several families accounted for this difference including Heliobacteriaceae (P = 0.0001), Lachnospiraceae (P = 0.003), Eubacteriaceae (P = 0.008), Peptococcaceae (P = 0.012), Clostridiaceae (P = 0.035) and Ruminococcaceae (P = 0.044). Among Clostridiaceae, Trepidimicrobium (P,0.0001) and Clostridium (P = 0.039) were the genera more frequently found in healthy horses (11% of sequences) when compared to horses with colitis (5.5%). The Order Lactobacillales, which comprises the lactic acid bacteria, was not significantly different between groups (P = 0.990). The genus Lactobacillus was found more frequently in horses with colitis, however, this difference was not statistically significant (P = 0.258). Species richness is presented in Table 3. Species diversity assessed by the inverse Simpson index and confidence intervals for the OTUs and phylotypes are presented in Tables 4 and 5, respectively. Comparison between the groups was not statistically different for either the OTU (P = 0.658) or the phylotype (P = 0.194) approaches. Population Analysis -OTU Approach The total number of sequences, coverage, number of OTUs and inverse Simpson index with confidence intervals for each fecal sample are presented in Table 4. The Phylogenetic trees generated using the Yue & Clayton measure and Jaccard index are presented in Figures 3a and 3b, respectively. Results of the Parsimony test obtained after phylogenetic analysis were significantly different for both the Yue & Clayton measure (P = 0.035) and for the Jaccard index (P,0.001), ignoring the branch length, indicating that the structures of the bacterial communities from both groups were different. When the branch length was considered, significantly different structures were still present between the two groups using the weighted UniFrac for the Yue & Clayton measure (P,0.001) and the Jaccard index (P,0.001) and also using the unweighted UniFrac for the Yue & Clayton measure (P = 0.014) and Jaccard index (P = 0.002), demonstrating that the groups were significantly different, regardless of the test used for comparison. Figure 4a and 4b are the graphic representation of the PCoA analysis of each sample for the Yue & Clayton measure and Jaccard index, respectively. Figures 4c and 4d represent the NMDS analysis for the Yue & Clayton measure and Jaccard index, respectively. The spatial separation between centers of the clouds from the two groups in the NMDS plot was statistically different when compared by the AMOVA test (P,0.001). We found 1159 OTUs significantly different between the two groups using the Metastats. In an attempt to identify a core microbiome present in healthy horses, the OTUs shared among Healthy 3, 4, 5 and 6 was investigated. Since Healthy 1 and 2 were residing in a Teaching Hospital and had different fecal microbiomes when compared to samples originated from regular stables, these two horses were not included in this analysis. Overall, 1620 different OTUs (richness) were found in Healthy 3, 4, 5 and 6, of which, only 123 OTUs were shared between them and only 6 were present at least 25 times per horse. The most abundant OTU shared between those animals was classified as Roseburia sp, a member of the Lachnospiraceae family. From the remaining shared OTUs, four were unclassified bacteria from the Lachnospiraceae family and one was unclassified bacterium at the phylum level. 
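The inverse Simpson index used above to summarise diversity, and reported with confidence intervals in Tables 4 and 5, is straightforward to compute from the per-sample OTU (or phylotype) counts. A minimal sketch follows; the confidence intervals themselves would additionally require the resampling implemented in MOTHUR, which is not shown, and the example counts are purely illustrative.

```python
import numpy as np

def inverse_simpson(otu_counts):
    """Inverse Simpson diversity 1 / sum(p_i^2), where p_i is the
    proportion of reads in OTU i; higher values mean greater diversity."""
    counts = np.asarray(otu_counts, dtype=float)
    p = counts / counts.sum()
    return 1.0 / np.sum(p ** 2)

print(inverse_simpson([10, 10, 10, 10]))   # 4.0: four equally abundant OTUs
print(inverse_simpson([97, 1, 1, 1]))      # ~1.06: dominated by a single OTU
```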
Population Analysis -Phylotype Approach The total number of sequences, coverage, number of OTUs and inverse Simpson index with confidence intervals for each fecal sample are presented on Table 5. The Phylogenetic trees generated by the MOTHUR using the Yue & Clayton measure and Jaccard index are presented in Figures 5a and 5b, respectively. When the Parsimony test was applied to compare the structure of the bacterial communities from healthy horses and horses with colitis obtained with the phylogenetic analysis, statistically significant differences were identified for both the Yue & Clayton measure (P = 0.004) and the Jaccard index (P,0.001) ignoring the branch length. When the branch length was considered, significantly different structures were identified between the two groups using the weighted UniFrac for the Yue & Clayton measure (P,0.001) and the Jaccard index (P,0.001), and also using the unweighted UniFrac for the Yue & Clayton measure (P = 0.006) and Jaccard index (P,0.001). The spatial separation between the centers of the clouds from the two groups in the NMDS was statistically different when compared by the AMOVA test for the Yue & Clayton measure (P = 0.002) and the Jaccard index (P,0.001). One hundred three of 180 OTUs were significantly different between the two groups using the Metastats program. Selected Species-level Identifications While the study was not designed to determine the etiology of diarrhea, some notable species-level identification was investigated. Sequences with more than 98% identify with C. difficile were present in feces of Colitis 1, 3, 5, 9 and 10 and Healthy 5. Sequences consistent with Clostridium perfringens were detected in the feces of Colitis 4 and 8. Clostridium sordellii was present in feces from Colitis 10 only. No Escherichia coli sequences were present in feces of any of the healthy horses; however, the organism was found in eight of the were not identified. Sequences consistent with Shigella boydii, S. flexneri or S. dysenteriae were present in Colitis 6, 7 and 9. In contrast, no sequences consistent with Shigella spp. were present in feces of healthy horses using the proposed cut-off values. Discussion Our results characterize the fecal microbiome of six healthy horses by high throughput sequencing technology. Firmicutes was found to be the major bacterium phylum populating the distal intestine of healthy horses, which is consistent with a recent smaller metagenomic study of feces [16]. The predominance of Firmicutes may be related to the anatomical physiology and feeding habits of this species, which ingests mainly insoluble fiber and uses the cecum and large colon as the main sites for fermentation. In fact, significant bacterial changes have been reported in dogs after supplementation with dietary fiber, which led to an increase in Firmicutes and decreased Fusobacteria [17]. In contrast, Willing et al. [18] compared the bacterial component of feces from horses submitted to two different diets and observed that horses receiving supplementation with concentrate had 10 times more lactic acid producing bacteria than horses receiving a forage-only diet. In addition, almost 50% of sequences from feces of horses receiving a forage-only diet were classified as Bacteroidetes and 46% were Firmicutes, while 27% of the sequences from horses receiving concentrate were Bacteriodetes and 73% were Firmicutes. However, that study only involved evaluation of 67 sequences, greatly limiting the conclusions that can be made. 
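The caveat about a 67-sequence survey can be made quantitative with a simple binomial error estimate. This is a back-of-the-envelope illustration, not a calculation taken from the cited study: with so few sequences, a phylum observed at roughly 50% relative abundance carries an approximate 95% confidence interval of about ±12 percentage points, compared with about ±1.4 points at the sequencing depth used here.

```python
import math

def proportion_ci(p, n, z=1.96):
    """Approximate 95% confidence half-width for a proportion p
    estimated from n sequences (normal approximation to the binomial)."""
    return z * math.sqrt(p * (1 - p) / n)

# e.g. ~50% Bacteroidetes estimated from only 67 sequences
print(round(proportion_ci(0.5, 67), 3))     # ~0.12, i.e. +/- 12 percentage points
# versus the same proportion estimated from ~4700 sequences per sample
print(round(proportion_ci(0.5, 4712), 3))   # ~0.014
```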
The microbiomes of Healthy 1 and 2 were closely related, and both contained a high proportion of Proteobacteria. Increases in Proteobacteria have been reported in humans with IBD [19] and recurrent C. difficile infection [20]; however, these horses were clinically normal. Interestingly, Healthy 1 and 2 were the only animals housed in the same barn (side by side stalls) with identical diet and management, and these data suggest that dietary and management factors may have a significant impact on the intestinal microbiome in healthy horses. Additionally, those two horses were teaching horses that resided in a veterinary teaching hospital, albeit in a separate ward from clinical patients. The high abundance of Proteobacteria in these two horses was largely due to a high number of Gammaproteobacteria, particularly Acinetobacter spp. Since Acinetobacter spp. can be hospital-associated pathogens (albeit rarely diagnosed in this institution), it is possible that the place of residence of these horses also had an impact on the microbiome composition. These results are also consistent with a recent study that demonstrated similar bacterial communities in the rumen of dairy cows housed together [21]. However, Durso et al. [22] suggested that the environment may not be the most important source of intestinal bacteria, as the bacteria present on the surface of feedlot pens were very different from those present in feces of beef cattle. These studies therefore raise questions about the role of the environment in establishment and maintenance of the gut microbiome. General management factors must be considered when designing and interpreting microbiome studies, but it is clear that further study of factors that influence an individual horse's gut microbiome is required. Alteration of the intestinal microbiome in colitis was not unexpected. However, these results indicate a rather profound alteration, given the numerous phylum-level differences in relative abundance. Significant changes at the phylum level have also been shown in people with chronic inflammatory conditions [23,24], obesity [25] and in dogs with diarrhea [26]. Differentiating cause and effect is not possible without a greater understanding of pathophysiology, but identification of organisms disproportionately present in horses with colitis could lead to investigation of their potential role as causative agents. Bacteroidetes was the dominant phylum among horses affected by colitis. This phylum has been reported to be the most abundant in healthy people [8] and a decrease in its relative abundance has been associated with obesity [25] and chronic diarrhea in humans [24]. In healthy horses, it only accounts for a minority of sequences, presumably because of its lesser role in hindgut fermentation compared to the dominant Firmicutes phylum, and the reason for the apparent proliferation of members of this phylum in horses with colitis is unclear. Fusobacteria were rare in healthy horses but significantly more abundant in horses with colitis. While cause versus effect cannot be discerned, this raises some interesting questions given increasing information about the role of Fusobacterium spp. in various gastrointestinal diseases of humans, including Crohn's disease [27], colorectal cancer [28,29] and appendicitis [30]. Ulcerative colitis caused by F. varium has been experimentally induced in mice [31]. Whether the higher percentages of Fusobacterium spp. found in the colitis group were a consequence of overgrowth due to bacterial dysbiosis, or whether this genus has a major role in the etiology of disease, remains uncertain, but requires investigation. While Fusobacterium spp. have been isolated from horses with pleuropneumonia [32], there do not appear to be any studies that have investigated Fusobacteria as equine enteropathogens.

Figure 4. PCoA (Fig. A and B) and NMDS (Fig. C and D) plots showing the representation of vectorial analysis of sequences found in feces of healthy horses (blue dots) and horses affected by colitis (red dots). Results were obtained using the Yue & Clayton measure (Fig. A and C) and the Jaccard index (Fig. B and D). doi:10.1371/journal.pone.0041484.g004

Traditionally, C. difficile, enterotoxigenic C. perfringens and Salmonella spp. have been incriminated as the most important etiological agents causing diarrhea in horses [5,6]. However, most cases of equine colitis remain without a clear etiologic diagnosis [5]. Indeed, up to 45% of foals referred to a hospital because of diarrhea had no infectious agents detected [33] and only 25% of horses with diarrhea had clostridial toxins detected in their feces [34]. Whilst metagenomic study is best suited for high-level assessment of overall microbiome composition, scanning of individual bacterial genera or species can provide some insight into potential causes of disease. Care must be taken when assigning identities at the species level because of the variable discriminatory power of 16S rRNA gene identification for some species. Regardless, the presence of novel potential pathogens in horses with colitis, but not in healthy horses (e.g. Shigella spp.), deserves attention, since detection of these organisms is not part of the routine diagnostic workup in horses affected by colitis. This study certainly does not implicate these species as etiologic agents but suggests that study of a potential role in disease is indicated. Culture and sequencing or organism-specific quantitative real time PCR could have been performed in order to confirm the presence of those pathogens, but discovery of novel pathogens was not the focus of this study. Microbiome studies such as this cannot incriminate novel pathogens but provide preliminary information about organisms that should be further studied as causative agents. Clostridium difficile was detected in feces of five of ten horses with colitis, while none had detectable C. difficile toxins in fecal samples. This bacterium can be found in healthy horses [35], so this could simply reflect colonization; however, it is also known that fecal ELISA assays are only moderately sensitive, so the relevance of this result is unclear. Our results are in agreement with the findings of Shepherd et al. [16], who reported Firmicutes to represent 43.7% of sequences obtained from feces of healthy horses. Daly et al. [7] also reported this phylum to be the most prevalent as assessed by cloning of the 16S rRNA gene. The higher abundance of Firmicutes and the genus Clostridium among healthy horses reported here is important, as clostridia have been traditionally associated with pathogenicity, despite the fact that only a few of the Clostridium spp. found here are known enteropathogens. Assessment of clostridia is further compounded by the archaic taxonomy, with Clostridium species spanning several families, including Clostridiaceae, Ruminococcaceae, Eubacteriaceae and Lachnospiraceae, with the potential that relevant differences are masked by weaknesses in current taxonomical assignments.
The vast clostridial abundance and diversity in healthy horses should also be considered in light of the common recommendation of metronidazole as a treatment for known or suspected clostridial colitis, as well as idiopathic colitis. Since clostridia may be a key core component of the equine intestinal microbiome, such a non-specific approach to treatment could be detrimental in some situations through further disruption of the already altered intestinal microbiome. The impact of metronidazole on the intestinal bacteria of horses deserves further investigation. The core microbiome of the equine species has not been objectively investigated. The studies cited above have either used culture-based methods or did not have enough depth for an adequate characterization of microorganisms at lower phylogenetic levels. Our finding that Clostridiales, members of the Lachnospiraceae family, were the most abundant bacteria shared between healthy horses may suggest that this group of organisms is part of the core bacterial population of healthy individuals and deserves further investigation. The use of probiotics has been suggested as a prophylactic and therapeutic adjuvant in cases of chronic diarrhea in humans [36]. To date, the development of probiotics for the equine species has not been successful [37][38][39]. There are many potential reasons for this, but it may relate in part to our previously poor understanding of the equine intestinal microbiome. Specifically, probiotic approaches in horses have focused on lactic acid bacteria, which comprise only a small component (#7.1%) of the microbiome of healthy horses and which were not decreased in disease. It is possible that probiotic therapy should target other, more common, components of the microbiome, particularly clostridia and other abundant members of the Firmicutes phylum. Bacterial species richness and diversity are thought to be important components of a 'healthy' intestinal microbiome. Decreases in richness and diversity have been associated with conditions such as chronic diarrhea and recurrent C. difficile infection (CDI) in humans [24,40]. Restoration of bacterial diversity and richness is the principle behind fecal microbiota transplantation, an approach that has received much attention recently for successful treatment of recurrent CDI [41,42]. Surprisingly, equine colitis was not associated with loss of diversity and richness, but further studies using more uniform groups of horses with specific etiologies are required. Microbiota transplantation might potentially be an effective treatment to restore this complex environment towards is considered more 'normal'. The majority of reads obtained in this study were correctly classified as bacteria, however, one horse (Colitis 5) had 17.6% of unassigned reads and another (Healthy 5) had 8.6% as unclassified sequences derived from bacteria. Some metagenomic studies have reported higher proportions of unclassified bacteria [7,16], however those have typically involved studies that did not report rigorous quality control and chimera screening efforts. Most recent studies have reported lower counts of unclassified bacteria, similar to what was obtained with the other 14 horses in this study. The reason for these two outliers is uncertain, as it would be surprising that after all filtering and cleaning procedures, so many chimeras would remain present in those files. A high abundance of truly unclassified bacteria is unlikely, but not impossible. 
In fact, when unknown sequences from Healthy 5 were compared to the NCBI BLAST nucleotide collection (nr/nt), they were consistent with Streptococcus infantarius (total score: 3253, E value: 6e-151, maximum identity: 100%, accession: CP003295) and with an uncultured bacterium previously found in horses (total score: 520, E value: 3e-144, maximum identity: 100%, accession: EU775872). When unknown sequences from Colitis 5 were compared to this databank, several hits were not bacterium-specific and other sequences were consistent with Pseudomonas spp., Serratia spp. and several uncultured bacteria.

The more uneven distribution of bacteria found among horses with colitis on the PCoA and NMDS may reflect the different etiologies affecting each horse. Therefore, further studies using more uniform groups with an established diagnosis (e.g. Clostridium difficile infection, salmonellosis) may reveal clearer patterns of change in the intestinal microbiota that could aid in the development of prophylactic and therapeutic procedures. Despite slightly different phylogenetic trees and PCoA and NMDS plots generated by the different statistical tests, all the results were very consistent and clustered the six healthy horses together. Those differences could be due to the differences between the Jaccard and the Yue & Clayton measures of dissimilarity, which use geometric and arithmetic means, respectively, attributing different weights to the relative abundance.

While culture-independent methods and next generation sequencing eliminate many of the biases inherent in culture or cloning-based techniques, there can be some PCR amplification bias, so certain groups (e.g. Bifidobacterium spp.) may be underestimated [43]. Therefore, evaluation of other target genes is indicated for further comprehensive study of this microbiome. Our results confirm the need for non-culture-dependent methods, since various organisms refractory to culture (e.g. Clostridium sordellii) and previously unidentified organisms were found here. In addition, low-abundance species may be missed with metagenomic studies compared to targeted enrichment culture-based approaches. Rarefaction curves for this study indicated good sampling completeness, with few new OTUs expected to be identified with assessment of further sequences; however, there will always be some under-estimation of overall diversity and species number, with sporadic detection of uncommon species.

The number of horses used in this study was small and, as only one sample per animal was analyzed, some of the differences between groups may be due to interhorse variation. However, the similarities among the microbiomes of healthy horses housed under the same management, and among horses with colitis, indicate that interhorse variation may not be great, at least at the phylum level. Therefore, our study provides a basis for further studies using a larger number of animals to address the impact of environment, different management systems and diets on the gut microbiome. Finally, considering the large anatomical size of the equine gastrointestinal tract and the differences in intestinal environments throughout the intestinal tract, it remains uncertain if fecal bacteria directly reflect the bacterial population present in the large colon. However, the differences between the groups at higher phylogenetic levels (phyla) found here suggest that, as in other animals, evaluation of the fecal microbiome is an appropriate way to gain a high-level view of intestinal microbiome diversity.
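For readers wishing to replicate this kind of spot check of unassigned reads, a minimal sketch using Biopython's web BLAST interface against the nucleotide collection (nt) is shown below; the input file name, the number of hits reported and the formatting are illustrative assumptions, not part of the original analysis.

```python
# Minimal sketch: BLAST unassigned 16S reads against the NCBI nt collection.
# The file "unassigned_reads.fasta" is a hypothetical input; thresholds are illustrative.
from Bio import SeqIO
from Bio.Blast import NCBIWWW, NCBIXML

for record in SeqIO.parse("unassigned_reads.fasta", "fasta"):
    # Submit one query at a time to the NCBI web BLAST service (slow but simple).
    handle = NCBIWWW.qblast("blastn", "nt", str(record.seq))
    result = NCBIXML.read(handle)
    for alignment in result.alignments[:3]:          # report the top three hits only
        hsp = alignment.hsps[0]
        identity = hsp.identities / hsp.align_length
        print(record.id, alignment.title[:60],
              f"E={hsp.expect:.2g}", f"identity={identity:.1%}")
```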
However, further studies comparing the bacterial population from different compartments of the equine intestinal tract are ultimately required.

Conclusions
The marked differences in the microbiome between healthy horses and horses with colitis indicate that colitis may be a disease of gut microbiome dysbiosis, rather than one that occurs simply through overgrowth of an individual pathogen. The predominance of clostridia and related organisms demonstrates the importance of this group of bacteria in healthy horses. The abundance of Fusobacteria in horses with colitis deserves special attention and further investigation, as the role of this phylum in equine colitis is currently unknown. The species richness reported here indicates the complexity of the equine intestinal microbiome, and this study provides the most comprehensive description of this important and complex microbiome to date.

Animal Selection
The study was approved by the University of Guelph Animal Care Committee. Six healthy horses (Healthy 1 to 6) were enrolled in this study. Healthy 1 and 2 were mature Thoroughbred teaching mares that resided at the Ontario Veterinary College, were housed beside each other and had a diet exclusively of grass-hay. Healthy 3 was a 4-year-old female mixed-breed pony fed grass-hay only, with free access to pasture. Healthy 4 was a 23-year-old retired Quarter Horse gelding fed grass-hay and a commercial complete pelleted feed (14% crude protein; 3.5 kg per day). Healthy 5 was a 7-year-old mixed-breed mare used for pleasure riding, fed grass-hay and 5 kg/day of a commercial high-fat/high-fiber feed. Healthy 6 was a 6-year-old Quarter Horse mare used for pleasure riding, fed grass-hay and 6 kg/day of a commercial complete pelleted feed (12% crude protein). The last four horses were housed on four different farms in Ontario. Samples were collected during November 2010. None of the horses had received antimicrobials or anti-inflammatory drugs, or had gastrointestinal-related disease, in the previous six months. One fecal sample from each horse was collected off the ground immediately after defecation. Approximately 10 g of feces were collected from fecal balls, avoiding collection of fecal material that was touching the ground. Fecal samples were kept frozen at −80°C until DNA extraction.

Fecal samples from 10 horses (Colitis 1 to 10) that presented to the Ontario Veterinary College for evaluation of acute colitis (1-3 days duration) were collected within the first 24 h of hospitalization. Inclusion criteria were five negative cultures for Salmonella spp., as well as single negative fecal ELISA results for Clostridium perfringens enterotoxin and C. difficile toxins A and B, as those are the most prevalent infectious agents isolated from cases of colitis in this area. Samples were collected between November 2009 and April 2011 and kept frozen at −80°C until DNA extraction. Three of the horses were Thoroughbred, three were Warmblood, two were ponies and two were mixed-breed. Mean age was 6.35 years (range: 0.5 to 18 years).

DNA Extraction, 16S rRNA Gene PCR and Sequencing
DNA was extracted from fecal samples with the use of a glass-bead-based extraction kit (E.Z.N.A. Stool DNA Kit, Omega Bio-Tek Inc., USA) using the manufacturer's "stool DNA protocol for pathogen detection". DNA quantification and quality were assessed by spectrophotometry using the NanoDrop (Roche, USA).
DNA concentrations were diluted to a final concentration of 20 ng/µL for PCR amplification of the V3-V5 region of the 16S rRNA gene using the primers 357f (CCTACGGGAGGCAGCAG) and 926r (CCGTCAATTCMTTTRAGT), as described by Wu et al. [9]. Forward primers were designed with the adaptor A sequence (CGTATCGCCTCCCTCGCGCCA) plus a key sequence (TCAG) and reverse primers with the adaptor B sequence (CTATGCGCCTTGCCAGCCCG) plus a key sequence (TCGA), as recommended by the 454 Sequencing Technical Bulletin No. 013-200. In addition, each sample had a different ten-base-pair sequence in the forward primer used as a Multiplex Identifier (MID). For a final volume of 50 µL, 2 µL of each DNA sample was added to a solution containing 16 µL of water, 25 µL of ReadyMix (Invitrogen, USA), 2 µL of BSA (Invitrogen, USA), 2 µL of each primer (1000 pmol/µL) and 1 µL of MgCl2 (50 mM). The mixture was subjected to the following PCR conditions: 5 min at 95°C for initial denaturation, followed by 28 cycles of 15 s at 95°C for denaturation, 45 s at 56°C for annealing and 60 s at 72°C for elongation, a final extension of 8 min at 72°C, and a hold at 4°C until processing (within 2 hours). A negative control, as well as a positive control (DNA of Clostridium difficile), was used. PCR products were evaluated by electrophoresis in 2% agarose gel and purified using the QIAquick PCR purification kit (Qiagen, Valencia, CA). Amplicons were then purified using the Agencourt AMPure XP (Beckman Coulter Inc, Mississauga, ON) and quantified with the Quant-iT PicoGreen dsDNA Assay kit (Invitrogen, Burlington, ON) following the "Amplicon Library Preparation Method Manual" of the 454 GS Junior Titanium System (454 Life Sciences, Roche, USA). Emulsion PCR was performed according to the "emPCR Amplification Method Manual - Lib L" and sequencing was performed using a 454 GS Junior Titanium System following the "Sequencing Method Manual". Data were made publicly available at the NCBI Sequence Read Archive under the accession number SRA052596.1 and at the MG-RAST project 435 (MGP435).

Sequence Analysis and Statistical Analysis
The MOTHUR package of algorithms [10] was used for pyrosequencing noise removal from the original flow files [11] and for chimera removal [12]. Sequences that were less than 200 bp in length or that contained homopolymers longer than 8 bp were removed, allowing for 1 mismatch to the barcode and 2 mismatches to the primer. Output files were uploaded to the MG-RAST server [13] using the SILVA Small Subunit rRNA Database (SSU) as reference. An e-value of 10E-30, a minimum alignment length of 75 bp and a minimum percentage identity of 97% were used as cut-off values for quality control, in addition to the initial trimming and filters. The total number of sequences classified as Bacteria was then used for relative abundance calculation. A 100% stacked column chart comparing the relative abundances of each phylum in the two groups was generated using Microsoft Excel. Intra-phylum variance was represented by a boxplot created using R software. Finally, MOTHUR was used to align sequences with the SILVA 16S rRNA reference database, with taxonomic classifications obtained from the Ribosomal Database Project (RDP), and sequences were assigned into OTUs using a cutoff of 0.15 for the distance matrix and into phylotypes by clustering all OTUs belonging to the same genus.
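As a rough illustration of two of the read-level quality criteria described above (minimum length of 200 bp and no homopolymers longer than 8 bp), the following Python sketch applies such a filter to a FASTA file. It is not the MOTHUR implementation used in the study, and the file names are hypothetical.

```python
# Illustrative re-implementation of two of the quality filters described above;
# the actual analysis used MOTHUR. File names are hypothetical.
import re
from Bio import SeqIO

MIN_LENGTH = 200      # discard reads shorter than 200 bp
MAX_HOMOPOLYMER = 8   # discard reads containing homopolymers longer than 8 bp

def longest_homopolymer(seq: str) -> int:
    # Each regex match is a maximal run of one repeated character.
    return max((len(m.group(0)) for m in re.finditer(r"(.)\1*", seq)), default=0)

kept = []
for rec in SeqIO.parse("raw_reads.fasta", "fasta"):
    s = str(rec.seq).upper()
    if len(s) >= MIN_LENGTH and longest_homopolymer(s) <= MAX_HOMOPOLYMER:
        kept.append(rec)

SeqIO.write(kept, "filtered_reads.fasta", "fasta")
print(f"retained {len(kept)} reads")
```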
To provide further assessment of species-level identification of selected organisms, sequences were loaded into the NCBI Basic Local Alignment Search Tool (BLAST) website using the nucleotide collection (nr/nt) database [14]. The MOTHUR software was used for diversity and richness estimation by generating collector's curves of the Chao1 richness estimator and the inverse Simpson diversity index, and rarefaction curves at 0.03 distances, which were plotted on a line chart using Microsoft Excel. A two-sample t-test with 95% confidence intervals was used to compare the Simpson indices between groups. The significance of the dissimilarity between the two groups was measured by the Yue & Clayton measure of dissimilarity, taking into account the relative abundance of OTUs present in each group, and by the traditional Jaccard index, taking into account the number of shared OTUs between the groups. The same tests were repeated using the sequences classified into phylotypes at the genus level. Dendrograms were created using MOTHUR to compare the similarity of the intestinal bacteria among all samples used in the study, using both the Jaccard index and the Yue & Clayton measure, the latter of which accounts for the relative abundances in each sample. Figures were generated by TreeView 1.6.6. The parsimony, unweighted UniFrac and weighted UniFrac tests were applied to determine the significance of clustering between the groups in both the OTU-based and the phylotype-based dendrograms. Clustering of individuals was also evaluated by plotting the resultant vectors of the Principal Coordinate Analysis (PCoA) and by non-metric multidimensional scaling (NMDS) with two dimensions. The R software was used to generate figures. Analysis of molecular variance (AMOVA) was used to test if the distance between the centers of the clouds of the two groups was greater than individual variation among samples. The correlation of the relative abundance of each OTU with the two axes in the NMDS dataset was calculated in order to determine which OTUs or phylotypes were responsible for shifting the samples along the two axes. Finally, the Metastats program [15], implemented in MOTHUR, was used to identify statistically different OTUs or phylotypes between groups. Comparison of bacteria between the groups at different phylogenetic levels was performed using an unpaired t-test after data had been normalized to values between 0 and 1 using MG-RAST.
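To make the diversity metrics named above concrete, the small sketch below computes the Chao1 estimator, the inverse Simpson index and the Jaccard dissimilarity from OTU count vectors. It follows the standard textbook formulas rather than MOTHUR's internal code, and the example counts are invented.

```python
# Standard formulas for the richness/diversity measures named above.
# The example OTU count vectors are invented for illustration.

def chao1(counts):
    """Chao1 = S_obs + n1*(n1-1) / (2*(n2+1)), with n1 singletons, n2 doubletons."""
    s_obs = sum(1 for c in counts if c > 0)
    n1 = sum(1 for c in counts if c == 1)
    n2 = sum(1 for c in counts if c == 2)
    return s_obs + n1 * (n1 - 1) / (2 * (n2 + 1))

def inverse_simpson(counts):
    """1 / sum(p_i^2), where p_i is the relative abundance of OTU i."""
    n = sum(counts)
    return 1.0 / sum((c / n) ** 2 for c in counts if c > 0)

def jaccard_dissimilarity(counts_a, counts_b):
    """1 - shared OTUs / total OTUs (membership only, ignores abundance)."""
    a = {i for i, c in enumerate(counts_a) if c > 0}
    b = {i for i, c in enumerate(counts_b) if c > 0}
    return 1 - len(a & b) / len(a | b)

healthy = [120, 3, 1, 1, 40, 2, 0, 7]
colitis = [10, 0, 0, 5, 60, 1, 90, 0]
print(chao1(healthy), inverse_simpson(healthy), jaccard_dissimilarity(healthy, colitis))
```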
2017-04-14T13:49:59.422Z
2012-07-31T00:00:00.000
{ "year": 2012, "sha1": "5883e53911b8ae2a1b933f8cabc28db878a89aaa", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0041484&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5883e53911b8ae2a1b933f8cabc28db878a89aaa", "s2fieldsofstudy": [ "Biology", "Medicine", "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
14639109
pes2o/s2orc
v3-fos-license
Changes in physical activity during the transition from primary to secondary school in Belgian children: what is the role of the school environment?

Background Key life periods have been associated with changes in physical activity (PA). This study investigated (1) how PA changes when primary school children transfer to secondary school, (2) if school environmental characteristics differ between primary and secondary schools and (3) if changes in school environmental characteristics can predict changes in PA in Belgian schoolchildren. Moderating effects of gender and the baseline level of PA were investigated for the first and third research question.

Methods In total, 736 children (10–13 years) in the last year of primary school participated in the first phase of this longitudinal study. Two years later, 502 of these children (68.2%) agreed to participate in the second phase. Accelerometers, pedometers and the Flemish Physical Activity Questionnaire were used to measure PA. School environmental characteristics were reported by the school principals. Cross-classified regression models were conducted to analyze the data.

Results Self-reported active transport to school and accelerometer weekday moderate to vigorous PA (MVPA) increased after the transition to secondary school, while self-reported extracurricular PA and total PA decreased. Pedometer weekday step counts decreased, but this decrease was only apparent among those who achieved the PA guidelines in primary school. Secondary schools scored higher than primary schools on the school environmental characteristics provision of sports and PA during lunch break, active schoolyards and playgrounds, and health education policy, but lower on sports and PA after school. Changes in the school environmental characteristics active commuting to school, active schoolyards and playgrounds, and health education policy resulted in changes in self-reported extracurricular PA, total PA, pedometer/accelerometer determined step counts and accelerometer determined MVPA. Moderating effects were found for baseline PA and gender.

Conclusion PA changed after the transition to secondary school. In general, secondary schools seem more likely to foster strategies to promote PA during school hours, while primary schools seem more likely to foster strategies to promote PA after school. Changes in school environmental characteristics may contribute to changes in PA. Thus, if confirmed in future studies, efforts are needed to implement these components in schools as early as possible to positively affect the change in PA.

Background
The promotion of physical activity (PA) among young people has become a public health priority because of the well-known health benefits [1,2]. The belief that the behavior of young people is easier to control, to develop or to change into more healthy behavior than the behavior of adults [3] underscores the importance of primary prevention in young people. Furthermore, persistent PA at a young age considerably increases the probability of being active in adulthood [4], which in turn can contribute to the quality of adult life. Despite these benefits, a large proportion of school-aged youth does not achieve the public health recommendation of 60 minutes of moderate-to-vigorous physical activity (MVPA) per day [5]. Moreover, PA levels tend to decline during adolescence [6,7]. Research has shown that important life events are key periods that coincide with changes in PA patterns [8].
For children, the transition from primary to secondary school is an important life event. Many European countries, including Belgium, have distinct primary and secondary schools. In the Belgian educational context, children spend the first 9 years in a primary school (3–11 year olds), after which they normally move to a secondary school (mostly located elsewhere) where they spend 6 years (12–18 year olds). Different longitudinal studies have shown that the transition from primary to secondary school is characterized by changes in PA patterns [8][9][10][11][12]. However, the directions of these changes are not straightforward. Two longitudinal studies have shown that PA levels gradually decreased during childhood but were characterized by an obvious drop during the transition from primary to secondary school [8,9]. Specifically, Niven and colleagues observed a decrease in total PA, PA during school day break and lunch time, and PA after school. In contrast, Cooper and colleagues described that objectively determined MVPA slightly increased during this transition [10]. Jago and colleagues [11] investigated after-school and weekend MVPA and concluded that objectively determined after-school MVPA tended to decline during the transition from primary to secondary school, while objectively determined weekend MVPA tended to increase during this transition. Furthermore, Cardon and colleagues found an increase in time spent bicycling to school during the transition from primary to secondary school [12]. Based on these results, the changes in PA during the transition from primary to secondary school seem to be dependent on the context of PA. Furthermore, according to several authors the changes seem to be more apparent in boys than in girls [9,11]. The results of these longitudinal studies clearly underscore the need for studies to further scrutinize the change in context-specific PA during the transition from primary to secondary school.

An important question to inform the development of PA interventions is why PA changes during this transition. In line with the perspectives of ecological models, behavior change occurs as a result of the dynamic interaction between the individual and the environment in which he/she lives, learns, works and plays [13]. Young people spend a large proportion of their time at school. Virtually all children and adolescents attend school-based education. In general, children and adolescents spend between 4 and 8 hours per day, for 12 years or more of their lives, at school. Schools therefore have the opportunity to offer an environment with the potential to facilitate and to promote PA [14] in a large number of children and adolescents, across different ethnic and socio-economic groups. To foster regular PA, schools worldwide are recommended to strive for a 'whole-school' approach that maximizes the opportunities for school-aged youth to be active [15,16]. The 'whole-school' approach that has been outlined in the Toronto Charter for Physical Activity [17] goes beyond providing physical education classes. Key components of a 'whole-school' approach are: providing resources and suitable physical environments to support structured as well as unstructured PA, supporting active travel to school, and provision of opportunities to be active during the school day, during class, during breaks at lunch time and after school (Toronto Charter).
In line with the 'whole-school' approach, a conceptual framework for PA programs within school-community partnerships was developed for the schools in Flanders (Belgium) to offer schools guidelines to develop extracurricular PA programs [18]. The framework consists of five complementary components: sports and PA during lunch break, active school yards or playgrounds, active commuting to school, health education policy, and sports and PA after school. For several years, Flemish primary and secondary schools have been sensitized and supported to adopt a school policy that promotes PA. However, no specific strategies were developed by the Flemish government: the Flemish schools are free to choose which promotion programs they implement. Consequently, the extent of PA promotion differs; some schools focus on one of the five framework components, while other schools focus on all five framework components [19]. The differences in approaches between schools may contribute to the change in PA observed when moving schools. To our knowledge, the only study investigating the contribution of school environmental factors to the change in PA when moving schools was a qualitative study [20]. Through the use of narrative interviews, Knowles and colleagues (2011) investigated factors (PA opportunities, environment, sense of self when active and individual issues) related to the decrease in PA during the transition between primary and secondary school in 14 British girls. A main conclusion of this study was that the change in school environment plays a central role in the changes in PA patterns that occur when moving schools [20]. Specifically, according to this study, a positive environment could be provided by ensuring a choice of activities in the physical education lessons, reducing the focus on competence and competition, and recognizing the importance of social support by the involvement of friends in PA [20].

Based on the shortcomings in the present literature, the aim of this study was threefold. A first study aim was to investigate how PA behavior changes when Belgian primary school children make the transition to secondary school. Changes in self-reported walking and cycling to and from school, extracurricular activity at school and total PA, as well as changes in pedometer and accelerometer determined step counts and MVPA, were examined. A second aim was to investigate if school environmental characteristics with regard to the five framework components (sports and PA during lunch break, active school yards or playgrounds, active commuting to school, health education policy, and sports and PA after school) differed between Belgian primary and secondary schools. The third and last aim of this study was to investigate if changes in school environmental characteristics can predict changes in PA in Belgian schoolchildren. Since in previous studies the changes in PA during the transition from primary to secondary school were found to be more apparent in boys than in girls [9,11], the moderating effect of gender was investigated with regard to the first and third research question. Furthermore, since change in PA over time has been shown to be related to the baseline level of PA [21], the moderating effect of the level of PA at baseline was also investigated with regard to the first and third research question.

Participants and procedure
A longitudinal study with two phases was conducted: baseline measurements took place during the school year 2009-2010 and follow-up measurements during the school year 2011-2012.
During the school year 2009-2010, 148 schools were randomly selected from all elementary schools in East- and West-Flanders. The principals of these schools were contacted by phone and informed about the study. Forty-four principals agreed to participate in the study (response rate at school level = 29.7%) and were visited during school hours. They were asked to give written consent and filled in a questionnaire on school environmental characteristics. Further, with the permission of the principal, the children of one class group of the final year (10–13 years old) and their parents were invited to participate in the study. In total, 976 children and parents received an informative letter about the study with an invitation to participate. The parents of 749 children consented to be involved in the study and agreed to let their child participate (response rate at individual level = 76.7%). Each class that participated in the study was visited by a research assistant. During this visit, the children involved in the study and present at the time of the visit (n = 736, 98.3%) were asked to complete a questionnaire. The questionnaires were completed under the supervision of the research assistant. Every child was also asked to wear an activity monitor for seven consecutive days. In total, 439 (59.6%) children received a pedometer measuring step counts (Yamax Digi-walker CW-701) and, because of the limited availability of accelerometers, a subsample of 297 (40.4%) children received an accelerometer measuring activity counts per minute and step counts (model GT1M, Actigraph MTI, Manufacturing Technology Inc., Pensacola, FL, USA). The activity monitor was accompanied by a non-wear time activity diary. The research assistant explained the protocol of the activity monitor and its non-wear time activity diary. At the end of the visit, every child was given a questionnaire to be completed by one of the parents at home. One week later, the research assistant revisited the schools to collect the parental questionnaires, the activity monitors and the non-wear time activity diaries. The parental questionnaires, activity monitors or diaries that were forgotten at that time were either redirected by post or collected by the research assistant during a third visit. After the first phase of the study, 736 (98.3%) child questionnaires were received, 686 (93.2%) children had complete pedometer or accelerometer step count data for weekdays and 273 (91.9%) children had complete accelerometer data (activity counts per minute) to determine weekday MVPA. Furthermore, 93.5% of the parents (n = 701) returned a complete parent questionnaire to school, and all 44 schools returned the completed questionnaire concerning the school environmental characteristics.

During the school year 2011-2012, follow-up measurements took place. Since the children had moved schools, they could not be contacted through the school. Therefore, children and parents (n = 736) who participated in the first phase of the study were contacted by phone to ask if they were willing to participate in the second phase. In total, 502 (68.2%) children and their parents agreed to participate in the second phase of the study. Eighty-seven children (11.8%) declined to participate and 147 (20.0%) were not reached after three attempts on different days and times of the day.
The children who consented to participate, who were willing to wear an activity monitor (n = 427) and who had worn a pedometer during the first phase (n = 249), received an envelope via regular mail containing a pedometer, a non-wear time activity diary, a child questionnaire, a parent questionnaire and a manual with guidelines concerning the pedometer, the non-wear time activity diary and the questionnaires. The envelope also contained a pre-stamped envelope to send everything back via regular mail. The children who consented to participate, who were willing to wear an activity monitor (n = 427) and who had worn an accelerometer during the first phase (n = 178), were visited at home. During the home visit, the accelerometer, the non-wear time activity diary, the child questionnaire, the parental questionnaire and the manual were delivered. One week later, during a second home visit, everything was collected. The principals of the 124 secondary schools which the children entered after primary school were contacted by phone and asked to fill in a questionnaire on school environmental characteristics. In total, 107 (86.3%) schools were willing to participate and 17 (13.7%) declined. The questionnaires were sent to the schools via regular mail; the envelope contained a pre-stamped envelope to send the questionnaire back. The study protocol received approval from the Ethics Committee of Ghent University Hospital. At the end of the second phase of the study, 420 (83.6%) child questionnaires were returned, 369 (86.4%) children had complete pedometer or accelerometer step count data for weekdays and 140 (78.7%) children had complete accelerometer data for weekdays. Furthermore, we received 416 (82.8%) parent questionnaires and 100 (93.5%) schools returned the school questionnaire.

Measures
The outcome and exposure measures that were of interest for the purposes of this study are described in the following sections. The way in which these measures were obtained was identical at baseline and follow-up.

Demographic factors
In the child questionnaire, the child's age and gender were assessed. The parent questionnaire contained questions about the respondent's own and their partner's level of education. The educational attainment of the children's parents was used as a proxy measure of children's socio-economic status. The educational level of the child's mother and father was determined based on four options: less than high school, completed high school, completed college or completed university. The educational levels of mother and father were coded into 'reached a college or a university education level' or 'did not reach a college or a university education level'.

School environmental characteristics
A school questionnaire was developed and used to assess the implementation of the five framework components concerning extracurricular PA promotion in schools: active commuting to school, sports and PA after school, sports and PA during lunch break, active school yards or playgrounds, and health education policy. An outline of the content and response options of the 24 items is given in Table 1. Active commuting to school was questioned by seven items on four themes: promotion of active school commuting, facilities for active school commuting, safety from traffic and safety from theft.
Sports and PA after school was questioned by seven items on six themes: PA after school hours, use of facilities, use of equipment, promotion of activities after school, information about activities after school and cooperation with local partners. Sports and PA during lunch break was questioned by five items on two themes: PA during lunch break and promotion of PA during lunch break. Active school yards or playgrounds was questioned by two items on two themes: availability of facilities and availability of equipment. Health education policy was questioned by three items on three themes: subjective norm, involvement of students and training of teachers. The questions were mainly based on previous research investigating the PA promotion framework in Flemish elementary and secondary school children [19] and on the questionnaire used in the New South Wales (NSW) Schools Physical Activity and Nutrition Survey (SPANS) [22].

Table 1. Content and response options of the different items included in the school environmental questionnaire (framework components and associated themes, content of items, response format).
Active commuting to school
- Promotion of active school commuting: "Active school commuting is promoted by your school." (5-point scale a)
- Facilities for active school commuting: "There are sufficient bicycle racks at school for the students."
Sports and physical activity after school
- Physical activity after school hours: "Does the school organize sports and physical activity after school (before or after school hours or on Wednesday afternoon)?" (binary b)
- Use of facilities: "Are the students allowed to use school facilities (e.g. sports hall, polyvalent spaces, covered play areas, fields of grass, outdoor sports fields) after school hours?" (binary b)
- Use of equipment: "Are the students allowed to use school sports equipment (e.g. small sports and play material, loan desk for material, music installation, lockers, lines, goals and nets) after school hours?" (binary b)
- Promotion of after-school activities: "The school promotes sports and physical activity after school." (5-point scale a)
- Information about activities after school: "The school provides information about the sports and physical activity possibilities in the village/city/town." (5-point scale a)
- Cooperation with local partners: "Organizations, sports clubs and other local initiatives can use the school facilities after school hours." (5-point scale a); "The school cooperates with community partners, e.g. there is cooperation with local sports clubs."
Sports and physical activity during lunch break
- Physical activity during lunch break: "Does the school organize sports and physical activity during lunch break?" (binary b)
- Promotion of physical activity during lunch break: "The school stimulates the students to use the school sports facilities and equipment during the school hours." (5-point scale a); "The school stimulates the teachers to participate in the sports and physical activities during lunch break and recess."

Physical activity
Self-reported physical activity
To determine the duration (hours and minutes per day) of school-related active transportation (walking and cycling to and from school), extracurricular activity at school (participation in physical activities during playtime, lunch break, after school hours or at class or school tournaments) and total PA (including school-related active transportation, leisure time active transportation, physical education, extracurricular PA at school and sports during leisure time), the Flemish Physical Activity Questionnaire (FPAQ) was used [23]. The paper-and-pencil version of the FPAQ was found to be a reliable and reasonably valid questionnaire for the assessment of different dimensions of PA in children, especially when completed with (parental) assistance (test-retest reliability coefficients: ICC = 0.74 to 0.93, with the exception of ICC = 0.26 for leisure time active transportation; criterion validity: r = 0.27-0.44) [24], and in adolescents (test-retest reliability coefficients: ICC = 0.68-0.84; criterion validity: r = 0.43-0.48, except for extracurricular activity at school: r = −0.16 and school-related active transportation: r = −0.19) [23].

Pedometer and accelerometer assessed physical activity
Weekday step counts and weekday moderate to vigorous physical activity
To measure weekday step counts, the Yamax Digi-walker SW-200 (Yamax Corporation, Tokyo, Japan) and the Actigraph accelerometer, model GT1M (Actigraph MTI, Manufacturing Technology Inc., Pensacola, FL, USA), were used. The Yamax Digi-walker has been acknowledged as a valid, accurate and reliable pedometer to measure free-living step counts in children [25]. The GT1M accelerometer has demonstrated good reliability for measuring steps [26]. Evidence exists that neither accelerometers nor pedometers are affected by reactivity among adolescents [27,28]. Although the step counts measured by the Yamax Digi-walker CW-701 (the update of the Yamax Digi-walker SW-200) have been shown to be highly correlated with the step counts of the GT1M accelerometer (r = 0.78), the overall agreement between the step counts of both monitors is rather low [29]. In the study of Kinnunen et al. (2011), the 95% limits of agreement ranged from −2690 to 2656 steps/day around the mean value (mean of accelerometer and pedometer steps/day = 6026). Further, the limits of agreement varied substantially over the range of values. At the lowest recorded step count (mean of accelerometer and pedometer steps/day = 906) the accelerometer on average recorded more steps/day than the pedometer. In contrast, at the highest step count value (mean of accelerometer and pedometer steps/day = 12,018) the accelerometer on average recorded fewer steps/day than the pedometer [29]. To overcome this problem, all analyses were controlled for the type of monitor used. In the subsample that was asked to wear an Actigraph accelerometer, model GT1M (Actigraph MTI, Manufacturing Technology Inc., Pensacola, FL, USA), instead of a pedometer (first phase n = 297, second phase n = 178), the monitor was also used to measure weekday MVPA. Actigraph accelerometers have shown good reproducibility, validity and feasibility in adolescents [30,31].

Protocol data reduction pedometer and accelerometer
The children were asked to wear the activity monitor (pedometer or accelerometer) for seven consecutive days, including two weekend days. They were asked to wear the activity monitor during waking hours but to remove it for aquatic activities and for activities that prohibit activity monitors. Together with their activity monitor, all children received a diary. These diaries were provided to register activities for which the activity monitor was removed. Adolescents recorded in the diaries when the activity monitor was removed, when they put it back on, and the kind of activities they were involved in (e.g. swimming, football, gymnastics).
Children who wore a pedometer were also asked to record the date and the steps taken at the end of the day in the diary. For every minute of reported MVPA registered in the diary for which the pedometer was removed, 150 steps were added to the daily number of reported step counts [33]. The accelerometer data were downloaded to a computer (Actilife software version 4.1.0). Data-reduction software, MeterPlus [34], was used to screen, clean and score the accelerometer data (step counts and activity counts/min). The data-reduction process for the step counts obtained by accelerometers was comparable to that for the step count data obtained by pedometers: for every minute of reported MVPA in the diary for which the accelerometer was removed, 150 steps were added to the daily number of registered step counts [33]. In the data-reduction process of the activity counts, time periods of at least one hour of consecutive zeros were removed, assuming the accelerometer was not worn [35,36]. Whenever applicable, the activity diaries were used to replace these runs of consecutive zeros with the corresponding number of minutes of moderate PA (MPA) and vigorous PA (VPA) registered in the diaries [37]. To score the accelerometer data and to obtain the mean number of minutes/day of MVPA, the cut-point of Puyau (MVPA: ≥3200 activity counts/min) [38] was used. The review of Reilly and colleagues (2008) concluded that current evidence suggests a cut-point within the range of 3000-3600 counts/min to determine MVPA when using the Actigraph with 1-minute epochs [39]; the Puyau definition of MVPA as activity counts above 3200 counts/min lies within this range [38]. For inclusion in the data analysis, the required total accumulated registered time from the accelerometer and diaries was 600 minutes/day. For both activity monitors, pedometers and accelerometers, the last step in the data-reduction process was the determination of a weekday average. A minimum of three valid weekdays of monitoring was needed to obtain reliable estimates [40,41]. Participants failing to meet these inclusion criteria were excluded from the analyses. In total, 24 children (8%) were excluded from the analyses in the first phase of the study, whereas 38 children (21%) were excluded from the analyses in the second phase of the study.

Data analyses
Descriptive statistical analyses were conducted using SPSS 20.0. The outcome and exposure measures obtained for the purposes of this study originate from three levels: the individual level, the primary school level and the secondary school level. These three levels do not fit in a model that assumes a hierarchical or nested structure: children from a certain primary school may attend several different secondary schools. This is a typical cross-classified structure [42]. Maximum likelihood procedures (for example the IGLS algorithm) are designed to work well for nested structures but are not able to take a cross-classified structure into account. Therefore, the Markov Chain Monte Carlo method (MCMC) was used to fit the cross-classified multilevel models [42]. This method treats each set of classification units as an additive term in the model [43].
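As a rough illustration of the per-day reduction rules described above (removal of non-wear periods of at least 60 consecutive zero-count minutes, adding 150 steps per diary-reported MVPA minute, the ≥3200 counts/min MVPA cut-point and the 600 minutes/day wear-time requirement), the following Python sketch processes one day of minute-by-minute data. The input format and variable names are assumptions; the study used Actilife and MeterPlus rather than custom code.

```python
# Illustrative sketch of the per-day accelerometer reduction rules described above.
# Input format and names are assumed; the study used Actilife/MeterPlus instead.
import numpy as np

MVPA_CUTPOINT = 3200            # activity counts/min (Puyau cut-point)
NONWEAR_RUN = 60                # >= 60 consecutive zero minutes -> non-wear time
MIN_WEAR_MINUTES = 600          # a valid day requires 600 registered minutes
STEPS_PER_DIARY_MVPA_MIN = 150  # steps added per diary-reported MVPA minute

def reduce_day(counts_per_min, steps_per_min, diary_mvpa_minutes):
    counts = np.asarray(counts_per_min, dtype=float)
    wear = np.ones(counts.size, dtype=bool)

    # Flag runs of >= 60 consecutive zeros as non-wear time.
    run_start = None
    for i, c in enumerate(np.append(counts, 1)):      # sentinel closes a trailing run
        if c == 0 and run_start is None:
            run_start = i
        elif c != 0 and run_start is not None:
            if i - run_start >= NONWEAR_RUN:
                wear[run_start:i] = False
            run_start = None

    wear_minutes = int(wear.sum()) + diary_mvpa_minutes
    mvpa_minutes = int((counts[wear] >= MVPA_CUTPOINT).sum()) + diary_mvpa_minutes
    steps = int(np.asarray(steps_per_min)[wear].sum()) \
        + diary_mvpa_minutes * STEPS_PER_DIARY_MVPA_MIN

    valid_day = wear_minutes >= MIN_WEAR_MINUTES
    return valid_day, mvpa_minutes, steps
```

A weekday average would then be computed only for children with at least three valid weekdays, in line with the inclusion criterion above.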
To investigate longitudinal changes (primary to secondary school) in PA (self-reported PA: active transport to and from school, extracurricular PA at school and total PA; pedometer/accelerometer determined weekday step counts and accelerometer determined MVPA) and differences in the implementation scores of the five framework components concerning extracurricular PA promotion in schools (active commuting to school, sports and PA after school, sports and PA during lunch break, active school yards or playgrounds, and health education policy), four-level (time point, individual, primary school, secondary school) cross-classified multilevel regression models were conducted using MLwiN version 2.22. Longitudinal changes were investigated by regressing the dependent PA variables and the implementation scores of the five framework components onto the time point variable. To investigate if longitudinal changes in PA are moderated by gender or by the level of PA in primary school, interaction effects with gender and the level of PA in primary school (expressed as 'achieved the guidelines or not') were examined by entering the cross-product terms "time point × gender" and "time point × baseline PA" in the regression models. Girls achieved the guidelines when their pedometer/accelerometer determined mean number of steps on weekdays was higher than 11,000; boys achieved the guidelines when their mean number of steps on weekdays was higher than 13,000 [44]. To investigate if changes in the implementation scores of the five framework components concerning extracurricular PA promotion in schools are predictors of changes in PA, cross-classified multilevel regression models were conducted in which the changes in PA were regressed onto the changes in the implementation scores of the five framework components. To investigate if the association between the five framework components and PA is moderated by gender or by the level of PA in primary school, interaction effects were examined by entering the cross-product terms "change in the framework component × gender" and "change in framework component × baseline PA" in the regression models. Measures of change between the two time points in the implementation scores of the five framework components and in PA were calculated by subtracting the measures at time point 2 from the measures at time point 1. The changes in the implementation scores of the five framework components were recoded into a dummy variable: increased or decreased. All analyses were controlled for two proxy measures of individual SES (educational attainment of mother and father). Furthermore, the analyses were controlled for the type of monitor used by entering a variable "type of monitor (accelerometer/pedometer)" in the regression models. P-values below 0.05 were considered significant.

Sample characteristics
In total, data were available for 420 children for both the first and second phase of the study. The sample consisted of 208 girls (49.5%) and 212 boys (50.5%). Mean age at baseline was 11.1 ± 0.5 years and at follow-up 13.4 ± 0.6 years. Approximately half of the children's parents were highly educated, as 56.3% of the mothers and 47.6% of the fathers attained college or university. Table 2 summarizes the results of the cross-classified multilevel regression models conducted to investigate longitudinal changes (transition from primary school (T1) to secondary school (T2)) in PA.
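The study fitted cross-classified multilevel models in MLwiN with MCMC estimation; those models cannot be reproduced in a few lines. The sketch below only illustrates, with a simple linear model in statsmodels, how the change scores, the sex-specific guideline dummy and a cross-product (interaction) term described above could be constructed. The data file and column names are hypothetical, and school clustering is deliberately ignored here.

```python
# Sketch of change-score and interaction-term construction (not the MLwiN/MCMC
# cross-classified models used in the study). File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pa_long.csv")   # one row per child: T1/T2 steps, gender, school scores

# Sex-specific step-count guideline at baseline (girls > 11,000; boys > 13,000 steps/weekday).
threshold = df["gender"].map({"girl": 11000, "boy": 13000})
df["achieved_guideline"] = (df["steps_t1"] > threshold).astype(int)

# Change scores between the two time points (sign convention chosen for readability)
# and a dummy for whether the playground implementation score increased.
df["delta_steps"] = df["steps_t2"] - df["steps_t1"]
df["delta_playgrounds"] = (df["playground_score_t2"] > df["playground_score_t1"]).astype(int)

# Linear model with a cross-product (interaction) term, ignoring school clustering.
model = smf.ols("delta_steps ~ delta_playgrounds * achieved_guideline + C(gender)",
                data=df).fit()
print(model.summary())
```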
The regression models revealed a significant increase in self-reported active transport to and from school (p < 0.001), a decrease in extracurricular PA (p < 0.001) and a decrease in total PA (p < 0.01) from primary to secondary school. The models also indicated that accelerometer determined minutes of weekday MVPA increased, whereas the accelerometer/pedometer determined weekday step counts showed no significant change from primary to secondary school.

Changes in physical activity
Entering the cross-product term "time point × baseline PA" in the regression models revealed that the change in pedometer/accelerometer determined weekday step counts from primary to secondary school is dependent on the level of PA in primary school (Table 3). For children not achieving the PA guidelines in primary school, the mean number of steps on weekdays was low in primary and secondary school and almost no change from primary to secondary school could be observed, while for children achieving the PA guideline at baseline, the mean number of steps in primary school was high but showed a steep decrease from primary to secondary school. For the accelerometer determined MVPA, the self-reported active transport to and from school, extracurricular PA and total PA, no significant "time point × baseline PA" interaction was found (Table 3). Entering the cross-product term "time point × gender" in the regression models revealed that the changes in accelerometer determined MVPA and step counts on weekdays, self-reported active transport to and from school, extracurricular PA and total PA are not dependent on gender (Table 3).

Differences in the implementation scores of the five framework components
Table 4 summarizes the results of the cross-classified multilevel regression models conducted to investigate differences in the implementation scores of the five framework components concerning extracurricular PA promotion between primary and secondary schools. These models revealed a significantly higher implementation score for after-school sports and PA (p < 0.001) in primary schools than in secondary schools (primary school score: 3.39 (0.66); secondary school score: 3.28 (0.71)).

Associations between changes in the implementation scores of the five active school strategies and changes in physical activity
Table 5 summarizes the results of the cross-classified multilevel regression models conducted to investigate if changes in the five framework components concerning extracurricular PA promotion can predict changes in PA. These models revealed that change in the component active schoolyards and playgrounds is a significant predictor (p < 0.01) of change in the pedometer/accelerometer determined weekday step counts: an increase in the score of active schoolyards and playgrounds is associated with an increase in the weekday step counts. Furthermore, a positive change in health education policy significantly predicts (p < 0.05) an increase in extracurricular PA. Entering the cross-product term "change in the framework component × gender" in the regression models revealed that gender is a significant moderator (p < 0.05) of the association of change in active schoolyards and playgrounds with change in total PA (Table 5). Figure 1 shows that among boys, total PA showed a larger decrease when the score on active schoolyards and playgrounds decreased from primary to secondary school.
In contrast, among girls, total PA showed a smaller decrease when the score on active schoolyards and playgrounds decreased from primary to secondary school. No other cross-product terms were found to be significant (Table 5). Entering the cross-product term "change in the framework component × level of PA" in the regression models revealed that the level of PA in primary school (expressed as 'achieved the guidelines or not') is a significant moderator (p < 0.05) of the association between change in the score of active commuting to school and change in total PA (Table 5). Among children who did not achieve the PA guideline in primary school, total PA showed a larger decrease when the score of active commuting to school decreased from primary to secondary school. In contrast, among children who achieved the PA guideline in primary school, total PA showed a smaller decrease when the score of active commuting to school decreased from primary to secondary school (Figure 2). A comparable moderating effect of the PA level in primary school was found for the association between change in active commuting to school and change in accelerometer determined MVPA (Table 5). Among children who did not achieve the PA guideline in primary school, accelerometer determined MVPA showed a decrease when the score of active commuting to school decreased from primary to secondary school, while accelerometer determined MVPA showed an increase when the score of active commuting to school increased. On the other hand, among children who achieved the PA guideline in primary school, accelerometer determined MVPA showed a smaller decrease when the score of active commuting to school decreased from primary to secondary school than when the score of active commuting to school increased (Figure 3).

Table 4. Longitudinal differences in the implementation scores of the five framework components concerning extracurricular physical activity promotion in schools.
Figure 1. Interaction "change in score of active schoolyards and playgrounds × gender" for total PA.
Figure 2. Interaction "change in the score of active commuting to school × baseline physical activity (0 = did not achieve the guidelines, 1 = achieved the guidelines)" for total physical activity.
Figure 3. Interaction "change in the score of active commuting to school × baseline physical activity (0 = did not achieve the guidelines, 1 = achieved the guidelines)" for accelerometer determined MVPA.

Discussion
The results of this study confirm that the transition from primary to secondary school is characterized by a change in PA. In line with the literature [8,11,12], the changes in PA seem to be dependent on the context of PA. Comparable to the results of the study of Cardon and colleagues [12], the data presented here indicated an increase in the self-reported number of minutes of active transport to and from school after the transition from primary to secondary school. Furthermore, a decline was found in self-reported extracurricular PA and total PA. This is in line with the results of the study of Niven and colleagues [8]. Accelerometer determined weekday MVPA increased, while for pedometer/accelerometer determined weekday step counts a large decline was observed among those who achieved the PA guidelines in primary school. Among those who did not achieve the guidelines in primary school, weekday step counts remained nearly stable. This result underscores the fact that the baseline level of PA predicts the change in weekday step counts after the transition from primary to secondary school. To our knowledge, this was the first study to investigate changes in weekday PA determined by accelerometers or pedometers. More research is clearly needed to investigate this in more detail.
A somewhat surprising result was that no differences between boys and girls were noted in the changes in PA. In two previous longitudinal studies the changes in PA were more apparent in boys than in girls [9,11]. Although no gender differences were found in the change in PA in the present study, the results of previous studies underscore that future research should still be encouraged to investigate differences between boys and girls.

The implementation scores of sports and PA during lunch break, active schoolyards and playgrounds, and health education policy were higher (increase > 0.50) in secondary schools than in primary schools. Although the change in the score of sports and PA after school was small (decrease < 0.50), the score was significantly lower in secondary compared to primary schools, and the score of active commuting to school showed no difference. In general, secondary schools seem more likely to foster strategies to promote PA during school hours, whereas primary schools seem more likely to foster strategies to promote PA after school. First, secondary schools are generally larger and therefore have more available space and facilities than primary schools. This may explain the higher implementation score for active schoolyards and playgrounds. Second, the availability of space and facilities may in turn be conducive to organizing sports and physical activity during lunch break, which may explain the higher implementation score on this component. Third, larger schools typically have more teachers. Consequently, teachers following refresher courses on sports and PA during school hours can be replaced more easily, which ultimately may encourage attendance at such courses. Following this reasoning, secondary schools may have a higher score on the item concerning training for teachers, which contributes to the implementation score of health education policy. Further, a higher implementation score on the component sports and PA during lunch break and the component active schoolyards or playgrounds may affect the subjective norm with regard to the importance of sport and PA for the school, which is one of the items that determined the component health education policy. Finally, the implementation score of health education policy is also determined by how pupils are involved in the decision making about sports and physical activity. The cognitive abilities of secondary school children may be a plausible explanation for a higher involvement in the decision-making process.

A possible reason for the higher score of sports and PA after school in primary schools may be that in primary schools after-school PA programs are seen as a solution for children who are not allowed to go home after school without supervision. Sports and PA programs after school are then used as a form of after-school care. An earlier study investigating the implementation of the different components of the PA promotion framework in Flemish primary and secondary schools did not find a difference in the implementation score of the framework to promote extracurricular PA [19].
However, since the implementation score in the study of Cardon and colleagues is an overall score combining the implementation scores of the different components of the PA promotion framework, whereas in the present study separate scores were calculated for the five components, comparison of the study results is difficult.

To our knowledge, this was the first longitudinal study to investigate whether changes in the school environment can predict changes in self-reported and pedometer/accelerometer determined PA in different contexts. It was promising to find that changes in three components of a framework that was developed to promote extracurricular PA programs were associated with changes in PA. An increase in the implementation score of health education policy was found to predict an increase in self-reported extracurricular PA, independent of gender or the level of PA at baseline. This result underscores the need to strive further for a sport- and PA-minded school atmosphere. Efforts to change the overall school spirit, the involvement of students in the decision-making process about sports and PA, and the support teachers receive to have training on sports and PA may be promising.

A positive association was found between the implementation score of active schoolyards and playgrounds and pedometer/accelerometer determined weekday step counts. Furthermore, among boys, total PA decreased less when the implementation score of active schoolyards and playgrounds increased from primary to secondary school, while among girls the decrease in total PA was larger when the implementation score of active schoolyards and playgrounds increased from primary to secondary school. Based on this result, the availability of facilities and equipment seems to be important for boys' total PA but not for girls'. This finding is in accordance with the conclusions of several observational studies reporting that PA facilities in school settings are predominantly used by boys, while girls claim the activity facilities less often [45,46]. A possible explanation may be that the facilities available are more suited to activities preferred by boys than by girls. Furthermore, for girls, recess and lunch break are considered to be an opportunity to socialize with friends [47].

Active commuting to school was the third framework component that was of importance. The association found between the change in the implementation score of active commuting to school and both self-reported total PA and accelerometer determined MVPA was dependent on the level of PA at baseline. For both total PA and MVPA, an increase in the implementation score of active commuting to school was found to have a more positive effect on children who did not achieve the guidelines in primary school compared with those who did. Children who were not sufficiently active in primary school seem more responsive to the promotion of active commuting to school, the availability of facilities for active commuting, and safety issues related to active commuting than children who did achieve the guidelines in primary school. It is possible that children who were not sufficiently active in primary school are not interested in sports activities but are more receptive to physical activities that are not sports activities.
Surprisingly, this component did not directly predict the change in active commuting. A possible explanation for these findings is that schools that promote active commuting to school, provide facilities for active commuting and are situated in a safe neighborhood, and consequently score higher on the implementation score of active commuting, also score higher on other factors. For example, it is possible that schools with higher scores on the implementation score of active commuting also score higher on the implementation score of active schoolyards and playgrounds. This may have influenced the results. Two components were not predictors of changes in PA: sports and PA after school and sports during lunch break. This finding is somewhat surprising. Earlier studies showed that factors like access to activity facilities [48], access to equipment to be physically active [49-52], provision of after-school PA [53] and recess PA programs [54] have been associated with participation in extracurricular PA. These factors were used in this study to determine the implementation scores of sports and PA after school and sports during lunch break. However, limited data are available on adolescents. It is possible that these components are of importance and can contribute to PA among primary school children but not among older, secondary school children. Consequently, there is a need for additional research to investigate this in more detail. Limitations and strengths The results of this study need to be interpreted in light of some limitations. First, this study was conducted in a Belgian sample and focuses on a conceptual framework for PA programs that has been developed for schools in Flanders (Belgium). Although some lessons could be learned from the results and conclusions of this study, the findings are not fully generalisable to other countries or continents. Second, step counts were determined using the Yamax Digi-Walker CW-701 and the GT1M accelerometer. Although the step counts measured by the Yamax Digi-Walker CW-701 have been shown to be highly correlated with the step counts of the GT1M accelerometer, the overall agreement between the step counts of both monitors is rather low [29]. To overcome this problem, all analyses were controlled for the type of monitor used. Third, the choice was made to determine the level of physical activity in primary school using data that were available for every participant. Because of the limited availability of accelerometers, accelerometers were only used in a subsample; step-count data were available for every participant and were therefore used to determine the level of PA in primary school. Fourth, school SES may have had an impact on the resources of the school and the attention of the school to PA-focused initiatives. Unfortunately, no information was available concerning school SES. Consequently, it was not possible to control the analyses for school SES. A first strength of the present study is its longitudinal design. Second, clustering in primary and secondary schools was taken into account by using cross-classified multilevel analyses.
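For readers who want to see what such a cross-classified analysis could look like in practice, the sketch below fits a mixed model with crossed random intercepts for primary and secondary school and the "change in active commuting score × baseline PA" interaction shown in Figure 3. All variable names and the synthetic data are invented for illustration; this is not the authors' analysis code.

```python
# Hypothetical sketch of a cross-classified multilevel model; column names and data are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "delta_mvpa": rng.normal(-5, 10, n),                 # change in MVPA (min/day)
    "change_commuting_score": rng.normal(0, 1, n),       # change in implementation score
    "baseline_meets_guidelines": rng.integers(0, 2, n),  # 0 = did not achieve, 1 = achieved
    "gender": rng.integers(0, 2, n),                     # 0 = girl, 1 = boy
    "primary_school_id": rng.integers(0, 40, n),         # cross-classified clustering
    "secondary_school_id": rng.integers(0, 25, n),
})

# Crossed random intercepts are expressed as variance components on one overall group.
model = smf.mixedlm(
    "delta_mvpa ~ change_commuting_score * baseline_meets_guidelines + gender",
    data=df,
    groups=np.ones(n),
    vc_formula={"primary": "0 + C(primary_school_id)",
                "secondary": "0 + C(secondary_school_id)"},
)
print(model.fit().summary())
```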
Conclusions This study provides more insight into the changes in PA during the transition from primary to secondary school, the differences in school environmental characteristics with regard to the five components of the Flemish framework to promote extracurricular PA, and the effect of changes in these school environmental characteristics on changes in self-reported and pedometer/accelerometer-determined PA. Based on the results of this study it was possible to conclude that the changes found in PA were dependent on the context of PA. Furthermore, secondary schools seem more likely to foster in-school strategies (sports and PA during lunch break, active schoolyards and playgrounds and health education policy) than primary schools, while primary schools are more likely to foster sports and PA after school than secondary schools. Finally, the implementation of three components of the framework developed to promote extracurricular PA was found to be important in predicting changes in PA during the transition from primary to secondary school: active commuting to school, active schoolyards and playgrounds and health education policy. As these components can contribute to higher levels of PA, efforts are needed to extend the implementation of these components in all primary and secondary schools. Moreover, components need to be implemented in school settings as early as possible to influence the changes in PA during the transition from primary to secondary school in a positive manner. However, the contribution of other social and physical environmental factors to the changes in PA needs to be further explored.
Evaluation of the topical antiperspirant effects of a simple herbal formula Background: Individuals with hyperhidrosis have much higher than average sweat rates. Topical application of an herbal preparation may be effective in sweat control. Objective: The aim of this study was to evaluate the sweat control efficacy of an herbal preparation applied directly on the sole in healthy individuals using the electronic device SUDOSCAN. Methods: Twenty healthy volunteers were screened and thirteen volunteers were eligible for the study. Results: 84.6% of volunteers experienced a sweat reduction after 15 minutes of foot immersion in the herbal bath. The reduction reached an average of 15.3%. Conclusion: The herbal formula reduces sweating when applied topically. Introduction Although sweating is a normal physiological process, excessive sweating (hyperhidrosis) imposes a significant negative impact on the quality of life of individuals through the embarrassing experience, which also reduces self-confidence and social acceptance. This negative impact is further exacerbated by the production of unpleasant odour secondary to bacterial growth at the site. Many individuals use antiperspirant/deodorant products to control sweating and odour. Some chemical antiperspirants are effective; however, side effects like allergy and irritation are also evident [1]. The adverse effects caused by chemical antiperspirants could be avoided by using natural plant extracts. In ancient records, the plants used for anti-perspiration by the Chinese people include mulberry leaves (in "Danxi's Mastery of Medicine" and "Yi Shuo"), Huang Qi and Qin Pi [2]. These herbs are classified as having cool and cold properties and are thus thought to regulate sweating. Moreover, these herbs also possess strong antibacterial properties [3,4], which could serve a deodorant function. In the present study, we chose mulberry leaves and Qin Pi to form a simple combined formula, to be tested against sweating in the feet. Copious sweating from the sole and digits of the feet favoured the foot as the test site. The Testing Device - SUDOSCAN The SUDOSCAN is a device approved by the US FDA for the quick evaluation of sweat gland function through galvanic skin responses. SUDOSCAN measures the ability of the sweat glands to release chloride ions in response to an electrochemical activation on the palms of the hands and the soles of the feet to be treated. Palms and soles have the highest sweat gland density. SUDOSCAN provides quantitative measures of the sweat conductance of the hands and feet (in units of microsiemens). These measures can be conveniently used to compare the results before and after topical treatment against sweating [5]. Sweat glands are innervated by small unmyelinated sympathetic C nerve fibers. SUDOSCAN uses low direct-current (DC) stimulation to extract chloride ions from sweat, which create a current when these electrically charged ions encounter specific electrodes. SUDOSCAN can thus measure the electrochemical skin conductance (ESC) of hands and feet through reverse iontophoresis. The pilot study In this study, we aimed to determine the efficacy of the antiperspirant herbal formula against sweating in the soles of the feet by using SUDOSCAN to measure sudomotor functional changes in healthy volunteers before and after immersing the feet in standard herbal baths.
The pilot study was designed as a self-controlled prospective study; the SUDOSCAN device was used to measure the changes in the sweating function of the sole before and after soaking in a standard herbal formula bath. As the study was a small pilot study assessing sweat control efficacy with a noninvasive measurement, the administration route was topical and the study herbs are safe and commonly used, we did not ask participants to provide written consent. However, verbal informed consent was obtained from all participants. We also recorded participants' identifying information such as subject number, name and contact number. The volunteers were healthy males or females aged over 18 years without known sweat dysfunction. SUDOSCAN consists of two sets of stainless-steel electrodes in contact with the palms of the hands and soles of the feet. A dedicated computer was responsible for recording and data management (Figure 1). After resting in the study room for 15 minutes, the volunteers were required to immerse their feet in body-temperature water and then in the standard herbal bath, as tabulated: During the test, the subject placed his or her hands and feet on the electrodes (Figure 1). Data appeared automatically on the computer screen. Statistical methods The changes in SUDOSCAN measures at each time point were assessed using the General Linear Model, the Chi-square test and repeated measures analysis of covariance (ANCOVA) with baseline parameters as the covariate. The sweat reduction before and after herbal immersion was calculated as the mean of the individual sweat reduction values. Differences among time points were tested by one-way ANOVA for continuous variables and by the χ2-test for categorical variables. All data analysis was performed using the Statistical Package for the Social Sciences (SPSS), version 22.0. Statistical significance was two-sided, and any difference with a p value less than 0.05 was considered significant. Results The testing environment (22°C) apparently had limited effects on sweating. The temperature of the water (37°C) and herbal solution (37°C) immersions was identical. Recruitment Seven healthy males and 13 healthy females with a mean age of 32.6 years were screened. Volunteers fulfilling the following inclusion criteria were enrolled: women and men aged 18 to 65 years. Seven volunteers, whose ESC measures were less than 60 µS or who showed an abnormal sweating response, were excluded. Thirteen subjects who met the study criteria were enrolled and entered the analysis (Table 2). Outcomes and estimation Results for quantitative variables are shown as means ± standard deviation. Repeated measurement analysis, adjusted with hand data as covariate, was used for the comparisons. A p value < 0.05 was regarded as statistically significant. ESCs of the feet were significantly decreased at various time points compared to their baselines in the treated group (p = 0.001, p = 0.003 and p < 0.001, respectively) (Table 1). We randomly selected 8 subjects as controls for comparison with the 13 treated subjects. The control subjects were not given water or herbal bathing; they served to rule out environmental effects during testing. The controls did not show a decline in ESC, reinforcing the effects of the herbal bath (Table 3). Subjects soaked in the herbal bath showed a greater and more prolonged drop in ESC (Figure 2).
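To make the outcome calculation concrete, the short sketch below computes the per-subject percentage ESC reduction and runs a simple paired comparison against baseline. The ESC values are invented, and the paired t-test is used here only as a minimal stand-in for the repeated measures ANCOVA actually applied in the study.

```python
# Illustrative calculation of percentage ESC reduction; values in µS are invented.
import numpy as np
from scipy import stats

esc_baseline = np.array([72.0, 68.5, 80.1, 75.3, 66.2, 70.4, 78.8,
                         74.0, 69.9, 71.5, 77.2, 73.6, 79.4])
esc_after = np.array([60.1, 58.0, 70.2, 63.9, 57.5, 59.8, 66.0,
                      63.2, 58.4, 61.0, 65.7, 62.1, 68.3])

reduction_pct = 100.0 * (esc_baseline - esc_after) / esc_baseline
print(f"mean sweat reduction: {reduction_pct.mean():.1f}%")
print(f"share of subjects with any reduction: {100.0 * np.mean(reduction_pct > 0):.1f}%")

t_stat, p_val = stats.ttest_rel(esc_after, esc_baseline)  # two-sided paired test
print(f"paired t-test: t = {t_stat:.2f}, p = {p_val:.4f}")
```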
In the treated group (n = 13), changes in response to A (water immersion at 37°C), B (herbal immersion at 37°C) and C (the aftermath of A and B) are shown in Figure 3. In the control group (n = 8), changes at various time points under the office environment (22°C) are also shown in Figure 3. No subject reported an allergic reaction during the course of the study. Discussion Aluminum chloride hexahydrate (ACH) is commonly used in cosmetic preparations as an antiperspirant. Many still consider it to be the most effective agent. However, irritation of the skin and damage to clothing remain its major disadvantages [1]. Extracts from Chinese herbs used as a topical antiperspirant would not only avoid the side effects caused by inorganic antiperspirants but also follow the current trend of using natural materials in the cosmetics industry. The antiperspirant effects of mulberry have been clearly recorded in "Shennong's Classic of Materia Medica". Esculin hydrate, an active antiperspirant recorded in traditional Chinese herbal medicine, could be as effective [6,7]. Fraxinus rhychophylla, another traditional Chinese herbal medicine, has been studied in the laboratory in an artificial skin model using a fluorescence analysis technique for comparison with aluminum polychloride with regard to antiperspirant effects. The results showed that the extracts from Fraxinus rhychophylla had similar effects to aluminum polychloride [2]. Industry standards require that antiperspirants achieve more than 20% sweat reduction in 50% of panelists before being accepted as reliable [8,9]. In this pilot study, we found that the average sweat reduction was 15.3%, but the contact duration was only 15 minutes. The pilot study did demonstrate a trend of sweat reduction. In conclusion, the pilot study on sweating control using SUDOSCAN evaluation demonstrated that soaking the feet in a herbal bath containing mulberry leaf and Fraxinus rhychophylla extracts effectively reduced sweating. Table 3. ESC measures of the two groups at each time point.
Distributed snow and rock temperature modelling in steep rock walls using Alpine3D In this study we modelled the influence of the spatially and temporally heterogeneous snow cover on the surface energy balance and thus on rock temperatures in two rugged, steep rock walls on the Gemsstock ridge in the central Swiss Alps. The heterogeneous snow depth distribution in the rock walls was introduced to the distributed, process-based energy balance model Alpine3D with a precipitation scaling method based on snow depth data measured by terrestrial laser scanning. The influence of the snow cover on rock temperatures was investigated by comparing a snow-covered model scenario (precipitation input provided by precipitation scaling) with a snow-free (zero precipitation input) one. Model uncertainties are discussed and evaluated at both the point and spatial scales against 22 near-surface rock temperature measurements and high-resolution snow depth data from winter terrestrial laser scans. In the rough rock walls, the heterogeneously distributed snow cover was moderately well reproduced by Alpine3D with mean absolute errors ranging between 0.31 and 0.81 m. However, snow cover duration was reproduced well and, consequently, near-surface rock temperatures were modelled convincingly. Uncertainties in rock temperature modelling were found to be around 1.6 °C. Errors in snow cover modelling and hence in rock temperature simulations are explained by inadequate snow settlement due to linear precipitation scaling, missing lateral heat fluxes in the rock, and by errors caused by interpolation of shortwave radiation, wind and air temperature into the rock walls. Mean annual near-surface rock temperature increases were both measured and modelled in the steep rock walls as a consequence of a thick, long-lasting snow cover. Rock temperatures were 1.3–2.5 °C higher in the shaded and sunny rock walls when comparing snow-covered with snow-free simulations. This helps to assess the potential error made in ground temperature modelling when neglecting snow in steep bedrock. Introduction In the European Alps, numerous rock fall events were observed in permafrost rock faces during the last decades (e.g. Fischer et al., 2012; Gruber et al., 2004b; Phillips et al., 2016b; Ravanel et al., 2010, 2013). Rock fall can be attributed to various triggering factors (Fischer et al., 2012; Krautblatter et al., 2013), including a fast reaction of rock faces to climate change expressed in rapid active layer thickening and permafrost degradation (e.g. Allen and Huggel, 2013; Deline et al., 2015; Gruber and Haeberli, 2007; Ravanel and Deline, 2011; Sass and Oberlechner, 2012). Rock wall instability is a risk to the safety of local communities and infrastructure in the densely populated Alps (Bommer et al., 2010). Measuring rock wall temperatures (e.g. Gruber et al., 2004a; Haberkorn et al., 2015a; Hasler et al., 2011; Magnin et al., 2015; PERMOS, 2013) and, in a further step, modelling the spatial permafrost distribution in steep rock walls is therefore of great importance. Numerical model studies simulating rock temperatures of idealized rock walls have been realised e.g. by Gruber et al. (2004a) and Noetzli et al. (2007). These studies assumed a lack of snow in steep rock exceeding slope angles of 50°, which is based on the general assumption that wind and gravitational transport (avalanching or sloughing) remove the snow from steep rock exceeding 50°-60° (e.g.
Blöschl and Kirnbauer, 1992; Gruber Schmid and Sardemann, 2003; Winstral et al., 2002). They therefore suggested that air temperature and solar radiation are sufficient to model rock surface temperatures in near-vertical, compact, homogeneous rock walls. Rock walls are, however, often variably inclined, heterogeneous, fractured and thus partly snow-covered (Haberkorn et al., 2015a; Hasler et al., 2011; Sommer et al., 2015). Besides three-dimensional (3d) subsurface heat flow and transient changes in steep bedrock thermal modelling (Noetzli et al., 2007), the strongly variable spatial and temporal rock surface boundary conditions therefore also need to be taken into account. The spatially variable snow cover is one of these driving factors. The influence of the snow cover on the rock thermal regime has recently been studied in steep bedrock (Haberkorn et al., 2015a,b; Hasler et al., 2011; Magnin et al., 2015). The highly variable spatial and temporal distribution of the snow cover strongly influences the ground thermal regime of steep rock faces (Haberkorn et al., 2015a,b; Magnin et al., 2015) due to the high surface albedo and low thermal conductivity of the snow cover, as well as energy consumption during snow melt (Bernhard et al., 1998; Keller and Gubler, 1993; Zhang, 2005). In gently inclined, blocky terrain, effective ground surface insulation from cold atmospheric conditions was observed and modelled for snow depths exceeding 0.6 to 0.8 m (Hanson and Hoelzle, 2004; Keller and Gubler, 1993; Luetschg et al., 2008). In contrast, Haberkorn et al. (2015a) found that snow depths exceeding 0.2 m were enough to have an insulating effect on steep, bare bedrock. Such amounts are likely to accumulate in steep, high rock walls with a certain degree of surface roughness. Indeed, a warming effect of the snow cover on mean annual ground surface temperature (MAGST) was observed by Haberkorn et al. (2015a) and Magnin et al. (2015) in shaded rock walls, whilst in moderately inclined (45°-70°) sun-exposed rock walls Hasler et al. (2011) suggest a reduction of MAGST of up to 3 °C compared to estimates in near-vertical, compact rock, due to snow. To capture the strong spatial variability of the local surface energy balance and consequently of the ground thermal regime in moderately inclined terrain (Gubler et al., 2011; Riseborough et al., 2008), as well as in steep, rough rock walls (Haberkorn et al., 2015a; Hasler et al., 2011), it is necessary to account for the complex micro-topography and its influence on local shading effects, lateral heat fluxes at the rock surface caused by pronounced temperature gradients, small-scale snow distribution patterns and rock temperatures. The 1d modelling approach used by Haberkorn et al. (2015b) to investigate the influence of the snow cover on the rock thermal regime is therefore not sufficient, although the ability of the 1d SNOWPACK model (Lehning et al., 2002a,b; Luetschg et al., 2003; Wever et al., 2015) to simulate the effect of a snow cover on rock temperatures could clearly be demonstrated. High-resolution and spatially distributed physics-based simulations of land surface processes are needed. We therefore present a spatially distributed model study of the influence of the snow cover on the surface energy balance and consequently on near-surface rock temperatures (NSRT) in steep north-west and south-east oriented rock walls using the physics-based 3d atmospheric and surface process model Alpine3D (Lehning et al., 2006).
The distribution of the spatially and temporally heterogeneous snow cover in the steep terrain (up to 85°) was provided to the model using a precipitation scaling approach. This was based on a combination of snow depth measurements from the on-site flat field automatic weather station (AWS) and high-resolution (0.2 m) snow depth distribution data obtained using terrestrial laser scanning (TLS). The challenge of integrating representative precipitation input (e.g. Imhof et al., 2000; Fiddes et al., 2015; Stocker-Mittaz et al., 2002) in the rock walls and its redistribution by wind (Mott and Lehning, 2010), as well as gravitational transport (Bernhardt and Schulz, 2010), was thus accounted for. Model performance for simulating snow depth distribution and consequently the influence on rock temperatures was tested against a dense network of validation measurements of snow depth and NSRTs at both the point and the spatial scale. After quantifying model uncertainties, a sensitivity study was performed in order to assess the effects of the snow cover on the rock thermal regime. High-resolution (0.2 m) simulations were carried out, either providing snow cover distribution to the model (by precipitation scaling) or fully neglecting the presence of a snow cover in the rock walls. Thus the potential error induced by neglecting the snow cover in steep rock face thermal modelling for slope angles >50° can be estimated. This is necessary, since it has in general been assumed that wind and gravitational transport remove the snow from steep rock in slopes >50-60° (e.g. Blöschl and Kirnbauer, 1992; Gruber Schmid and Sardemann, 2003; Winstral et al., 2002) and rock temperatures were often modelled without snow for idealized rock walls >50° (e.g. Gruber et al., 2004a; Noetzli et al., 2007). Study site The Gemsstock mountain ridge (46° 36' 7.74" N; 8° 36' 41.98" E; 2961 m a.s.l.) is located on the main divide of the Western Alps, central Switzerland (Fig. 1). Precipitation at Gemsstock is affected by both northerly and southerly airflows, resulting in enhanced orographic precipitation (Haberkorn et al., 2015a). The rocky ridge consists of Gotthard paragneiss and granodiorite, with veins of quartz. The site is at the lower fringe of mountain permafrost. Permafrost distribution is patchy in the north-west facing rock wall, whereas there is no permafrost in the south-east facing wall of the ridge (PERMOS, 2013). This study focuses on a specific area on the north-west and south-east facing rocky flanks of the ridge, which for simplicity are henceforth referred to as the N and S slopes. The 40 m high slopes (2890-2930 m a.s.l.) are 40° to 70° steep, with vertical to overhanging (>90°) sections (Fig. 1a). The N facing scarp slope is intersected by a series of parallel joints dipping south-eastwards at 70° (Phillips et al., 2016a). These joints form 0.3 to 3 m wide horizontal ledges within the N facing rock wall and alternate with steep to vertical parts. In contrast, the S facing dip slope has a rather smooth rock surface. We investigate the 2-year study period between 1 September 2012 and 31 August 2014. Methods Applying the Alpine3D model chain for spatially distributed steep rock wall thermal modelling requires various input data and computing steps. A brief synopsis of the methods used in this study is shown in Fig. 2. Based on Fig. 2, first the distributed numerical model used in this study is introduced.
Then the data and model settings required to drive the model are specified, followed by a description of the computation of the precipitation input, which is essential in order to introduce varying snow depths to the extremely steep terrain. Finally, the validation data sets used to evaluate the model performance are introduced. Distributed energy balance modelling The fully distributed physics-based surface process model Alpine3D (Lehning et al., 2006; Kuonen et al., 2010) was used to simulate the influence of the heterogeneously distributed snow cover on the thermal regime of the Gemsstock rock ridge. To do this it is essential to model the surface energy balance as shown in Eq. (1), which is determined by the exchange of energy between the atmosphere and the surface. The energy flux Q_snow available for warming and melting or cooling and freezing of the snowpack or the ground is calculated in Alpine3D as the sum of all energy balance components [W m-2] at the respective surface (Armstrong and Brun, 2008):
Q_snow = Q_net + Q_sensible + Q_latent + Q_rain + Q_ground (1)
where Q_net is the sum of the net fluxes of short- and longwave radiation, Q_sensible and Q_latent are the turbulent fluxes of sensible and latent heat through the atmosphere, Q_rain is the rain energy flux and Q_ground is the 1d conduction of heat into the ground. In Alpine3D energy fluxes are considered positive when directed towards the snowpack surface (energy gain). Meteorological data, a digital elevation model (DEM) and a land-use model are required to run Alpine3D (Fig. 2). In the setup used here Alpine3D consists of a 3d radiation model, which is based on the view factor approach to calculate short- and longwave radiation in complex terrain, including shortwave scattering and longwave emission from the terrain (Helbig et al., 2009). The 3d atmospheric processes are coupled to the 1d energy balance model SNOWPACK (Wever et al., 2014). The latter is based on the assumption that there is no lateral exchange in these media. SNOWPACK simulates the temporal evolution of the vertical transport of mass and energy, as well as phase-change processes for a variety of layers within the seasonal snowpack and in the ground for each single grid cell (Luetschg et al., 2003, 2008; Wever et al., 2015). A bulk Monin-Obukhov formulation is used to parameterize the latent and sensible heat fluxes at the surface. The water flow in the snow and rock is solved using a simple bucket-type approach, which is suitable for daily and seasonal time-scales (Wever et al., 2014). The 3d snow drift module (Mott and Lehning, 2010) was not included in the simulations, although snow redistribution due to wind was observed at Gemsstock (Haberkorn et al., 2015a), because there is currently no model that convincingly reproduces 3d wind fields over extremely steep, heterogeneous rock walls. In addition, the mass-conserving computation of gravitational transport and deposition of snow (Bernhardt and Schulz, 2010) is not included in the simulations, although sloughing and avalanching were observed in the field (Haberkorn et al., 2015a) and have been suggested as the main process involved in the redistribution of snow in steep rock walls by Sommer et al. (2015). To account for the effects of snow redistribution on the snow depth distribution, we used measured snow depth data from a TLS campaign to scale precipitation grids (Sect. 3.1.3). Model setup The model was driven by meteorological data measured by the on-site AWS Gemsstock (Fig. 1a, 2869 m a.s.l.).
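Before continuing with the model setup, a minimal numerical illustration of the energy balance sum in Eq. (1) is given below; the flux values are invented and follow the Alpine3D sign convention stated above (fluxes positive when directed towards the surface).

```python
# Sketch of the surface energy balance sum in Eq. (1); all flux values [W m-2] are invented.
def energy_flux_qsnow(q_net, q_sensible, q_latent, q_rain, q_ground):
    """Energy available for warming/melting (positive) or cooling/freezing (negative)."""
    return q_net + q_sensible + q_latent + q_rain + q_ground

# Example: a shaded, snow-free winter grid cell losing energy by longwave emission,
# partly compensated by sensible heat from the warmer air and by ground heat release.
q_snow = energy_flux_qsnow(q_net=-45.0, q_sensible=30.0, q_latent=-10.0,
                           q_rain=0.0, q_ground=15.0)
print(f"Q_snow = {q_snow:.1f} W m-2")  # negative value -> effective surface energy loss
```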
Air temperature, relative humidity, wind speed and direction, as well as incoming short- and longwave radiation data were pre-processed, spatially interpolated and parameterized with the MeteoIO library (Bavay and Egger, 2014). Precipitation was provided to the model as described in Sect. 3.1.3. Gaps in meteorological data were corrected according to Haberkorn et al. (2015b). The DEM is derived from high-resolution TLS, carried out at Gemsstock in the snow-free N and S facing rock walls in summer using a RIEGL VZ6000 scanner at a grid resolution of 0.2 m. The physical properties of the granodiorite bedrock were based on Cermák and Rybach (1982): a rock density of 2600 kg m-3, a specific heat capacity of 1000 J kg-1 K-1, and a thermal conductivity of 2.8 W m-1 K-1 for S facing grid cells and 1.9 W m-1 K-1 for N facing grid cells, as discussed in Haberkorn et al. (2015b). The rock albedo is assumed to be 0.15 and an aerodynamic roughness length of 0.002 m over snow is used for the simulations. Although the geothermal heat flux is most likely negligible in the narrow, steep and complex Gemsstock ridge due to strong topographic (Kohl, 1999) and 3d thermal effects (Noetzli et al., 2007), a constant upward ground heat flux had to be applied. Q_ground is assumed to be 0.001 W m-2 at 20 m depth to ensure a marginal impact of the lower boundary condition on the analysed rock thermal regime close to the surface. All simulations were run in parallel mode on the same computer cluster as a 32-core process, requiring around 15 days for a two-year simulation. Simulations were also performed for coarser resolutions (1 m, 5 m) to analyse the loss of model accuracy for lower computational costs. Precipitation input for Alpine3D Terrestrial laser scanning (TLS) Snow depths acquired from TLS were used as input data for the precipitation scaling approach. Snow depth distribution was measured at different times in the winters 2012-2013 and 2013-2014 using a RIEGL VZ6000 long-range laser scanner. A total of 4 high-resolution scans were carried out, i.e. two per winter. The high spatial and temporal variability of the snow depth distribution in the rock walls was determined by comparing the data to those obtained in snow-free summer scans of the rock walls. The shortest distance from each terrain point to the point cloud at the snow surface was calculated with a point resolution of 0.2 m (Haberkorn et al., 2015a,b). The snow depth determined perpendicular to the surface was both more representative regarding the impact on ground temperatures (Haberkorn et al., 2015b) and more accurate than conventional vertical snow depths in extremely steep terrain (Sommer et al., 2015). Snow depth gaps in the laser scans result from blind areas behind ridges or rocky outcrops. The measurement error made using TLS for snow depth measurements was found to be ±0.08 m (Haberkorn et al., 2015b) and is therefore similar to other observations in steep rock (Sommer et al., 2015). Precipitation scaling To model the snow cover in steep rock walls, the high-resolution, spatially explicit snow depth distribution data provided by TLS were used. A precipitation scaling algorithm was used to drive the Alpine3D model, which only uses precipitation as input data. As this was not available for Gemsstock, precipitation was first calculated from the snow depth measured at the on-site AWS using a stand-alone SNOWPACK simulation.
By using the snow depth driven mode of the SNOWPACK model, the snow depth measurements were used to determine the timing and amount of snowfall by interpreting increases in snow depth as fresh snowfall. According to Lehning et al. (1999) and Wever et al. (2015), SNOWPACK converts snowfall to precipitation while calculating both snow settlement and snow density based on a statistical model. To complete the resulting precipitation series, summer liquid precipitation was used from the nearby MeteoSwiss AWS Gütsch (2287 m a.s.l., 6 km north of Gemsstock; Haberkorn et al., 2015b). Second, for each grid cell, scaling factors were calculated based on the ratio between the measured snow depth at the AWS and the snow depth of each grid cell measured by TLS at the date of the TLS campaign. These scaling factors were then used to scale the two-year precipitation time series for each grid cell of the DEM. We refer to this method as precipitation scaling, which provides grids of spatially distributed precipitation amounts for Alpine3D input. Data gaps in the TLS lead to data gaps in the precipitation scaling grid, resulting in erroneously modelled snow depths and rock temperatures at these locations. For the analysis of the Alpine3D grid output those grid cells have not been used. Third, model runs were carried out using scaled precipitation from each of the four TLS campaigns. The modelled snow depth and NSRT data coincided best with validation data when using scaled precipitation based on the TLS data obtained on 19 December 2012. Henceforth, the modelled results analysed and discussed here are only based on this TLS data set. The use of an early winter TLS is preferred, since the early winter snow depth distribution best represents winter snowfall events, whereas TLS data obtained in spring already contain ablation processes. Sensitivity study A sensitivity study is performed in order to assess the bias made when neglecting snow in thermal modelling of steep rock walls, which has often been done for ideal, compact rock walls with slope angles >50° (e.g. Fiddes et al., 2015; Gruber et al., 2004a; Noetzli et al., 2007). The sensitivity study comprises a rock temperature comparison between a model run with snow (precipitation input from precipitation scaling) and a model run without snow. For the model run without snow, precipitation input was forced to be zero. Alpine3D simulations were thus carried out for two contrasting scenarios in the rock walls (Fig. 2): one accounting for snow accumulation (henceforth referred to as the 'snow-covered' scenario) and one neglecting snow (henceforth referred to as the 'snow-free' scenario). 3 Model validation Uncertainties in modelling the snow depth distribution and the near-surface rock thermal regime in steep rock were evaluated against independent validation measurements. Near-surface rock temperature (NSRT) data The spatially variable thermal regime of the rock slopes was studied using a two-year time series of near-surface rock temperatures. NSRTs were measured in 0.1 m deep boreholes using Maxim iButtons® DS1922L (Maxim Integrated, 2013) temperature loggers. After calibration in an ice-water mixture, instrument accuracy was ±0.25 °C at 0 °C (Haberkorn et al., 2015b). Thirty of these temperature loggers were distributed in a linear layout over the N and S facing rock walls (Fig. 1) with a vertical spacing of approximately 3 m. A detailed statistical point-to-point analysis between modelled and measured NSRTs has been performed at 22 of the 30 NSRT locations with a temporal resolution of two hours.
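The precipitation scaling workflow described above can be sketched schematically as follows. Grid sizes, snow depths and the precipitation series are invented, and the direction of the scaling factor (TLS snow depth per cell divided by the AWS snow depth at the scan date) is our reading of the ratio described in the text; this is not the actual Alpine3D input pipeline.

```python
# Schematic sketch of per-cell precipitation scaling; all values and shapes are invented.
import numpy as np

rng = np.random.default_rng(1)
hs_tls = rng.uniform(0.0, 2.5, size=(60, 80))   # TLS snow depth per grid cell [m]
hs_aws_at_scan = 1.2                            # snow depth at the AWS on the scan date [m]
precip_aws = rng.gamma(0.4, 2.0, size=730)      # daily precipitation series at the AWS [mm]

scaling = hs_tls / hs_aws_at_scan               # dimensionless scaling factor per cell

# Distributed precipitation input: one 2-D grid per time step, shape (time, y, x)
precip_grids = precip_aws[:, None, None] * scaling[None, :, :]

# Cells without TLS data (blind areas behind ridges) would be masked here and excluded
# from the analysis of the grid output, as described in the text.
print(precip_grids.shape)
```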
Eleven of these locations are N facing and 11 are S facing (Appendix: Table 1A). Data from 8 locations were disregarded due to data gaps in the TLS, as discussed in Section 3.1.2. All 22 points were used to evaluate the spatial model performance for each individual rock wall (Section 4.4). Therefore all measured or modelled NSRT data were averaged within the slopes depending on whether the grid cells are N or S facing. In addition to the spatial analysis, an absolute point analysis between measured and modelled NSRT evolutions has been carried out for 4 loggers (Section 4.3). These 4 NSRT loggers were chosen in order to represent snow-rich and snow-free locations and thus contrasting NSRT conditions in the N and S facing rock walls. Logger N3 is located in a vertical sector near the top of the N facing rock wall, whereas logger N7 is located in vertical rock 12 m lower at the foot of this rock wall sector, 0.1 m above a ledge. On the S side of the ridge, logger R2 is in 58° steep rock 15 m above a ledge, whereas logger S9 is located in 70° steep terrain close to the gently inclined foot of a rock outcrop on the S facing rock wall (Table 1). Pronounced daily NSRT amplitudes indicate that N3 and R2 were generally snow-free (Figs. 7d, e). Although loggers N7 and S9 are located in steep rock, wide ledges below allow the accumulation of a thick snow cover in winter, causing strong NSRT damping during the snow-covered period, as well as a zero curtain in spring (Figs. 7b, c). Snow depth data The rock thermal regime strongly depends on the timing, depth and duration of the snow cover. An accurately modelled snowpack is essential for the correct modelling of the rock thermal regime. The modelled snowpack was therefore validated against measured snow depth data from three independent TLS campaigns, which were not used for precipitation scaling. This was done at the rock wall-scale for all grid cells of the entire N and S facing slopes on the dates of the three TLS campaigns, as well as at the point-scale for grid cells corresponding to the 22 NSRT validation measurement locations. As for the 4 NSRT loggers' data presented in detail, the modelled 2-year snow depth evolution is only presented for the same 4 grid cells. Results In this section only measured and modelled results are presented, while model uncertainties will be discussed in Section 5. First the measured and modelled snow cover accumulating in the rock walls is described at both the point- and the rock wall-scale. Modelled snow cover variability The evaluation of the snow depth distribution modelled using Alpine3D (Figs. 4d-f) against data from three independent TLS campaigns revealed a reasonably well reproduced snow depth distribution with r2 values of at least 0.52. Modelled surface energy balance at selected points The modulating influence of the snow cover on the rock thermal regime close to the surface (0.1 m depth) can be assessed by comparing the modelled surface energy balance of the snow-free scenario to that of the snow-covered scenario. This was done at the locations of one sun-exposed (S9) and one shaded (N7) NSRT logger. In Fig. 6, modelled monthly means of each individual energy flux are shown. The terms of the energy balance were defined in Section 3.1.1. Snow-free scenario In the absence of a snow cover, the modelled surface energy balance was strongly influenced by local topographic effects (e.g. steep rock, aspect). At the steep, shaded point N7 (Fig. 6b)
almost no solar radiation was received and energy was lost by longwave radiation emission from October to February. The resulting net radiation flux Q_net was therefore negative. Furthermore, the latent heat flux Q_latent was negative during the entire 2-year period. To compensate for the negative fluxes, energy was transferred towards the surface by convection of sensible heat Q_sensible from the warmer air to the colder rock surface, along with ground heat release in fall and winter. The net flux resulted in effective ground heat loss during the months with low solar elevation (November-February). Q_rain was negligibly small compared with the other fluxes and will not be discussed further here. Q_net increased uniformly from negative values in winter to positive values in summer. Between March and September/October more radiation was absorbed than reflected and emitted, causing a positive Q_net, which was mainly compensated by Q_sensible. Q_ground was positive (i.e. directed into the rock) during spring and summer, resulting in effective ground warming. The evolution of the energy transfer terms at S9 (Fig. 6d) was similar to N7. Only Q_net was positive throughout the whole year in the sunny slope, displaying a sinusoidal cycle with minimum values in winter and maxima in summer. The strong Q_net input in winter is caused by stronger direct solar radiation input on steep S facing slopes due to the low solar elevation. Snow-covered scenario The accumulation of a thick, long-lasting snow cover modulated the dominant driving factors of the surface energy balance considerably. Here too, the monthly evolution of the energy fluxes at the sun-exposed location S9 (Fig. 6c) was similar to that at the shaded location N7 (Fig. 6a), although variations in the magnitude of the fluxes were observed. The energy loss by Q_net was mainly compensated by the sensible heat flux from the warmer air towards the colder snow surface during the months with low solar elevation (November-January). All other energy transfer terms were small compared to the snow-free scenario. The small Q_ground is caused by the insulating effect of the snowpack, which prevented effective heat emission in winter. Between March/April and September more radiation was absorbed than reflected and emitted, causing a positive Q_net. In contrast to the snow-free scenario, in which all energy was used to warm the ground, under snow-covered conditions any energy surplus Q_snow was used for snow melt between March/April and July. The energy surplus first resulted in a heating of the snowpack to 0 °C followed by melt, which corresponded to the zero curtain period of measured and modelled NSRTs (Figs. 7b, c). Thus, the snow cover prevented ground warming between March and July, with NSRTs remaining around 0 °C below the snowpack. Q_ground was negligible during the snowmelt period and only increased after the snow ablation in July/August and September. Measured and modelled NSRTs are now compared at each individual location. First the NSRT evolution at snow-free locations is described, and then the modulating effect of the snow cover on NSRT is emphasized. NSRT variability at snow-free locations At NSRT locations lacking snow, measured NSRTs closely followed air temperature in the shaded N face (N3, Fig. 7d), while pronounced daily NSRT amplitudes of up to 10 °C could be observed in the sun-exposed rock wall (R2, Fig. 7e) during the whole investigation period. At N3 and R2 the modelled NSRT evolution was in good accordance with the measured NSRT.
NSRT variability at snow-covered locations Several locations were covered by snow for around 7.5 to 9 months of the year in both the N and the S facing rock walls. After the onset of the continuous snow cover in October/November the rock surface was partly decoupled from atmospheric influences. In the N facing slope (N7, Fig. 7b) measured NSRT oscillations were damped but continuously decreased down to -4 °C, thus clearly showing the occurrence of permafrost at this location, while in the S facing slope (S9, Fig. 7c) measured NSRT remained close to 0 °C. The timing of snow cover onset and disappearance was well reproduced by the model (Table 1). At the locations accumulating a thick snow cover, the temporal evolution of modelled NSRTs is in good accordance with the measured ones in both the shaded (N7) and the sun-exposed (S9) slopes. Thermal effect of snow The previously discussed modulating influence of the snow cover on the surface energy balance and its effects on the ground thermal regime can be emphasized by comparing NSRTs at the snow-covered N7 and S9 to the modelled snow-free scenario at these locations (blue lines in Figs. 7b, c). Using the snow-free scenario, modelled NSRT oscillations at N7 and S9 were pronounced during the whole study period, indicating a permanent energy exchange between the atmosphere and the rock. MANSRTs were -2.8/-1.9 °C at the shaded N7 and 0.4/1.3 °C at the sun-exposed S9. This contradicts the NSRT measurements at these locations (Section 4.3.2). Measurements reveal a permanent insulation of the rock by a continuous snowpack between October/November and June/July. MANSRT variability in the entire rock walls A comprehensive analysis of all 22 NSRT locations was used to evaluate the spatial performance of Alpine3D in modelling the potential effect of snow on NSRTs. Both the measured and modelled NSRT data of all 11 N facing locations and of all 11 S facing ones were used to calculate means of MANSRT, MBE and MAE over the individual N and S facing rock walls (Table 3). Snow-covered scenario The topography-driven difference of the measured mean MANSRT between the entire N and the entire S facing rock wall was 3.6/3.2 °C. Such a small deviation is reasonable when taking into account that the rock walls face NW and SE rather than N and S (Fig. 1a, Appendix Table 1A), as well as considering the accumulation of a thick snow cover at 7 of 11 locations in both the N and S slopes. At the corresponding 22 grid cells, the modelled mean MANSRT difference for the snow-covered scenario across the entire N and S facing slopes is 2.6/2.3 °C and thus around 1.0 °C lower than the measured values (Table 3). This is mainly caused by too low modelled NSRTs and thus MANSRTs, especially in the sun-exposed rock wall during snow-free periods (Fig. 9). Snow-free scenario In the absence of a snow cover, the modelled MANSRT variability was much lower within the individual rock walls (Fig. 9). Assuming the modelled snow-free scenario in the entire rock walls resulted in mean MANSRTs of -3.3/-2.3 °C within the N and of 0.1/0.8 °C within the S facing slopes (Table 3). In correspondence with the single NSRT locations (Section 4.3.3), the mean MANSRT of the snow-free simulations confirmed too low modelled MANSRTs when compared with both observations and snow-covered simulations (Fig. 9). Modelled spatial distribution of MANSRT variability The influence of the snow cover on rock surface temperatures and the previously discussed rock temperature results are summarized in Fig. 10.
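As a minimal illustration of the point-scale statistics used in this comparison, the sketch below computes MANSRT as the annual mean of a 2-hourly NSRT series, together with the MBE and MAE between modelled and measured values; both series are synthetic.

```python
# Synthetic example of MANSRT, MBE and MAE from a 2-hourly NSRT series.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
idx = pd.date_range("2012-09-01", "2014-08-31 22:00", freq="2H")
measured = pd.Series(5.0 * np.sin(2.0 * np.pi * idx.dayofyear / 365.0) - 1.0, index=idx)
modelled = measured + rng.normal(0.0, 1.5, len(idx))   # synthetic model output with noise

mansrt = measured.groupby(measured.index.year).mean()  # mean annual NSRT per calendar year
mbe = (modelled - measured).mean()                     # mean bias error [°C]
mae = (modelled - measured).abs().mean()               # mean absolute error [°C]

print(mansrt.round(2).to_dict())
print(f"MBE = {mbe:.2f} °C, MAE = {mae:.2f} °C")
```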
In contrast to the snow-free scenario, the accumulation of a heterogeneously distributed snow cover strongly changes the conditions at the rock surface and thus rock temperatures. In the snow-covered scenario, MANSRT variability is pronounced in steep rock walls depending on the accumulation of a continuous snow cover, on snow depth and on snow cover duration. The snow depth distribution varies strongly due to the complex micro-topography in the rock walls, with rock portions accumulating thick snow in close vicinity to rock portions lacking snow. MANSRTs were highest at the foot of both rock walls and gradually decreased from flat to steeper areas due to both the decrease in snow depth and the low insolation in the N slope at locations without snow. MANSRTs at locations shadowed by rock outcrops or in rock dihedrals were colder compared to their surrounding areas (arrows in Figs. 10c, d). The influence of the snow on rock surface temperatures is emphasized by modelled MANSRTs, averaged over the individual N and S facing slopes, that were 2.5/1.8 °C (N) and 2.3/1.3 °C (S) higher for snow-covered than for snow-free conditions. Influence of grid resolution The Alpine3D model performance was tested at different spatial scales (0.2 m, 1 m, 5 m) to analyse the loss of model accuracy for lower computational effort. At locations with a rough micro-topography the loss of information was important due to the aggregation of the initial DEM (0.2 m resolution) to 1 m and 5 m. Slope angles were only sampled at <70° (1 m resolution) and <60° (5 m resolution), whereas in reality the rock was nearly vertical. Aspects were displaced by up to 90° (Appendix Table 1A). This reduces the accuracy of the precipitation scaling and of the modelled energy balance components (e.g. net radiation, turbulent fluxes). Shortwave incoming radiation was inadequately modelled at locations with strongly varying micro-topography when increasing the grid cell size. However, on a monthly basis, errors in net radiation due to a coarser resolution were smoothed. In addition to smoothed slope angles, 2 or 3 NSRT locations are often merged together in a single grid cell at 5 m resolution. The strongly varying micro-topography and consequently also the snow depth distribution are thus inadequately represented at the 5 m scale. Considering NSRT simulations at each of the 22 logger locations separately revealed that NSRTs modelled at 0.2 and 1 m resolution are in good accordance with measurements, while at 5 m resolution NSRTs are poorly modelled at most locations due to too strong aggregation and thus the over- or underestimation of snow in both the N and the S facing slopes. The model performance at the different grid resolutions is summarized in Table 4. In this study, discrepancies in modelling absolute snow depths in steep rock walls are evident (Figs. 4, 5). This is a consequence of the linear precipitation scaling algorithm used here. Snow settlement is calculated for snow depths at the AWS location and is then linearly scaled into the rock walls, but snow depths and the meteorological forcing obviously differ between the flat field AWS and the rock walls. This causes the snowpack to settle differently and in a non-linear manner. Differences in settling calculated at the AWS and for the grid points in the Alpine3D model domain therefore cause absolute snow depth errors. However, on the basis of measured NSRTs (Figs. 7b, c) it is evident that the snow cover duration (Table 1) is well reproduced by the model.
The realistically modelled snow cover duration over the winter was found to be more important for modelling the ground thermal regime than accurately modelled absolute snow depths at certain points in time. This agrees with the findings of Marmy et al. (2013) and Fiddes et al. (2015). Although measured and modelled snow depth differences were >1.0 m (Figs. 4, 5), these snow depth differences do not affect the rock thermal regime since steep, bare rock is already decoupled from atmospheric influences at snow depths >0.2 m (Haberkorn et al., 2015a). Amongst others, Luetschg et al. (2008) and Zhang (2005) describe this insulating effect of the snow cover. Modelled NSRTs were somewhat too low (Table 1) at locations lacking snow and during the snow-free period. A likely explanation is that both air temperature and wind speeds, measured at the flat field AWS, may be poorly representative of the prevailing conditions in the rock walls and therefore turbulent flux simulations are biased. In addition, the underestimation of snow melt may also be partly explained by the 1d snow module, which does not account for lateral heat flow between adjacent snow-free and snow-covered rock portions, as well as by micro-meteorological processes due to unevenly distributed heating during the ablation period, which in reality accelerate snow melt. Nevertheless, the model verification showed that the overall performance of Alpine3D in modelling snow depths and consequently rock temperatures in steep slopes in the current setup provides useful improvements compared to the common assumption of a lack of snow in thermal modelling of idealized rock walls exceeding 50° (e.g. Fiddes et al., 2015; Gruber et al., 2004a; Noetzli et al., 2007). Further, we found that the insulation by snow was too strong in the simulations. Modelled NSRT and consequently MANSRT were therefore positively biased during the snow-covered period in the steep, rough N facing slope, and thus measured negative NSRTs could not be reproduced (Fig. 7b). This has two possible explanations: (i) the snow thermal conductivity is too low in the model, and/or (ii) lateral heat fluxes exist due to the strong thermal interaction of micro-topography and micro-climate between snow-covered and snow-free rock portions, which lead to stronger cooling below snow-covered pixels than simulated with the 1d model. While assuming predominantly 1d vertical heat conduction in the snow and ground, a part of the energy balance and thus the complex lateral heat flow occurring at the rock surface, as well as in steep, narrow ridges, is poorly described or missing (Noetzli et al., 2007). Effective ground heat loss in autumn 2013-2014 was observed and modelled at exposed locations due to an initially thin snow cover, but a heat exchange between adjacent locations covered with thick snow was not reproducible by the model, although it was measured (Haberkorn et al., 2015a). In contrast, modelled and measured NSRTs in the homogeneous S facing slope supported the validity of the 1d heat conduction assumption at snow-covered locations, since here a continuous, smooth snowpack was an effective barrier to heat loss from the ground to the air (Fig. 7c). Finally, difficulties in partitioning the measured incoming shortwave radiation into direct and diffuse components, particularly for low sun angles, may explain the stronger modelled net radiation for snow-free conditions in the shaded (Fig. 6b) than in the sun-exposed slope (Fig. 6d), which is amplified by differences in slope and aspect between the model domain and reality (Appendix Table 1A).
Meteorological conditions and topographic properties like slope angle, aspect, surface roughness (Gruber et al., 2004b; Noetzli et al., 2007) and local shading effects (Mott et al., 2011) control the surface energy balance and its annual variations in rock wall sectors lacking snow. Changes of local conditions at the rock surface due to the accumulation of a snow cover modify the importance of the factors influencing the ground energy balance (Hoelzle et al., 2001). This study emphasizes the need to account for the strongly varying snow cover in thermal modelling of steep, fractured, complex rock walls. Alpine3D was used to simulate rock surface temperatures for both a snow-covered (precipitation scaling) and a snow-free scenario (zero precipitation input), in order to estimate the error introduced by neglecting snow in steep bedrock thermal modelling. The results are summarized in Fig. 10, where the comparison of snow-free and snow-covered simulations shows a prominent warming effect of the snowpack on MANSRT over the entire N and S facing rock walls. These model results are supported by measured NSRT data and model predictions at both the point- (Table 1) and rock wall-scale (Table 3), as well as by previous observations reported by Haberkorn et al. (2015a). Modelled MANSRT differences between snow-covered and snow-free conditions were due to the insulation of the rock by a continuous snowpack, despite the strong solar insolation in spring and early summer (Fig. 6). Under snow-free conditions the excessive radiation input in early summer cannot compensate for the effective ground heat loss in winter. The modelled MANSRT increase of 1.3-2.5 °C found for both snow-covered N and S facing steep rock walls compared to snow-free simulations (Figs. 10e, f) is of the same order of magnitude as the cooling or warming effect of snow on mean annual ground surface temperatures modelled by Pogliotti (2011). However, Pogliotti (2011) suggested that a warming effect on mean annual ground surface temperatures can only occur on gentle slopes, while cooling can occur everywhere and also in conditions of a nearly perennial thin snow cover. The latter is doubted, since our observations show that thin snow melts fast at elevations around 3000 m a.s.l., especially on steep S faces with strong insolation. In shaded slopes the increased MANSRT caused by thick snow confirms the findings of Magnin et al. (2015). In contrast, in sunny rock walls both measurements and model results at the point and spatial scale (Tables 1, 3) challenge the hypotheses presented by Magnin et al. (2015) and Hasler et al. (2011), who supposed a cooling effect of a snow cover due to the shielding of the rock surface from radiation influences during the months with most intense insolation. Discrepancies with our observations may have several reasons: (i) these authors estimated snow depths qualitatively rather than quantitatively; (ii) they adopt the widespread theory of an insulating snow cover with depths exceeding 0.6 m for blocky terrain (Hanson and Hoelzle, 2004; Keller and Gubler, 1993; Luetschg et al., 2008), while Haberkorn et al. (2015a) found an insulating effect on NSRT at smooth rock surfaces already at snow depths of around 0.2 m. In this study it has been proven that both net radiation and the snow cover are the key factors driving ground temperatures and determine whether permafrost is present or not in steep, rough rock walls, which was already proposed for moderately inclined terrain by Hoelzle et al. (2001).
In steep S facing mountain ridges up to 3000 m a.s.l., permafrost is most likely absent independent of the evolution of a thick snow cover, as shown in Figs. 10b and d. In contrast, in steep, rugged N facing rock walls the accumulation of a thick snow cover prevents a continuous permafrost distribution (Fig. 10c), while permafrost would most likely be present in areas without snow or with only thin snow (Fig. 10a). These results confirm recent two-dimensional numerical simulations made for east/north-east facing Scandinavian rock walls by Myhra et al. (2015), who found that the size of snow-free rock portions is crucial for warming or cooling a rock wall. In addition, these authors show that the existence of permafrost in steep bedrock varies strongly depending on the thickness and extension of an insulating snow cover, which can lead to permafrost temperature increase and taliks in steep slopes. We therefore suggest that in recent permafrost distribution assessments in the European Alps based on energy balance (Fiddes et al., 2015) or statistical modelling (Boeckli et al., 2012a,b), mean annual rock surface temperatures were possibly modelled too low by around 2 °C in steep bedrock as a result of neglecting snow. Scale mismatches in distributed permafrost modelling often arise when validating model results based on grids of tens to hundreds of metres against point measurements (e.g. Gubler et al., 2011; Gupta et al., 2005; Schlögl et al., 2016). Here, a point and spatial model validation of NSRTs and snow depths was performed at different grid cell sizes (0.2 m, 1 m, 5 m; Table 4). In both the N and the S facing rock walls, the point and spatial validation indicate that a resolution of 1 m is sufficient to accurately model the snow cover and ground surface temperatures in steep, rugged rock faces. The decrease in computational time when reducing the grid resolution from 0.2 to 1 m is significant (25 times lower). Additionally, a DEM resolution of 1 m is considered to be precise enough to detect ledges within the rock face, which are essential for snow accumulation in steep rock (Haberkorn et al., 2015a; Sommer et al., 2015). At a resolution of 5 m, the loss of topographic as well as accurate snow depth information results in an inadequately modelled rock thermal regime. Model runs at coarser spatial scales are thus assumed to be unsuitable for modelling temperatures in complex steep rock walls, such as the Gemsstock ridge. Variations of surface processes due to micro-topographic inhomogeneity occur at small scales, providing the motivation for high-resolution numerical modelling in complex topography in order to establish a basis for proper validation of grid-based model results. Conclusions The potential to model the strongly heterogeneous snow cover and its influence on the rock thermal regime in two rugged, steep mountain rock walls has been studied at the Gemsstock ridge (central Swiss Alps) over a two-year period. The results were obtained using the spatially distributed physics-based model Alpine3D in combination with a precipitation scaling approach. In the rough rock walls, the heterogeneously distributed snow cover was moderately well reproduced by Alpine3D, with absolute snow depth differences varying between +1.5 and -1.0 m and an MAE between 0.47 and 0.77 m averaged over the entire rock walls. However, the snow cover duration was well reproduced by the model and proved to be most important for realistic NSRT modelling.
Rock temperatures are convincingly modelled, although modelled NSRTs and thus MANSRTs are somewhat too low during snow-free periods and at locations without snow, as indicated by a MBE varying between -0.2 and -1.3 °C in the rock walls. Model verification suggests an MAE of 1.6/1.7 °C in the entire shaded and sunny rock walls, respectively. Remaining errors in snow depth and consequently rock temperature simulations are explained by inadequate snow settlement modelling due to linear precipitation scaling, missing lateral heat fluxes in the rock, and by errors in the shortwave radiation, air temperature and wind interpolations, which are complex in such terrain. The influence of the snow cover on rock surface temperatures was investigated by comparing a snow-covered model scenario (precipitation input provided by precipitation scaling) with a snow-free (zero precipitation input) one. A strong increase in MANSRTs in both the shaded and sun-exposed steep rock walls induced by a thick, long-lasting snow cover was both measured and modelled. MANSRTs were 2.5/1.8 °C higher in the shaded and 2.3/1.3 °C higher in the sun-exposed rock walls when comparing the modelled snow-covered scenario to the snow-free one. As snow reduces ground heat loss in winter, it has an overall warming effect on both N and S facing rock walls, despite the fact that it provides protection from solar radiation in early summer. The model performance was tested at different scales ranging from 0.2 m to 5 m. A DEM resolution of 1 m was found to be detailed enough to detect the strongly variable micro-topography in steep, rugged rock walls, and hence a grid resolution of 1 m is adequate to accurately model the snow cover and rock surface temperatures. Coarser resolutions are not appropriate at the Gemsstock site. The correction of the winter precipitation input using a precipitation scaling method based on TLS improved snow cover and thus also rock temperature simulations in the complex rock walls. The results of this study help to quantify the potential errors in ground temperature modelling when neglecting the evolution of a snow cover in steep rock exceeding 50°, as has often been done for idealized rock walls. 7 Outlook The observations and model results discussed here are from an individual site with specific characteristics. In future studies, additional rock faces with diverse characteristics and climates should be investigated to assess the general validity of our results. The precipitation scaling method presented is currently only valid at the site scale, but could potentially also rely on satellite imagery or airborne laser scan data to enable snow depth scaling for larger areas. Correcting for different snow settlement rates due to different snow depths will be a feasible improvement for snow depth simulations. Further improvements can be expected by considering wind fields in steep terrain and lateral heat fluxes within the Alpine3D model. While the generation of wind fields over steep slopes is an unsolved and challenging issue, the implementation of 3D advective heat fluxes in steep ridges, influencing both the rock surface and ground temperatures at depth, can be addressed by coupling the modelled surface energy balance to a ground model representing 3D heat flow in the rock. This will likely allow a more accurate evolution of ground temperatures to be modelled, especially when considering only thin snow and the potential disposition for slope instability.
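The verification metrics quoted above are simple to reproduce; here is a minimal sketch of MBE and MAE for paired modelled and measured NSRT values, with hypothetical numbers standing in for the logger and Alpine3D series.

```python
import numpy as np

def mbe(modelled, measured):
    """Mean bias error (deg C); negative means the model is too cold on average."""
    return float(np.mean(np.asarray(modelled) - np.asarray(measured)))

def mae(modelled, measured):
    """Mean absolute error (deg C): average magnitude of the model-measurement mismatch."""
    return float(np.mean(np.abs(np.asarray(modelled) - np.asarray(measured))))

# Hypothetical paired daily mean NSRTs at one logger position (deg C).
measured = np.array([-3.1, -1.0, 0.2, 4.5, 9.8, 12.0, 6.3, -0.5])
modelled = np.array([-4.0, -2.2, -0.3, 4.1, 9.0, 11.5, 5.8, -1.6])

print(f"MBE = {mbe(modelled, measured):+.2f} deg C")
print(f"MAE = {mae(modelled, measured):.2f} deg C")
```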
However, the need for modelled lateral heat fluxes is questionable when the model accuracy has a MAE of 1.6/1.7 °C (Table 3), and the significantly higher computational costs must also be taken into account. Although ground temperature modelling over larger areas, such as the entire Alps, is not feasible at such high resolutions, our site-specific approach has demonstrated the potential to reveal temperature variations for different snow cover conditions and to discuss the limitations of permafrost models running at coarse scales. Climate change impact studies critically depend on the small-scale variability at the atmosphere-surface interface. This physics-based approach can be used to study the long-term effect of a changing climate on rock temperatures and permafrost distribution. Data availability The data are available on request from the WSL Institute for Snow and Avalanche Research SLF.
2018-10-16T16:59:27.637Z
2016-04-18T00:00:00.000
{ "year": 2016, "sha1": "2fe42b1089c0b9e175cdf17fa1c73ab414bbd2e6", "oa_license": "CCBY", "oa_url": "https://www.the-cryosphere.net/11/585/2017/tc-11-585-2017.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "0a39636696d4845805f8eb947c820a1cfe2c7296", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Geology" ] }
14986121
pes2o/s2orc
v3-fos-license
The Magnetic Field in Galaxies, Galaxy Clusters, and the InterGalactic Space Magnetic fields of debated origin appear to permeate the Universe on all large scales. There is mounting evidence that supernovae produce not only roughly spherical ejecta and winds, but also highly relativistic jets of ordinary matter. These jets, which travel long distances, slow down by accelerating the matter encountered on their path to cosmic-ray energies. We show that, if the turbulent motions induced by the winds and the cosmic rays generate magnetic fields in rough energy equipartition, the predicted magnetic-field strengths coincide with the ones observed not only in galaxies (5 $\mu$G in the Milky Way) but also in galaxy clusters (6 $\mu$G in Coma). The prediction for the intergalactic (or inter-cluster) field is 50 nG. Introduction The average magnetic field (MF) of the interstellar medium (ISM) of our Galaxy (B_MW ∼ 5 µG) corresponds to an energy density of ∼ 0.5 eV cm^−3, in good agreement with the energy density of cosmic rays (CRs). This provides a strong hint of a common origin and of energy equipartition (see, e.g. Longair 1992), though other theories of the origin of galactic fields, e.g. by dynamo amplification of primordial seed fields, have been proposed (Parker 1992). The origin of intergalactic MFs, both within and without galaxy clusters, is also undecided. Here we discuss how all of these MFs could have a common origin, and be in equipartition with the corresponding local energy densities of CRs. Radio observations of clusters indicate that the intra-cluster medium (ICM) between the galaxies is permeated by intense MFs (e.g. Kronberg 2004). Nearby clusters are seen to have a "radio halo" with a distribution similar to that of the cluster gas, observed in X-rays. These halos are produced by synchrotron emission from CR electrons spiralling in the cluster's MF, while the X-rays are electron bremsstrahlung. Measurements of the Faraday rotation of linearly polarized radio emission traversing the cluster's medium, in combination with X-ray data, support the existence of cluster MFs of a few µG (Kim et al. 1989, 1990; Taylor & Perley 1993; Feretti et al. 1995; Deiss et al. 1997; Eilek 1999; Ensslin et al. 1999; Clarke, Kronberg & Bohringer 2001; Johnston-Hollitt, Hollitt & Ekers 2004). The mapping of the Faraday rotation reveals that the clusters' MFs are turbulent with a Kolmogoroff power spectrum on a variety of scales (Ensslin 2004; Vogt & Ensslin 2004). The MF between clusters and isolated galaxies in the inter-galactic medium (IGM) is not known. Speculations on its value range from nearly a µG to a pG. Low-level radio emission was detected from the IGM around Coma (Kim et al. 1989; Ensslin et al. 1999) and from the IGM in large-scale filaments of galaxies (Bagchi et al. 2002). The estimated field strengths are of the order of several hundred nG. Theories of the origin of MFs in the ICM and IGM include cosmic shocks (e.g. Kulsrud et al. 1997; Ryu, Kang & Biermann 1998), ionization fronts (Gnedin et al. 2000) and outflows from primeval galaxies, quasars and/or radio galaxies (Furlanetto & Loeb 2001; Gopal-Krishna & Wiita 2001). Kronberg et al. (2004) have estimated that "giant" extragalactic radio sources, powered by accretion onto massive black holes (M > 10^8 M_⊙), inject E_B ∼ 10^60−61 erg of magnetic energy into radio lobes, and have argued that the expansion and diffusion of these Mpc-scale lobes could have magnetized a large fraction of the IGM.
Assuming that in the accretion ∼ 1% of M is released in the form of magnetic energy, Kronberg (2004) estimated a mean B_IGM ∼ 40 nG at redshift z ∼ 2. This value evolves as (1 + z)^−2 by cosmic expansion, yielding a two-orders-of-magnitude smaller IGM energy density, and a one-order-of-magnitude smaller B_IGM at z = 0. Concerning the ICM, it was suggested that the jets formed by accretion onto massive black holes in clusters provide the heat source in the so-called "cooling flow" (CF) clusters (e.g. McNamara et al. 2000). But Kronberg et al. (2004) have also found that the radio lobes of the powerful radio galaxies at the centre of rich clusters contain a magnetic energy of only E_B ∼ 10^58−59 erg. Assuming equipartition between the kinetic energy output and the magnetic energy, the energy supply from such objects is insufficient to power the X-ray emission from bright CF clusters over a Hubble time (∼ 10^62 erg for bright CF clusters). Moreover, some CF clusters contain neither powerful radio galaxies nor active galactic nuclei. Contrariwise, Colafrancesco, Dar & De Rújula (2004) have shown that the required heat supply in CF clusters can be provided by the energy deposited in the ICM by jets and CRs emitted from the cluster galaxies, and that the equipartition of energy between the ionized gas, the MF and the CRs can explain the origin of MFs of several µG. In this letter we argue that the outflow of jets, CRs and winds from SN explosions in star formation regions, where most SNe take place, magnetizes the ISM in galaxies, the ICM in galaxy clusters and the IGM outside galaxy clusters. Our main assumption is that of a rough equipartition between the energy of the SN jets and winds and the energy of the accompanying CRs. The predicted strengths of the MF in the ISM and ICM are consistent with those observed. In large structures, the predicted magnetic energy density is roughly proportional to the luminosity density, with a mean MF of several µG in the ICM of rich galaxy clusters and ∼ 50 nG in the IGM. Galactic CRs, magnetic fields and supernova explosions As a result of the steep energy spectrum of galactic CRs, the bulk of the CR energy is carried by nuclei with an average energy of a few GeV. The most accurate measurements of their flux, dI/dE, near Earth and during solar minimum (minimum solar modulation) are those of AMS (Alcaraz et al. 2000a,b) and BESS (Haino et al. 2004). Their measurements yield a local CR energy density of roughly 0.5 eV cm^−3. If the energy densities of galactic CRs and MFs are in equipartition, then B_MW ∼ 5 µG, in good agreement with observations, as is well known (Longair 1992). There is evidence from gamma-ray bursts (e.g. Dar 2004a), from SN1987A (Nisenson & Papaliolios 1999), and from the morphology of young supernova (SN) remnants (e.g. Hwang et al. 2004) that in addition to quasi-spherical ejecta, SNe produce highly relativistic and narrowly collimated jets, which carry an average kinetic energy E_K[Jet] ∼ 2×10^51 erg, similar to the kinetic energy of the spherical ejecta. The jets slow down by collisions with the interstellar matter (ISM) along their path. The intercepted ISM is thereby accelerated to CR energies, carrying away almost entirely the original energy of the jets. This simple theory of CRs agrees very well with the observed CR spectra and CR composition at all energies (Dar 2004b; De Rújula 2004a,b). The fast winds also transport CRs and magnetic fields within galaxies and out of them.
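The equipartition estimates running through this argument reduce to B = sqrt(8π u_CR). A minimal numerical sketch follows; the ISM and Coma energy densities are the representative values quoted in the text, while the IGM value is back-computed here from the predicted ~50 nG field and is therefore only illustrative.

```python
import math

EV_PER_CM3_TO_ERG = 1.602e-12  # 1 eV cm^-3 expressed in erg cm^-3

def equipartition_field_gauss(u_cr_ev_cm3):
    """Magnetic field (gauss) whose energy density B^2 / (8 pi) equals
    the cosmic-ray energy density u_cr (given in eV cm^-3)."""
    u = u_cr_ev_cm3 * EV_PER_CM3_TO_ERG
    return math.sqrt(8.0 * math.pi * u)

cases = [
    ("ISM (Milky Way)", 0.5),     # ~0.5 eV cm^-3, as in the Introduction
    ("ICM (Coma)", 0.67),         # 0.67 +/- 0.33 eV cm^-3, quoted below
    ("IGM (illustrative)", 6e-5), # chosen to reproduce a ~50 nG field
]
for label, u in cases:
    b_microgauss = equipartition_field_gauss(u) * 1e6
    print(f"{label:20s} u_CR = {u:.2g} eV cm^-3  ->  B = {b_microgauss:.3g} uG")
```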
We assume equipartition in the sense that, of the total energy (∼ 2 E_K[Jet]) of winds and jets, 1/2 ends up in CRs. Can the bulk of the galactic CRs be accelerated by SNe in the stated way? If SNe produce the observed flux of galactic CRs, the galactic CR luminosity must satisfy the balance of Eq. (3), which involves the galactic SN rate; this SN rate is also consistent with the value measured in the local universe. By fitting the diffuse gamma-ray emission of the Galaxy, as measured by EGRET (Sreekumar et al. 1998), to the CR production rate of γ-rays in a "leaky box" model of the galactic CR halo, Strong et al. (2004) obtained V_CR[MW] ≈ 2.1 × 10^68 cm^3 (the volume of a cylinder with 30 kpc diameter and 10 kpc height). Using the observed CR spectrum and τ(E) ∼ 2 × 10^7 (E/GeV)^(−0.5±0.15) y (inferred from the relative abundance of unstable isotopes in CRs), we obtain for the mean injection rate of CR energy per unit volume in the Milky Way ρ̇_E[CR] ≈ 1.8 × 10^−19 erg cm^−3 y^−1, and consequently L_CR[MW] ∼ 1.2 × 10^42 erg s^−1. This number agrees with the RHS of Eq. (3). From the above we conclude that SN explosions seem to be the source of the bulk of the CR and MF energies in the ISM of ordinary galaxies. The magnetic field in the ICM of galaxy clusters Let R(z) be the rate of SN events in a galaxy such as ours, at redshift z, or look-back time t, for a standard cosmology with Ω = Ω_M + Ω_Λ, where Ω_M ≈ 0.27, Ω_Λ ≈ 0.73 and H_0 ≈ 65 km Mpc^−1 s^−1. The SN rate is proportional to the star formation rate R_SF(z), so that R(z) = R(0) R_SF(z)/R_SF(0). The observations (see, e.g. Lilly et al. 1995; Madau et al. 1996; Steidel et al. 1999; Schiminovich et al. 2004) are: R(z) ∼ R(0) (1 + z)^α with α ≈ 2.5 ± 0.5 for z ≤ 1, and R(z) ≈ R(1) ([1 + z]/2)^(0±0.5) for 1 ≤ z ≤ 5, a redshift beyond which the relative volume is small. If the star formation history in a galaxy cluster (GC) is not very different from that in the Milky Way or the rest of the universe, the cluster's SN rate is simply weighted by the ratio of luminosities: R_GC(z) ≈ (L_GC/L_MW) R(z). For reasonable MF coherence lengths, low-energy CRs do not diffuse out of a rich cluster during a Hubble time, and the total CR energy in their ICM is given by Eq. (4), if the cluster decouples from the Hubble expansion at a relatively early time. The factor 2 reflects the equality of energies of jetted and "spherical" ejecta; the factor 1/3 the energy equipartition between CRs, MFs, and the dense ICM plasma. For z_o = 0, the integral in Eq. (4) is ≈ 2.6 ± 1.3, implying the CR energy density given in Eq. (5). If the CRs from SN explosions magnetize the ICM in the same way that they magnetize the ISM, the MF in the ICM has the same energy density as the CRs: B²_ICM/(8π) ∼ ρ_E[CR]. This prediction is in good agreement with observations. For instance, the observed luminosity of Coma within a radius of 1 Mpc is L_Coma ≈ 3 × 10^12 L_⊙ (Fusco Femiano & Hughes 1994), implying a CR energy density ρ_E[CR] ∼ (0.67 ± 0.33) eV cm^−3, and B_Coma ≈ 5.1 ± 1.2 µG, in agreement with the observed ∼ 6 µG (e.g. Clarke et al. 2001). Intergalactic cosmic rays and magnetic fields Let R_SN(z = 0) ≈ 10^−4 Mpc^−3 y^−1 be the current local SN rate per comoving unit volume (the observed SN rate per unit luminosity within the Virgo circle, multiplied by ρ_L ≈ 1.2 × 10^8 L_⊙ Mpc^−3, the mean luminosity density in the local universe). In a steady state, the injection rate of CRs into the IGM by a galaxy is equal to its CR production rate.
Consequently, SN explosions in galaxies, at a cosmic time t(z), inject energy into the IGM at a rate ∼ 2 E_K[Jet] R_SN(z). Let dN_SN/dE be the CR spectrum produced by a single SN. Its energy dependence is that of (dI/dE)/τ(E), with dI/dE the observed CR spectrum, and τ(E) the CR residence time. In equipartition, its normalization is fixed by the CR energy injected per SN. The CR spectral density in the IGM at redshift z_o is given by Eq. (6), and the present CR energy density of the IGM implied by equipartition and Eq. (6) follows. If the galactic winds and CRs from SN explosions magnetize the IGM in the same way that they magnetize the ISM, then, under the assumption of equipartition, the magnetic field in the IGM has the same energy density as the CRs: B²_IGM/(8π) = ρ_E[CR][IGM]. Hence the average strength of magnetic fields in the IGM is predicted to be B_IGM ∼ 50 ± 12 nG. The estimated field in the outskirts of Coma (several hundred nG; see Kim et al. 1989; Ensslin et al. 1999) is intermediate between our expectations for the ICM and the IGM. Conclusions There is evidence that long-duration gamma ray bursts are produced by relativistic jets ejected in core-collapse SN explosions, as long advocated in the "Cannonball" (CB) model (Dar 2004a and references therein). The jets, along their long paths (much larger than a galaxy's size), transfer essentially all of their energy to the local medium, which is accelerated to CR energies (De Rújula 2004; Dar & De Rújula, in preparation). The generation of CRs and the subsequent MFs along the jet's path is fast: it occurs at nearly the speed of light. It is known from "first-principle" simulations that a relativistic plasma (in our case, the CRs) impinging on a medium at rest generates turbulence and MFs efficiently and very rapidly (Frederiksen et al. 2004). The transport of CRs and MFs by SN winds, even at a modest few thousand km s^−1, reaches, in a Hubble time, distances larger than the mean separation between galaxies. This justifies our implicit assumption that, in equipartition with CRs, sufficiently uniform MFs are generated in a time much shorter than Hubble's time. In equilibrium, the CRs escaping a galaxy (or being generated by jets beyond the galaxy's confines) have the same spectrum: the CR "source spectrum", the observed spectrum deprived of the galactic confinement-time effect, as in Eq. (3). We have assumed equipartition between CR and MF energies in the low-density ISM and IGM, and between the energies of CRs, MFs and the dense plasma of the hot central regions of the ICM. On this basis, we obtained B_MW ∼ 5 µG for the mean magnetic field in the Galaxy, and a very similar value for B_Coma, the mean magnetic field in the ICM within 1 Mpc from the centre of the Coma cluster (or similarly rich clusters), in agreement with the observations. The prediction for the IGM is B_IGM ∼ 50 nG. The observations in the outskirts of Coma are between our predictions for the ICM and the IGM, but not enough is known about the propagation of CRs in the IGM to claim that this is a success. Our theory of large-scale MFs also explains in a very simple fashion the properties of CRs. It individuates the heat source in "cooling flow" clusters and predicts the temperature profile of the intra-cluster gas (Colafrancesco et al. 2004). It predicts an extragalactic γ-ray background radiation with a spectral index ∼ −2.1, dominated by inverse Compton scattering (ICS) of the microwave background radiation by CR electrons in the galactic halo and in the IGM.
A similar radiation from the halo of Andromeda may be observable by GLAST (Dar & De Rújula 2001). The theory entails a γ-ray emission from the ICM of clusters of galaxies, due to ICS of CR electrons and to π^0 production and decay (with spectral indices ∼ −2.1 and ∼ −2.2, respectively) in the collisions of CR nuclei with the ICM (Dar & De Rújula 2001). These radiations are also at a level detectable by GLAST. Although other accelerators (such as flaring stars, stellar winds, SN remnants, pulsars, microquasars, and massive black holes in active galactic nuclei) contribute to the production of non-solar CRs, supernova explosions seem to be the dominant source of CRs, as speculated long ago (Baade & Zwicky 1934). Not only can the SN outflows accelerate the bulk of the high-energy CRs, but they can also magnetize the entire universe at the observed level.
2014-10-01T00:00:00.000Z
2005-04-21T00:00:00.000
{ "year": 2005, "sha1": "ebb01ec510fcbc6fb171788e2f820974d89966fa", "oa_license": null, "oa_url": "http://cds.cern.ch/record/847414/files/PhysRevD.72.123002.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "ebb01ec510fcbc6fb171788e2f820974d89966fa", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
219730001
pes2o/s2orc
v3-fos-license
Circadian disruption with constant light exposure exacerbates atherosclerosis in male ApolipoproteinE-deficient mice Disruption of the circadian system caused by disordered exposure to light is pervasive in modern society and increases the risk of cardiovascular disease. The mechanisms by which this happens are largely unknown. ApolipoproteinE-deficient (ApoE−/−) mice are studied commonly to elucidate mechanisms of atherosclerosis. In this study, we determined the effects of light-induced circadian disruption on atherosclerosis in ApoE−/− mice. We first characterized circadian rhythms of behavior, light responsiveness, and molecular timekeeping in tissues from ApoE−/− mice that were indistinguishable from rhythms in ApoE+/+ mice. These data showed that ApoE−/− mice had no inherent circadian disruption and therefore were an appropriate model for our study. We next induced severe disruption of circadian rhythms by exposing ApoE−/− mice to constant light for 12 weeks. Constant light exposure exacerbated atherosclerosis in male, but not female, ApoE−/− mice. Male ApoE−/− mice exposed to constant light had increased serum cholesterol concentrations due to increased VLDL/LDL fractions. Taken together, these data suggest that ApoE−/− mice are an appropriate model for studying light-induced circadian disruption and that exacerbated dyslipidemia may mediate atherosclerotic lesion formation caused by constant light exposure. (p = 0.68), and white adipose tissue (p = 0.66) did not differ significantly between 8-week old ApoE +/+ mice and ApoE −/− mice at 8 and 20 weeks old (one-way ANOVA; Fig. 3a, Table S2). Likewise, we found that the phases of the SCN (p = 0.54), liver (p = 0.87), pituitary (p = 0.36), lung (p = 0.15), kidney (p = 0.59), aorta (p = 0.62), spleen (p = 0.41), and white adipose tissue (p = 0.73) were not significantly different between ApoE +/+ and ApoE −/− mice at 8 and 20 weeks old (one-way ANOVA, Fig. 3b, Table S2). The amplitudes of the tissue PER2::LUC rhythms were also similar between 8-week old ApoE +/+ mice and ApoE −/− mice at 8 and 20 weeks old (Table S2). These data demonstrate that the molecular timekeeping mechanism is not altered by APOE deficiency. exposure to constant light exacerbates atherosclerosis in male ApoE −/− mice. We next investigated the effects of chronic exposure to constant light on atherosclerosis (Fig. 4, Table S3). Similar to previous studies in wild-type mice, locomotor activity was arrhythmic or the rhythm was disrupted in ApoE −/− mice housed in constant light (Fig. 4b, actograms of all individual mice shown in Fig. S10, S11, S12, S13; Fig. S14) 35 . Male (Fig. 4c, t-test p = 0.001), but not female (Fig. 4e, Mann-Whitney p = 0.08) ApoE −/− mice housed in constant light had more atherosclerosis in the en face aorta compared to those in control 12 L:12D (Table S3). Likewise, atherosclerotic lesion area in the aortic root was increased in male (Fig. 4d, Mann-Whitney p = 0.04), but not in female (Fig. 4f, t-test p = 0.20), ApoE −/− mice exposed to constant light (Table S3). Representative single-plotted actograms of wheel-running activity of ApoE +/+ (a) and ApoE −/− (b) mice housed in 12 L:12D for 7 days (LD) and then released into constant darkness for 7 days (DD). Yellow shading shows lights on. Mean activity profiles (c) were generated from 7 days in 12 L:12D. The amplitudes in LD (d) were the peak Qp's of the χ2 periodograms for 7 days in 12 L:12D. 
The phase angles of entrainment (e) were determined by drawing a regression line to activity onset for days 1-5 in constant darkness and then extending the line to the last day in 12 L:12D. A positive phase angle occurred when activity started after the time of lights off. The free-running period (f) was determined using a χ² periodogram for days 1-7 in constant darkness. There were no significant differences between ApoE +/+ and ApoE −/− mice. Data are mean ± SEM. Constant light exposure exacerbates dyslipidemia, but not inflammation, in male ApoE −/− mice. We next examined the potential mechanisms by which exposure to constant light could increase atherosclerosis in male ApoE −/− mice. Male ApoE −/− mice had similar body weights (t-test p = 0.13) and calorie consumption (t-test p = 0.52) in 12 L:12D and constant light, although the mice were less active in constant light (Mann-Whitney p = 0.008) (Table S4). We next measured macrophages in the vascular wall of aortic roots as a marker of inflammation (Fig. 6). The percent macrophage content of the lesions in aortic roots was not altered by constant light exposure in male ApoE −/− mice compared to mice in 12 L:12D (Fig. 6c, t-test p = 0.62, Table S3). Discussion ApoE −/− mice are a well-established rodent model for studying atherosclerosis 36. Beginning at ~12 weeks of age, male and female ApoE −/− mice spontaneously develop atherosclerotic lesions in the aorta, even when fed a low-fat (non-Western) diet 34. It was critical to our experimental design to use a mouse model that develops atherosclerosis on a low-fat diet. This is because diet-induced obesity increases atherosclerosis in ApoE −/− mice and high-fat diet feeding disrupts daily rhythms in male wild-type mice 37,38. Therefore, in this study, we sought to exclude the potential confounding effects of high-fat diet feeding on atherosclerosis and circadian rhythms in order to isolate the effects of constant light-induced circadian disruption. To this end, we fed ApoE −/− mice a low-fat diet for the duration of the study. This protocol prevented hyperphagia and obesity in male ApoE −/− mice. Thus, the exacerbation of atherosclerosis in male ApoE −/− mice housed in constant light occurred independently of systemic metabolic dysfunction caused by obesity. The first goal of this study was to determine if ApoE −/− mice are an appropriate model for studying the effects of light-induced circadian disruption on atherosclerosis. An ideal model should have no inherent circadian rhythm abnormalities. One previous study found that ApoE −/− mice had unstable activity rhythms in constant darkness and impaired circadian responsiveness to light 39. Therefore, we first comprehensively characterized circadian rhythms of behavior and light responsiveness, and molecular circadian rhythms in tissues, in ApoE −/− mice. We found that ApoE −/− mice were indistinguishable from ApoE +/+ mice in every parameter measured, with the exception of slight but significant reductions in total activity and amplitude in ApoE −/− mice in constant darkness. Endogenous (free-running) and entrained rhythms of wheel-running activity, as well as responsiveness to light pulses at different times of day, were the same in ApoE −/− and ApoE +/+ mice.
Moreover, the molecular timekeeping rhythms in the SCN and peripheral tissues, as well as the phase relationship between these body clocks, were the same in ApoE −/− and ApoE +/+ mice. We postulate that our results differ from the previous study because the ApoE −/− mice used in the prior study may have been on a mixed genetic background, while the ApoE −/− mice in our study were backcrossed 10 times into a C57BL/6 J background. Genetic background markedly affects circadian activity rhythms in mice, and the C57BL/6 J strain has stable, high-amplitude behavior rhythms 40. In sum, we found that ApoE −/− mice have no circadian abnormalities and thus are an ideal model for studying the effects of light-induced circadian disruption on atherosclerosis. The second goal of this study was to investigate the effects of constant light exposure, which chronically disrupts circadian rhythms, on atherosclerosis. Most previous studies investigated the effects of circadian disruption on atherosclerosis using circadian gene mutant mice [15][16][17][18][19]. To our knowledge, only two studies have examined the effects of light-induced circadian disruption on atherosclerosis in mice. One study inferred that a weekly inversion of the light-dark cycle accelerated atherosclerosis development in ApoE −/− mice, but lacked quantitative data 41. A second study recently showed that weekly inversion of the light-dark cycle increased severe atherosclerotic lesions in female APOE*3-Leiden.CETP mice due to increased macrophage content and inflammation, oxidative stress, and chemoattraction markers in the vascular walls 42. However, the data from the present study demonstrate that the increase in atherosclerosis in male ApoE −/− mice exposed to constant light was driven by increased VLDL/LDL cholesterol rather than increased macrophage content. Additionally, the mice in the previous study were fed a Western diet (high fat and high cholesterol), which independently causes metabolic dysfunction and obesity, and disrupts circadian rhythms 27,[43][44][45][46][47][48]. Therefore, our study is unique in that we found that disordered light exposure exacerbates atherosclerosis even when mice are fed a low-fat diet. An additional strength of our study is that we studied both male and female ApoE −/− mice, while only females could be studied in the previous study because male APOE*3-Leiden.CETP mice do not develop atherosclerosis on a cholesterol-rich diet. In mice, constant light exposure increases the period of activity rhythms and, chronically, can cause arrhythmicity 49. At the tissue level, constant light desynchronizes cellular oscillators in the SCN, causing the overall rhythm of the SCN to be low-amplitude or absent 35. Since the SCN is the master circadian clock that coordinates physiological and behavioral rhythms throughout the body, the effect of constant light exposure on the SCN results in whole-body disruption of circadian rhythms 35,50. Previous studies showed that constant light exposure increased body weight, disrupted insulin sensitivity, and decreased triglyceride-derived fatty acid and glucose uptake by brown adipose tissue [51][52][53]. However, no study has examined the effects of constant light exposure on atherosclerosis.
In the current study, we found that exposure to constant light increased cholesterol on atherogenic lipoproteins and atherosclerotic lesion area in male ApoE −/− mice. According to the lipid hypothesis, chronically elevated levels of cholesterol in the blood cause atherosclerosis [54][55][56]. In female ApoE −/− mice, constant light exposure did not increase total or VLDL/LDL-cholesterol concentrations, nor atherosclerotic lesion area. The mechanisms underlying the sex difference in the response of lipids to light-induced circadian disruption are unknown but could be due to differences in circulating sex hormones. In sum, we found that ApoE −/− mice had circadian rhythms that were indistinguishable from those of ApoE +/+ mice. In addition, circadian disruption with constant light exposure increased atherosclerosis in male, but not female, ApoE −/− mice. Together these data establish male ApoE −/− mice as an appropriate model for studying the effects of light-induced disruption of circadian rhythms on atherosclerosis. Methods Animals. C57BL/6 J ApoE −/− (N10) mice were purchased from The Jackson Laboratory (stock # 002052) and bred with wild-type C57BL/6 J mice (from The Jackson Laboratory) to generate C57BL/6 J heterozygous ApoE +/− mice. Heterozygous C57BL/6 J ApoE +/− males and females were bred to generate ApoE +/+ and ApoE −/− mice (N11-N12) for experiments. For bioluminescence experiments, ApoE +/− mice that were heterozygous for PERIOD2::LUCIFERASE 33 (originally obtained from Dr. Joseph Takahashi and then backcrossed for 25 generations with C57BL/6 J mice from The Jackson Laboratory) were crossed with ApoE +/− mice to generate ApoE +/+ and ApoE −/− mice that were heterozygous for PER2::LUC for experiments. Breeders and weanlings were housed in 12 L:12D and fed standard rodent laboratory diet (Teklad 2918) and water ad libitum. At 3 weeks old, offspring were weaned and group-housed with same-sex siblings. Genotyping for ApoE −/− was performed according to the protocol on The Jackson Laboratory website. Genotyping for PER2::LUC was determined by measuring bioluminescence from tail snips. For all experiments, mice were singly housed in cages (33 cm × 17 cm × 14 cm) with running wheels (diameter: 11 cm) and fed a low-fat diet (Research Diets D12450K, 10% kcal fat) and water ad libitum. The running wheels were either unlocked (could rotate) or locked (could not rotate), as indicated for each specific experiment below. All procedures were conducted in accordance with animal protocol 2015-2211 approved by the University of Kentucky Institutional Animal Care and Use Committee. Characterization of circadian behavior. At 7 to 8 weeks old, male and female ApoE −/− and ApoE +/+ mice were housed singly in cages with unlocked running wheels. The cages were placed in light-tight boxes in 12 L:12D with white LED lights (intensity 250 to 350 lux). Wheel revolutions were recorded every minute using the ClockLab system (Actimetrics, Inc, Wilmette, IL). Mice were housed in 12 L:12D for 7 days and then in constant darkness for 7 days. Data were analyzed with ClockLab analysis software (Actimetrics). Mean activity profiles of wheel-running activity (5-min bins) were compiled for 7 days in 12 L:12D. The amplitude (Q_p) of the wheel-running rhythm was the peak value of the χ² periodogram for 7 days in 12 L:12D, 7 days in constant darkness, or 7 days in constant light.
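As a rough illustration of the activity-profile computation described in this methods paragraph, the sketch below folds a minute-binned wheel-count series over 7 days into 5-min bins and averages across days. The input is synthetic and the function is a hypothetical stand-in; this is not the ClockLab implementation.

```python
import numpy as np

def mean_activity_profile(counts_per_min, bin_minutes=5):
    """Average daily activity profile from a 1-min binned count series.

    counts_per_min covers a whole number of days (1440 values per day);
    returns mean counts per bin across days (288 bins for 5-min binning).
    """
    counts = np.asarray(counts_per_min, dtype=float)
    mins_per_day = 1440
    n_days = counts.size // mins_per_day
    daily = counts[: n_days * mins_per_day].reshape(n_days, mins_per_day)
    binned = daily.reshape(n_days, mins_per_day // bin_minutes, bin_minutes).sum(axis=2)
    return binned.mean(axis=0)

# Synthetic 7-day record of a nocturnal mouse: low counts in the 12 h light
# phase, high counts in the 12 h dark phase, plus day-to-day Poisson noise.
rng = np.random.default_rng(1)
one_day = np.concatenate([rng.poisson(1, 720), rng.poisson(20, 720)])
counts = np.tile(one_day, 7) + rng.poisson(1, 7 * 1440)

profile = mean_activity_profile(counts)
print(profile.shape)              # (288,) five-minute bins
print(profile[:3], profile[150:153])
```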
Mean daily activity was determined for 7 days in 12 L:12D or 7 days in constant darkness or 7 days in constant light. The phase angle of entrainment was determined by fitting a linear regression line to 5 days in constant darkness, and then extending it back to the last day in 12 L:12D. The free-running period of wheel-running activity in constant darkness was determined by χ 2 periodogram with alpha set to 0.001 for days 1-7 in constant darkness. After 18 days in constant darkness with weekly light pulses (see below), we returned the mice to 12 L:12D for 20 days and then released them into constant light for 21 days. The free-running period of wheel-running activity in constant light was determined by χ 2 periodogram for days 1-7 in constant light. Wheel-running activity data are shown in actograms in 6-min bins in the normalized format (ClockLab). Circadian phase responses to light pulses. Male and female ApoE +/+ and ApoE −/− mice (12-18 weeks old at time of light pulse) were single housed in cages with ad libitum access to unlocked running wheels. The cages were housed in light-tight boxes in 12 L:12D with white LED lights (intensity 250 to 350 lux). Mice were then released into constant darkness for 7 days. The onset of activity on the day of the light pulse, which was designated as CT12, was determined by linear regression using ClockLab Analysis. A single light pulse (15 min, 150 lux white LED) was administered at CT8-10 (subjective day), or CT14-16 (early subjective night), or CT21-23 (late subjective night). The mice then free-ran for 7 days after the light pulse. The magnitudes of the phase shifts were determined by measuring the time between a line fit to the onset of activity the 7 days before the light pulse and a line fit to the onset of activity the 7 days after the light pulse using ClockLab Analysis software. Some mice were administered more than 1 light pulse, every 3 weeks, and the cages were changed in the week between pulses. Each mouse received no more than 3 light pulses and an individual mouse never received a light pulse at the same CT. Bioluminescence tissue rhythms. Male and female ApoE −/− and ApoE +/+ mice (7 weeks old) that were heterozygous for PER2::LUC were housed singly in cages with locked wheels (wheels were present but could not rotate) in 12 L:12D. At 8 or 20 weeks old, mice were euthanized by cervical dislocation without anesthesia and aorta, kidney, lung, liver, pituitary, spleen, SCN, and white adipose tissue explants were dissected and cultured as described previously 57 . Bioluminescence was measured every 10 min with the 32-channel LumiCycle apparatus (Actimetrics Inc.). Data were smoothed by 30-min adjacent averaging and detrended using LumiCycle Analysis software (Actimetrics Inc.). The amplitude (goodness of fit ≥90%) was determined from the cycle that occurred between 12-36 hours in culture with LumiCycle Analysis software (Actimetrics). The data were exported to ClockLab analysis software to analyze period and phase. The period was determined from a regression line fit to the acrophase of 3-5 cycles. The phase was the acrophase of the peak of bioluminescence that occurred between 12-36 hours in culture. Effects of constant light exposure on atherosclerosis, lipids, and inflammation. At 7 weeks old, male and female ApoE −/− mice were single-housed in cages with locked running wheels in light-tight boxes in 12 L:12D. 
General locomotor activity was continuously recorded with passive infrared sensors and collected in one-minute intervals using the ClockLab acquisition system (Actimetrics Inc.). At 8 weeks old, mice were randomized to either remain in 12 L:12D or to be housed in constant light for 12 weeks. Body and food weights were measured weekly between ZT9-ZT12, where ZT0 is lights on and ZT12 is lights off, in 12 L:12D, or the corresponding local time for mice in constant light. χ² periodograms were used to determine whether general locomotor activity was rhythmic in the final 28 days in constant light using ClockLab Analysis software. Activity was rhythmic if a dominant peak exceeded alpha set at p = 0.001 (period range 20-36 h). At 20 weeks of age, mice were anesthetized with inhaled isoflurane between ZT6-9 (or the corresponding local time for mice in constant light) until unresponsive to toe pinches, then euthanized by cervical dislocation. Blood was collected by cardiac puncture and serum was separated by centrifugation and stored at −80 °C. The aortas were perfused with NaCl (0.9% wt/vol) via left ventricular puncture. Aortas were dissected from the root to the iliac bifurcation as described previously 58. The heart was removed from the aorta and stored at −80 °C. Mice with gross organ abnormalities were excluded from the study (2 mice from the female constant light group were excluded for enlarged kidneys). The aortas were stored in 10% neutral-buffered formalin for 24 hours at room temperature and then transferred to 0.9% NaCl solution and stored at room temperature. To measure atherosclerotic lesion area in en face aortas, peri-aortic adipose tissue was removed and the aorta was cut longitudinally and photographed using Image-Pro 7.0 software. The aortic arch was defined as the ascending arch to 3 mm distal to the root of the left subclavian artery. Atherosclerotic lesion area of the aortic arch was measured using Image-Pro 7.0 software by one researcher not blinded to the experimental treatments and analyzed by a second researcher who was blinded to the experimental groups. Atherosclerotic lesion area in the aortic root was measured as described previously 58. Briefly, 9 serial tissue sections (10 µm) from the origin of the aortic valves to the ascending aorta were stained with oil red O. Serial sections were distributed among eight consecutive slides, resulting in ~80 µm intervals for each slide. Lesions were quantified from the internal elastic lamina to the luminal edge using Image-Pro 7.0 software by 2 researchers as described above. Macrophage quantification. Immunohistochemistry for macrophages was performed on frozen serial sections (adjacent to those stained with Oil Red O) of the ascending aorta using the MicroProbe system as described previously, using rat anti-mouse CD68 (Bio-Rad, Cat# MCA1957), Rat IgG2b (BD PharMingen, Cat# 559478), and biotinylated rabbit anti-rat IgG (Vector, Cat# BA-4001) antibodies 59. Immunoreactivity was visualized with the red chromogen 3-amino-9-ethylcarbazole (AEC) (Vector). Macrophage content was quantified in the first serial section of the aortic root from the internal elastic lamina to the luminal edge using Image-Pro 7.0 software by 2 researchers as described above. The area of CD68 staining was expressed as a percentage of atherosclerotic lesion area. Statistical analyses.
A priori power analyses were performed for circadian behavior parameters and en face atherosclerotic lesion area using alpha = 0.05, power = 0.80, and an effect size of 1 to 1.5 using G*Power (Heinrich Heine Universität Düsseldorf) (Table S5). Circadian behavior parameters (amplitude, daily activity, phase angle of entrainment, period) and phase shifts in response to light pulses at each CT were compared between ApoE +/+ and ApoE −/− mice using two-tailed Student's t-tests, unless the data were not normally distributed or had unequal variance, in which case the Mann-Whitney test was used. One-way ANOVAs were used to compare the periods, phases, and amplitudes of the bioluminescence rhythms of each tissue among groups. Atherosclerotic lesion area in the en face aorta and aortic roots, total serum cholesterol and triglyceride concentrations, cumulative food intake, cumulative locomotor activity, and the percentage of lesions with macrophages in ApoE −/− mice of the same sex were compared between 12 L:12D and constant light conditions using two-tailed Student's t-tests, unless the data were not normally distributed or had unequal variance, in which case the Mann-Whitney test was used. Statistical tests were performed with OriginPro 2016 (Northampton, MA). Data are presented as the mean ± SEM. Significance was ascribed at p < 0.05. Data availability All data generated or analyzed during this study are included in this published article and its Supplementary Information files.
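A minimal sketch of the two-group decision rule described above (Student's t-test unless the normality or equal-variance assumptions fail, in which case Mann-Whitney), written with SciPy; the example data and the 0.05 threshold used to screen the assumptions are illustrative choices, not the study's actual analysis code.

```python
import numpy as np
from scipy import stats

def compare_two_groups(a, b, assumption_alpha=0.05):
    """Two-tailed comparison of two independent groups.

    Uses Student's t-test when both groups pass Shapiro-Wilk normality
    checks and Levene's test for equal variances; otherwise falls back
    to the Mann-Whitney U test.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    normal = (stats.shapiro(a).pvalue > assumption_alpha
              and stats.shapiro(b).pvalue > assumption_alpha)
    equal_var = stats.levene(a, b).pvalue > assumption_alpha
    if normal and equal_var:
        return "Student t-test", stats.ttest_ind(a, b).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(a, b, alternative="two-sided").pvalue

# Hypothetical en face lesion areas (% of aorta) in the two lighting conditions.
lesions_ld = [4.2, 5.1, 3.8, 4.9, 5.5, 4.4]   # 12 L:12D
lesions_ll = [6.0, 7.2, 5.9, 6.8, 7.5, 6.3]   # constant light
print(compare_two_groups(lesions_ld, lesions_ll))
```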
2020-06-18T14:36:17.783Z
2020-06-18T00:00:00.000
{ "year": 2020, "sha1": "65e66f0b3bd40bc1166d95b7ae21fe755ca2c6a5", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-020-66834-9.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "65e66f0b3bd40bc1166d95b7ae21fe755ca2c6a5", "s2fieldsofstudy": [ "Medicine", "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
216523938
pes2o/s2orc
v3-fos-license
THE INTERNATIONAL JOURNAL OF SCIENCE & TECHNOLEDGE Social Demographic Factors and Management of Diabetes Mellitus Type II: An Empirical Investigation from Garissa County, Kenya. Studies in East Africa, West Africa, and South Africa by Miller (2013) as well as Joshi & Aravind (2017) have reported a higher rate of DM2 among urban dwellers, with an increasing trend in prevalence rates. However, no particular study has conclusively examined health education intervention and behavioural change in the control and management of DM2 in Kenya. This study therefore aimed to fill this existing research gap by determining the effect of social demographic factors on the management of DM2 at Garissa County Hospital. The study sample comprised 138 adults enrolled at the diabetes clinics at Garissa County Hospital. Structured questionnaires were used for data collection. Data analysis was conducted using IBM-SPSS version 21. The results of this study provide vital information for formulating strategies and policies for comprehensive and sustainable diabetes management by the County and the Ministry of Health. The findings of this study revealed an insignificant change in the levels of BMI and total cholesterol between baseline and midterm. The insignificant changes in the levels of BMI and total cholesterol may be attributed to the improvement of living standards, changes in dietary structure, and reduction of physical activity due to low levels of knowledge and awareness of effective diabetes management. The findings are in line with a study by Trento et al., which established that BMI decreased over 6 months among study participants (−1.4, −2.0 to −0.7), although no statistically significant difference was observed. Furthermore, a difficulty in the reduction of BMI was mentioned in the study by Scain et al., which also showed no differences when compared with normal care, although the BMI did decrease significantly when compared with the control group. An effective model of diabetes care, such as group education sessions, which improve clinical outcomes, patients' quality of life, and clinicians' satisfaction while optimizing the use of the typically limited resources of busy clinics, should also be considered. People aged 45-59 years are 8.5 times more likely to develop diabetes than those aged 15-29 years, and those above the age of 60 are 12.5 times more likely to develop diabetes, based on the present prevalence rates in sub-Saharan Africa (Ruchugo, 2015). An estimated 8.8% of adults aged 20-79 have diabetes, and 79.0% of diabetes-related deaths in Africa occur in people under the age of 60 (IDF, 2015). The age of onset of DM2 in Kenya is roughly between 45 and 55 years, compared with 64 years in most developed countries (Maina et al., 2010). Kinship was reported as a DM2 risk factor among rural patients in Kenya (Chege, 2010). There are about 215.2 million men and 199.5 million women with diabetes globally; thus there are 15.6 million more men than women with diabetes. This difference is expected to decrease to about 15.1 million more men than women by 2040 (IDF, 2015). In a study conducted in Tanzania, the proportion of participants with DM2 was higher in males than in females (Miller, 2013). A study among an underprivileged community in New Delhi established that diabetes prevalence was higher in females than in males (Misra et al., 2001). The magnitude of DM2 was higher in males than in females in Ethiopia (Helamo et al., 2017).
Materials and Methods This study utilized a quasi-experimental design combining qualitative and quantitative techniques of data analysis. The methodology was suitable because the purpose of the investigator was to examine the degree of relationship between variables at a specific time point (Best & Kahn, 2006). This study was conducted in the Garissa Central Sub-County Hospital. Garissa County, in which the hospital is located, is one of the three counties in the northeastern region of Kenya (Kenya National Bureau of Statistics, 2016). The target population was urban and peri-urban residents in Garissa County. The eligibility criteria for this study included patients diagnosed with DM2; patients living in the study county for the last 5 years; patients aged 18 years and above who were not suffering from NCDs such as hypertension or other serious diseases (heart, stroke, kidney or mental disease); patients who had not developed complications; patients who were not using insulin; and patients who consented and were willing to participate. This study excluded patients not diagnosed with DM2, or diagnosed with DM2 but not living in the study counties; those aged below 18 years; those suffering from NCDs or other serious diseases (heart, stroke, kidney or mental disease); those who had complications or were using insulin; and those who failed to consent and were not willing to participate. This study used purposive sampling to select Garissa Central Sub-County Hospital due to its location and existing urban and peri-urban population. DM2 patients in the hospital who fit the inclusion and exclusion criteria formed the sampling frame. DM2 patients were voluntarily recruited into the treatment and control arms. The study was conducted in Garissa Level 5 Hospital along with two other hospitals. The study sample size was 138 DM2 patients, that is, 69 patients in the treatment arm and 69 patients in the control arm. The sample size was determined by the formula for comparing two proportions (Carayannis et al., 2011) for non-inferiority clinical trials using Sakpal's formula (Sakpal, 2010). The research instrument used in the study was a structured questionnaire. The questionnaire consisted of structured questions to collect data on diabetes knowledge (Menino et al., 2017). Figure 2 presents data on the highest education levels attained by the respondents: 39% of the respondents indicated they had no education, 31% had primary education, 17% had diplomas, while 3% and 2% had secondary education and a Master's degree, respectively. None of the respondents had PhD qualifications. The findings imply that a large proportion of the respondents lacked the level of formal education that would support easy understanding of diabetes information. Figure 3 presents data on the respondents' marital status: 52% of the respondents were married, 25% were divorced/separated/widowed, while 23% had never married. Figure 5: Respondents' Height in Meters. Source: Survey Data (2019). The researcher recorded the heights of the respondents, which were used in the calculation of BMI: 78% of the participants recorded a height of 1.51-2.00 meters, 20% recorded heights of 1.01-1.50 meters, while only 2% recorded heights ranging between 0.51-1.00 meters.
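For context, a standard sample-size calculation for comparing two proportions, in the spirit of the Sakpal (2010) approach cited above, can be sketched as follows; the proportions, power and significance level are illustrative placeholders, not the values the authors actually used to arrive at 69 patients per arm.

```python
import math
from scipy.stats import norm

def n_per_group_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per group to detect a difference between
    two proportions with a two-sided test at the given alpha and power."""
    z_alpha = norm.ppf(1.0 - alpha / 2.0)
    z_beta = norm.ppf(power)
    variance_term = p1 * (1.0 - p1) + p2 * (1.0 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance_term / (p1 - p2) ** 2)

# Illustrative example: 30% good glycaemic control under usual care versus
# an expected 55% after a health-education intervention.
print(n_per_group_two_proportions(0.30, 0.55))   # ~58 patients per arm
```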
The study sought to establish the household income of the respondents. Based on the data presented in Figure 4.6, 28% reported a monthly household income of Kshs. 20,001-30,000, 22% reported a household income of Kshs. 50,001-60,000, 10% indicated a household income of Kshs. 40,001-50,000, while 11% indicated a household income of Kshs. 30,001-40,000; also, 9% of the respondents reported a household income of below Kshs. 10,000 and 5% reported a household income of Kshs. 10,001-20,000. Only 15% of the respondents reported a monthly household income above Kshs. 60,000. Based on the data presented in Table 1, 54% of the respondents reported that none of their closest family members had diabetes mellitus 2, 36% reported that a brother/sister had diabetes mellitus 2, 10% indicated that a parent had diabetes mellitus 2, while only 4% reported that a grandparent had diabetes mellitus 2. The findings imply that, for a greater proportion of the respondents, the disease was not a result of hereditary factors but rather of their lifestyle. This further shows that there was a low level of knowledge and information on diabetes risk factors as well as management among the respondents targeted for the study. Figure 7 presents data on the household size of the respondents: 36% of the respondents reported a household of 6-8 persons, 22% reported a household of 3-5 persons, 14% reported a household of 8-10 persons, while 19% reported a household of above 10 persons. However, only 9% of the respondents reported a household of 1-2 persons. Figure 8: Previous Training on Diabetes Management. Source: Survey Data (2019). As presented in Figure 4.8, 94% of the respondents indicated that they had not undergone any training on diabetes management; only 6% of the respondents agreed that they had undergone training on diabetes management. Figure 4.9 presents data on the BMI of the respondents: 32% recorded a BMI ranging between 25 Kg/m² and 29 Kg/m², 23% recorded BMI values ranging between 30-34 Kg/m², 20% recorded a BMI ranging between 21 Kg/m² and 24 Kg/m². Only 10% of the respondents recorded BMI values above 34 Kg/m².
2022-05-28T18:37:51.878Z
2019-11-30T00:00:00.000
{ "year": 2019, "sha1": "5309ae428b66f6a0c76192a129773854d659abcd", "oa_license": null, "oa_url": "http://www.internationaljournalcorner.com/index.php/theijst/article/download/150046/104692", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "5309ae428b66f6a0c76192a129773854d659abcd", "s2fieldsofstudy": [ "Sociology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
268315475
pes2o/s2orc
v3-fos-license
Recent trends in the epidemiology and clinical outcomes of inflammatory bowel disease in South Korea, 2010-2018 BACKGROUND Inflammatory bowel disease (IBD) was previously regarded as a Western disease; however, its incidence is increasing in the East. The epidemiology of IBD in Asia differs significantly from the patterns in the West. AIM To comprehensively investigate the epidemiology of IBD in South Korea, including its incidence, prevalence, medication trends, and outcomes. METHODS We analyzed claims data from the Health Insurance Review and Assessment Service and Rare and Intractable Diseases (RIDs), operated by the National Health Insurance Service of South Korea. Patients with IBD were identified based on the International Classification of Diseases, Tenth Revision, and RID diagnostic codes for Crohn's disease (CD) and ulcerative colitis (UC) from 2010 to 2018. RESULTS In total, 14498 and 31409 patients were newly diagnosed with CD and UC, respectively, between 2010 and 2018. The annual average incidence of CD was 3.11 cases per 10⁵ person-years, and that of UC was 6.74 cases per 10⁵ person-years. Since 2014, the incidence rate of CD has been stable, while that of UC has steadily increased, shifting the peak age group from 50-year-olds in 2010 to 20-year-olds in 2018. The CD and UC prevalence increased consistently over the study period; the use of 5-aminosalicylates and corticosteroids gradually decreased, while that of immunomodulators and biologics steadily increased in both CD and UC. The clinical outcomes of IBD, such as hospitalization and surgery, decreased during the study period. CONCLUSION The CD incidence has been stable since 2014, but that of UC has increased with a shift to a younger age at peak incidence between 2010 and 2018. IBD clinical outcomes improved over time, with increased use of immunomodulators and biologics. INTRODUCTION Inflammatory bowel disease (IBD), which includes Crohn's disease (CD) and ulcerative colitis (UC), is a chronic relapsing inflammatory disease of the gastrointestinal tract. Although it was previously regarded as a Western disease, its incidence has increased in newly industrialized countries, including South Korea, since the start of the twenty-first century [1,2].
South Korea is a representative Asian country in which the incidence and prevalence of IBD are rapidly increasing [1]. According to a longitudinal population-based study conducted in the Songpa-Kangdong (SK) district of Seoul, South Korea, the incidence rates of both CD and UC have rapidly increased since the first diagnosis of a patient with IBD in Korea in 1986 [3] (from 0.06/10⁵ inhabitants/year in 1986-1990 to 2.44/10⁵ inhabitants/year in 2011-2015 for CD, and from 0.29/10⁵ inhabitants/year in 1986-1990 to 5.82/10⁵ inhabitants/year in 2011-2015 for UC) [4]. Several nationwide population-based studies have investigated the epidemiology of IBD in South Korea [3,5,6]. Jung et al [3] reported the average annual incidence of UC and CD to be 5.0 and 2.8 per 10⁵ person-years, respectively, from 2011 to 2014, and Kwak et al [6] reported that both the prevalence and incidence of CD and UC showed an increase between 2007 and 2016 (1.9-fold and 1.2-fold increases in the prevalence and incidence of CD, respectively, and 1.6-fold and 1.3-fold increases in the prevalence and incidence of UC, respectively). However, despite the previous epidemiological data on IBD in South Korea, studies on recent trends are still lacking. Furthermore, although the clinical course of IBD has changed due to the recent introduction of various new therapeutic agents, there are very few demographic studies on its clinical outcomes. The current study aimed to explore the incidence and prevalence of IBD using a nationwide population-based cohort from the National Health Insurance Service (NHIS) database and to investigate the temporal changes in medication and clinical outcomes of IBD in South Korea. Data source The NHIS is a mandatory nationwide insurance system operated by the government that covers approximately 97% of the entire population of South Korea. It contains medical claims data, including patient demographics, principal diagnosis, and comorbidities, using the International Classification of Diseases, 10th revision (ICD-10) codes, and prescriptions, admissions, and procedures [7]. All the information from the NHIS is integrated into the Health Insurance Review and Assessment Service (HIRA) claims database, which is a comprehensive data source for epidemiological studies in South Korea [3,5,6]. The NHIS established the Rare and Intractable Disease (RID) registration program in 2006 to provide a medical copayment reduction of 10% to patients with RIDs, including IBD. To register in this program and obtain a special code (V code), patients must obtain diagnostic approval from a physician based on the strict diagnostic criteria defined by the NHIS [5]. Because it is directly related to the government's support for medical expenses, RID registration is conducted strictly to ensure high reliability of the RID diagnosis. The current retrospective nationwide cohort study used claims data from HIRA and RIDs (operated by the NHIS) and was approved by the Seoul National University Hospital Institutional Review Board (1806-060-949). Because there are no personal identifiers, informed consent was waived.
Patient identification Patients diagnosed with IBD between 2010 and 2018 were included in this study. To establish an accurate definition for identifying patients with IBD, we compared two different definitions: one defined patients with IBD using only ICD-10 codes, whereas the other used both ICD-10 and V codes. Finally, we identified patients with CD as those with both ICD-10 code K50 and RID code V130, and patients with UC as those with ICD-10 code K51 and RID code V131. The diagnostic accuracy of each definition for identifying patients with IBD was assessed using data from the SK-IBD study, in which all IBD cases were strictly identified through a medical record review conducted by two experts [4]. The following information was collected from the HIRA claims database: date of diagnosis, comorbidities, prescription records, and IBD-related clinical outcomes, including emergency room (ER) visits, hospitalizations, and surgery. Definition of IBD-related medication and outcomes IBD-related medications included 5-aminosalicylates (5-ASA), immunomodulators (azathioprine or 6-mercaptopurine), systemic corticosteroids (including both oral and intravenous corticosteroids), and biologics (infliximab, adalimumab, vedolizumab, ustekinumab, or tofacitinib), prescribed at least once, along with the diagnostic codes. ER visit was defined as a visit to the ER for a primary diagnosis of IBD. Hospitalization was defined as an admission for ≥ 3 d with the principal diagnosis of IBD. Surgery was defined as resection of the small bowel, colon, or rectum in patients diagnosed with IBD. Statistical analysis The annual incidence of IBD was defined as the number of newly diagnosed cases of IBD per 10⁵ individuals in the respective year, based on the registered resident population for that year. Prevalence was defined as the number of patients with IBD per 10⁵ person-years based on the registered resident population for that year. Incidence and prevalence were calculated for different age and sex groups, and each value was expressed as the number of cases per 10⁵ person-years. Trends in IBD-related prescriptions and clinical outcomes, including ER visits, hospitalization, and surgery, between 2010 and 2018 were examined. Trends in prescriptions and clinical outcomes were described based on the number of patients who were prescribed IBD-related medications or had an ER visit, hospitalization, or surgery annually. Statistical analyses were performed using SAS version 9.4 (SAS Institute, Cary, NC, United States) and R version 3.4.3 (The R Foundation for Statistical Computing, Vienna, Austria). Figure 1 provides an overview of the study's selection process. Accuracy of diagnostic definitions To assess the accuracy of the diagnostic definitions, we identified patients with IBD from the healthcare data of the SK district, using only ICD-10 codes or both ICD-10 codes and V-codes, and compared them with the published results of the SK-IBD study from 2011 to 2015 [4]. Compared to using ICD-10 and V codes for identifying patients with IBD, using only ICD-10 codes overestimated the incidence and prevalence (mean annual incidence rate per 10⁵ inhabitants: 17.34 for CD and 109.05 for UC). However, when V-codes were applied in addition to the ICD-10 codes, the results (mean annual incidence rate per 10⁵ inhabitants: 3.79 for CD and 7.40 for UC) were similar to the incidence rates reported in a previous study for the SK district (mean annual incidence rate per 10⁵ inhabitants: 2.44 for CD and 5.82 for UC) (Supplementary Table 1).
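As a rough illustration of the incidence definition above, the following Python sketch computes annual incidence per 10⁵ person-years from hypothetical claims-derived case counts and registered resident populations; the numbers are placeholders, not the study's data, and this is not the authors' analysis code.

```python
# Minimal sketch of the incidence definition (hypothetical numbers, not study data).

new_cases = {2016: 1650, 2017: 1700, 2018: 1720}   # newly diagnosed CD cases per year (assumed)
population = {2016: 51_246_000, 2017: 51_362_000, 2018: 51_607_000}  # registered residents (assumed)

def incidence_per_1e5(cases: dict, pop: dict) -> dict:
    """Annual incidence per 10^5 person-years: new cases / resident population * 10^5."""
    return {year: round(cases[year] / pop[year] * 1e5, 2) for year in cases}

print(incidence_per_1e5(new_cases, population))    # e.g. {2016: 3.22, 2017: 3.31, 2018: 3.33}
```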
Incidence Between 2010 and 2018, 45907 patients were newly diagnosed with IBD; specifically, 14498 (31.6%) and 31409 (68.4%) patients were diagnosed with CD and UC, respectively. Table 1 and Figure 2 present the incidence of IBD in South Korea between 2010 and 2018. The mean annual incidence rates during the study period were 3.1 per 10⁵ and 6.7 per 10⁵ for CD and UC, respectively. The incidence of UC increased steadily throughout the study period, from 5.3 to 8.2 per 10⁵ between 2010 and 2018, an annual increase of approximately 0.4 cases per 10⁵. Meanwhile, the incidence of CD increased from 2.5 to 3.3 per 10⁵ between 2010 and 2014 and remained relatively stable thereafter at approximately 3.3 per 10⁵. In 2018, the UC-to-CD incidence ratio was 2.5. Age-specific incidence rates showed different patterns for CD and UC throughout the study period (Figure 3). For CD, the peak incidence was observed in individuals aged 10-19 years. The incidence rate remained relatively high in the 20-29-year-olds but decreased significantly after age 30, exhibiting a consistent pattern over time. In contrast, there was a shift in the peak incidence age for UC, decreasing from 50-59 years in 2010 to 20-29 years in 2018. Both CD and UC showed higher incidence rates in men than in women over the study period, with sex differences being more prominent in patients with CD than in those with UC. In 2018, the male-to-female ratios for CD and UC were 2.4 and 1.7, respectively. Additionally, age- and sex-specific incidence rates demonstrated distinct patterns for CD and UC in 2018 (Figure 4). The peak incidence of CD occurred in patients aged 10-19 years in both men and women. In contrast, while the incidence of UC peaked in the 20-29-year-olds for both men and women, the second peak incidence of UC was observed only in males in the 60-69-year-old age group. Prevalence The prevalence of IBD steadily increased between 2010 and 2018, reaching 17168 and 34420 patients with CD and UC, respectively, in 2018. Both CD and UC more than doubled in prevalence over the study period, from 15.1 per 10⁵ in 2010 to 32.7 per 10⁵ in 2018 for CD, and from 31.6 per 10⁵ in 2010 to 65.5 per 10⁵ in 2018 for UC. The prevalence rate was higher in men than in women for both CD and UC, with a greater sex difference observed, especially for CD; the values were 46.8 per 10⁵ in men and 18.5 per 10⁵ in women for CD, and 77.1 per 10⁵ in men and 53.9 per 10⁵ in women for UC (Supplementary Figure 1). Furthermore, the prevalence of IBD increased in all age groups throughout the study period. The peak prevalence of CD occurred in the 20-29-year-old age group, whereas that of UC occurred in the 60-69-year-old age group, which was consistent for both men and women (Supplementary Figures 2 and 3).
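To make the age-specific analysis behind the peak-age shift concrete, here is a small illustrative Python sketch (hypothetical counts and populations, not the study data) that computes age-specific incidence and reports the peak age group for a given year.

```python
# Illustrative sketch: age-specific incidence per 10^5 person-years and the peak age group.
# All counts and populations are hypothetical placeholders.

cases_by_age = {"20-29": 520, "30-39": 430, "40-49": 410, "50-59": 460, "60-69": 300}
pop_by_age = {"20-29": 6_800_000, "30-39": 7_100_000, "40-49": 8_200_000,
              "50-59": 8_500_000, "60-69": 6_300_000}

def age_specific_incidence(cases: dict, pop: dict) -> dict:
    return {group: cases[group] / pop[group] * 1e5 for group in cases}

rates = age_specific_incidence(cases_by_age, pop_by_age)
peak_group = max(rates, key=rates.get)
print({g: round(r, 2) for g, r in rates.items()}, "peak age group:", peak_group)
```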
Changes in prescription patterns of IBD medications 5-ASA, immunomodulators, systemic corticosteroids, and biologics were administered to 12435 (72.4%), 10516 (61.3%), 4323 (25.2%), and 6278 (36.6%) patients with CD, respectively, in 2018. In the UC group, 5-ASA was administered to almost all patients (32323; 93%), whereas immunomodulators, systemic corticosteroids, and biologics were administered to 5789 (16.8%), 6271 (18.2%), and 2822 (8.2%) patients, respectively. Figure 5 shows the temporal changes in the use of IBD medications between 2010 and 2018. The number of patients using immunomodulators and biologics increased in both the CD and UC groups during the study period. In patients with CD, the use of immunomodulators increased from 52.1% in 2010 to 61.3% in 2018, and that of biologics increased from 11.9% in 2010 to 36.6% in 2018. Moreover, in patients with UC, the use of immunomodulators increased from 13.1% in 2010 to 16.8% in 2018, whereas that of biologics increased from 0.7% in 2010 to 8.2% in 2018. Conversely, the use of steroids decreased during the same period, from 32.1% in 2010 to 25.2% in 2018 for CD and from 23.8% in 2010 to 18.2% in 2018 for UC. Finally, the use of 5-ASA decreased from 87.9% in 2010 to 72.4% in 2018 in CD while remaining relatively unchanged in UC patients throughout the study period. Clinical outcomes Figure 6 shows the trend in IBD outcomes, including IBD-related ER visits, hospitalizations, and surgeries, between 2010 and 2018. In 2018, 1991 (11.6%) and 1081 (3.1%) patients with CD and UC, respectively, visited the ER at least once with a diagnosis of IBD. The rate of ER visits did not change significantly during the study period. A total of 3448 (20.1%) and 2440 (7.1%) patients with CD and UC, respectively, were hospitalized in 2018. The hospitalization rates for CD and UC showed a decreasing trend during the study period, from 24.1% in 2010 to 20.1% in 2018 for CD and from 8.9% in 2010 to 7.1% in 2018 for UC. Regarding surgery, 399 patients with CD (2.3%) and 42 patients with UC (0.1%) underwent bowel resection for complications related to IBD in 2018. The surgery rate for patients with CD decreased from 3.4% in 2010 to 2.3% in 2018. Regarding UC, there was a decreasing trend in the surgery rate, from 0.4% in 2010 to 0.2% in 2014; however, it remained relatively unchanged thereafter. DISCUSSION This nationwide population-based study investigated the comprehensive epidemiological data on IBD in South Korea from 2010 to 2018. To the best of our knowledge, this was the first study to analyze the epidemiology of IBD in an Asian population, considering medication use and IBD-related clinical outcomes, such as ER visits, hospitalizations, and surgeries. Throughout the study period, the incidence of UC increased continuously, with a shift towards the younger age group, whereas that of CD plateaued from 2014 onwards. Furthermore, patients with more recently diagnosed IBD showed higher use rates of immunomodulators and biologics and lower rates of hospitalization and surgery.
The average annual incidence rates of IBD in this study from 2010 to 2018 were 3.1 per 10⁵ for CD and 6.7 per 10⁵ for UC, comparable to the findings of previous studies conducted in Korea using the nationwide HIRA database. In such studies, the incidence of CD and UC ranged from 2.6 to 3.2 per 10⁵ people and from 4.3 to 5.3 per 10⁵ people, respectively, during 2005-2016 [3,5,6,8]. The incidence of IBD in the present study was lower than that in North America (Olmsted County, Minnesota; the incidence of CD was 10.7 per 10⁵ people and that of UC was 12.2 per 10⁵ people) and Europe (Denmark; the incidence of CD was 9.1 per 10⁵ people and that of UC was 18.6 per 10⁵ people) [9,10]. However, this was the highest estimate among Asian countries, reflecting the economic advancement and Westernization of a society undergoing rapid industrialization in Korea [11,12]. Remarkably, the incidence of UC increased continuously over the entire study period, whereas that of CD plateaued from 2014 onwards. Although the SK cohort study showed an increased incidence of IBD, the increase has slowed down recently [4]. Moreover, recent studies using a nationwide database have reported the incidence of IBD to have plateaued or even decreased [3,5,6]. The global evolution of IBD could explain this trend; IBD in Korea has recently accelerated through the incidence stage to the stage of compounding prevalence [2]. Unlike other industrialized countries in Asia that experienced a rising incidence of IBD in the 2020s, Korea and Japan transitioned into an accelerated incidence stage earlier owing to rapid industrialization [6,13]. Studies with more recent data may show a trend toward a decrease or plateau in incidence. Importantly, our study extends previous epidemiological studies in Korea with more recent data and confirms a stabilizing incidence and an increasing trend in the prevalence of IBD. Longitudinal studies with long-term follow-ups would be necessary to evaluate the transition of this epidemiological stage accurately. One of the interesting findings of this study was the shift in the peak age of UC incidence to a younger age, which changed from individuals in their 50s to those in their 20s, during the study period. This trend has also been observed in several previous studies in Korea [6,8,14]. Although the exact reason for this shift is still unclear, changes in dietary habits and lifestyle factors may be linked. Furthermore, the increased frequency of eating out and using delivered food, especially among young people, and the rise in coffee and sugar-sweetened beverage consumption, could play a role [15,16]. Moreover, reduced exposure to green areas and increased time spent in urban living and working environments may have contributed to this trend [17,18]. Finally, with improved access to healthcare over the past few decades, the rate of endoscopic examinations among young people may have increased, leading to a higher rate of early diagnosis of UC [19]. Further studies are recommended to elucidate the underlying reasons for this trend.
Another interesting finding of this study was that the rates of hospitalization and surgery in patients with IBD decreased over the study period, in line with the increased use of immunomodulators and biologics. Temporal changes in IBD outcomes following the use of new therapeutic agents have been addressed in previous studies. In a hospital-based cohort study of patients with UC, conducted in Korea between 1986 and 2015, the rates of hospitalization and surgery decreased over time; in particular, the rate of surgery significantly decreased after the introduction of anti-tumor necrosis factor agents [20]. A meta-analysis of 44 population-based cohort studies revealed a decreasing trend in the risk of surgery, which was significantly lower in patients diagnosed in the 21st century than in those diagnosed in the 20th century for both CD and UC [21]. The ER visit and hospitalization rates in this study were comparable to those reported in previous Korean studies [3,20]. However, the surgery rate was much lower than that in previous studies [7,21,22]. This discrepancy could be attributed to differences in the methods used to calculate the surgery rate. Two Western studies have reported that the risk of surgery is highest within the first 1 or 2 years after the diagnosis of UC, and one South Korean study has reported that the rate of surgery is low between 5 and 10 years after the diagnosis of UC, increasing slightly thereafter [20,23,24]. Our current study analyzed the rate of surgery in patients with IBD each year, irrespective of the time of diagnosis, so it did not reflect the cumulative probabilities of surgery after IBD diagnosis. Furthermore, considering that previous studies reported a gradual decrease in the surgery rate over time, the lower rate in this study could have been due to our consideration of a more recent time period [21,25]. This study had some limitations. First, the certainty of the IBD diagnosis was limited owing to the nature of the claims data. To overcome this concern, we defined patients with IBD more accurately using both ICD-10 codes and V-codes rather than relying only on ICD-10 codes, based on a comparative analysis of patients with IBD in the SK-IBD study. Our definition was further validated internally at a tertiary referral hospital in Korea (Seoul National University Hospital), demonstrating sensitivity rates of 94.5% and 96.4% for CD and UC, respectively. Second, since we used claims data, we could not obtain detailed clinical information, such as symptoms, disease phenotype, or severity, which may have influenced the clinical outcomes of IBD. Thus, we could not establish a causal relationship between using immunomodulators and biologics and IBD-related clinical outcomes. Third, the present study included a study cohort that was retrospectively enrolled with a diagnostic code for IBD during the study period from 2010 to 2018 and was followed up until 2018. Therefore, the duration of the disease and follow-up within the cohort showed heterogeneity; moreover, the follow-up duration may be too short to reflect the entire clinical course of IBD, particularly among subjects diagnosed recently. Finally, although we carefully devised an operational definition of clinical outcomes to encompass IBD-related results, inherent uncertainty remained since the NHIS does not provide detailed clinical data of individual patients or information on individual identifiers.
CONCLUSION In conclusion, our study indicated a distinct shift in the epidemiology of the Korean population with IBD towards a stabilizing incidence and a younger age at diagnosis. The incidence of CD has been stable since 2014, whereas that of UC has continuously increased, peaking at a younger age. Moreover, the use of immunomodulators and biologics increased notably, aligning with a reduction in the rates of hospitalization and surgery during the study period. Our findings shed light on the evolving landscape of IBD in Korea and reveal changes in its incidence, treatment patterns, and clinical outcomes.
Figure 1: CONSORT flow diagram illustrating the study process. IBD: Inflammatory bowel disease; ER: Emergency room.
Figure 3: Age-specific incidence of Crohn's disease and ulcerative colitis in 2010, 2014, and 2018 in South Korea. A: Crohn's disease; B: Ulcerative colitis.
Figure 6: Temporal trend of clinical outcomes in patients with Crohn's disease and ulcerative colitis, from 2010 to 2018. A: Ulcerative colitis; B: Crohn's disease. ER: Emergency room.
Table 1: Incidence rates of Crohn's disease and ulcerative colitis in South Korea from 2010 to 2018, by year of diagnosis (per 10⁵ person-years). CD: Crohn's disease; UC: Ulcerative colitis.
2024-03-11T17:03:52.195Z
2024-03-07T00:00:00.000
{ "year": 2024, "sha1": "994253b92cf21a1123feb4b36207d18d702de075", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.3748/wjg.v30.i9.1154", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "3408ce8a03d0181ac0bb151d952cd69e7255e24d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
12060029
pes2o/s2orc
v3-fos-license
A Classification of Grammar Development Strategies Introduction: Four grammar development strategies There are several potential strategies to build wide-coverage grammars; therefore, there is a need for classifying these various strategies. In this paper, we propose a classification of grammar development strategies according to two criteria: • Hand-crafted versus automatically acquired grammars • Grammars based on a low versus high level of syntactic abstraction. As summarized in Table 1, our classification yields four types of grammars, which we call respectively type A, B, C and D. (Table 1: A classification of grammars, crossing hand-crafted versus automatically acquired acquisition with a low versus high level of syntactic abstraction.) Of these four types, three have already been implemented to develop wide-coverage grammars for English within the Xtag project, and an implementation of the fourth type is underway. Most of our examples are based on the development of wide-coverage Tree Adjoining Grammars (TAG), but it is important to note that the classification is relevant within other linguistic frameworks as well (HPSG, GPSG, LFG etc.) and is helpful for discussing portability among several syntactic frameworks. We devote a section to each type of grammar in our classification. We discuss the advantages and drawbacks of each approach, and especially focus on how each type performs w.r.t. grammar coverage, linguistic adequacy, maintenance, over- and under-generation, as well as portability to other syntactic frameworks. We discuss grammar replication as a means to compare these approaches. Finally, we argue that the fourth type, which is currently being implemented, exhibits better development properties. TYPE A Grammars: hand-crafted The limitations of Type A grammars (hand-crafted) are well known: although linguistically motivated, developing and maintaining a totally hand-crafted grammar is a challenging (perhaps unrealistic?) task. Such a large hand-crafted grammar for TAGs is described for English in (XTAG Research Group, 2001). Smaller hand-crafted grammars for TAGs have been developed for other languages (e.g. French (Abeille, 1991)), with similar problems. Of course, the limitations of hand-crafted grammars are not specific to the TAG framework (see e.g. (Clement and Kinyon, 2001) for LFG). Coverage issues The Xtag grammar for English, which is freely downloadable from the project homepage (along with tools such as a parser and an extensive documentation), has been under constant development for approximately 15 years. It consists of more than 1200 elementary trees (1000 for verbs) and has been tested on real text and test suites. For instance, (Doran et al., 1994) report that 61% of 1367 grammatical sentences from the TSNLP test-suite (Lehman et al., 1996) were parsed with an early version of the grammar. More recently, (Prasad and Sarkar, 2000) evaluated the coverage of the grammar on "the weather corpus", which contained rather complex sentences with an average length of 20 words per sentence, as well as on the "CSLI LKB test suite" (Copestake, 1999). In addition, in order to evaluate the range of syntactic phenomena covered by the Xtag grammar, an internal test-suite which contains all the example sentences (grammatical and ungrammatical) from the continually updated documentation of the grammar is distributed with the grammar.
(Prasad and Sarkar, 2000) argue that constant evaluation is useful not only to get an idea of the coverage of a grammar, but also as a way to continuously improve and enrich the grammar. Parsing failures were due, among other things, to POS errors, missing lexical items, missing trees (i.e. grammar rules), feature clashes, bad lexicon-grammar interaction (e.g. a lexical item anchoring the wrong tree(s)), etc. Maintenance issues As a hand-crafted grammar grows, consistency issues arise and one then needs to develop maintenance tools. (Sarkar and Wintner, 1999) describe such a maintenance tool for the Xtag grammar for English, which aims at identifying problems such as typographical errors (e.g. a typo in a feature can prevent unification at parse time and hurt performance), undocumented features (features from older versions of the grammar, that no longer exist), type-errors (e.g. English verb nodes should not be assigned a gender feature), etc. But even with such maintenance tools, coverage, consistency and maintenance issues still remain. Are hand-crafted grammars useful? Some degree of automation in grammar development is unavoidable for any real world application: small and even medium-size hand-crafted grammars are not useful for practical applications because of their limited coverage, but larger grammars give way to maintenance issues. However, despite the problems of coverage and maintenance encountered with hand-crafted grammars, such experiments are invaluable from a linguistic point of view. In particular, the Xtag grammar for English comes with a very detailed documentation, which has proved extremely helpful to devise increasingly automated approaches to grammar development (see sections below). TYPE B Grammars: Automatically extracted To remedy some of these problems, Type B grammars (i.e. automatically acquired, mostly from annotated corpora) have been developed. For instance, (Chiang, 2000), (Xia, 2001) and (Chen, 2001) all automatically acquire large TAGs for English from the Penn Treebank (Marcus et al., 1993). However, despite an improvement in coverage, new problems arise with this type of grammar: the availability of annotated data which is large enough to avoid sparse data problems, possible lack of linguistic adequacy, extraction of potentially unreasonably large grammars (which slows down parsing and increases ambiguity), and lack of domain and framework independence (e.g. a grammar extracted from the Penn Treebank will reflect the linguistic choices and the annotation errors made when annotating the treebank). We give two examples of problems encountered when automatically extracting TAG grammars: the extraction of a wrong domain of locality, and the problem of sparse data in the integration of the lexicon with the grammar. Wrong domain of locality Long distance dependencies are difficult to detect accurately in annotated corpora, even when such dependencies can be adequately modeled by the grammar framework used for extraction (which is the case for TAGs, but not, for instance, for Context Free Grammars). For example, (Xia, 2001) extracts two elementary trees from a sentence such as Which dog does Hillary Clinton think that Chelsea prefers. These trees are shown in Figure 1.
Unfortunately, because of the potentially unbounded dependency, the two trees exhibit an incorrect domain of locality: the Wh-extracted element ends up in the wrong elementary tree, as an argument of "think" instead of as an argument of "prefer". (Some extraction algorithms, such as those of (Chen, 2001) or (Chiang, 2000), do retrieve the right domain of locality for this specific example, but extract a domain of locality which is incorrect in some other cases.)
Figure 1: Extraction of the wrong domain of locality.
This problem is not specific to TAGs, and would translate in other frameworks into the extraction of the "wrong" dependency structure. (One can argue that the problem does not appear when using simple CFGs, and/or that this problem is only of interest to linguists. A counter-argument is that the linguistic adequacy of a grammar, whether extracted or not, does matter. An extreme caricature to illustrate this point: the context-free grammar S → S word | word allows one to robustly and unambiguously parse any text, but is not very useful for any further NLP.) Sparse data for lexicon-grammar integration Existing extraction algorithms for TAGs acquire a fully lexicalized grammar. A TAG grammar may be viewed as consisting of two components: on the one hand "tree templates", and on the other hand a lexicon which indicates which tree template(s) should be associated with each lexical item. Suppose the following three sentences are encountered in the training data: 1. Peter watches the stars 2. Mary eats the apple 3. What does Peter watch? From these three sentences, two tree templates will be correctly acquired, as shown in Figure 2: the first one covers the canonical order of the realization of arguments for sentences 1 and 2, and the second covers the case of a Wh-extracted object for sentence 3. Concerning the interaction between the lexicon and the grammar rules, the fact that "watch" should select both trees will be accurately detected. However, the fact that "eat" should also select both trees will be missed, since "eat" has not been encountered in a Wh-extracted-object construction.
Figure 2: Correct templates, but incomplete lexicon-grammar interface.
A level of syntactic abstraction is missing: in this case, the notion of subcategorization frame. This is especially noticeable within the TAG framework from the fact that in a hand-crafted TAG the grammar rules are grouped into "tree families", with one family for each subcategorization frame (transitive, intransitive, ditransitive, etc.), whereas automatically extracted TAGs do not currently group trees into families. TYPE C Grammars To remedy the lack of coverage and the maintenance problems linked to hand-crafted grammars, as well as the lack of generalization and linguistic adequacy of automatically extracted grammars, new syntactic levels of abstraction are defined. In the context of TAGs, one can cite the notion of MetaRules (Becker, 2000), (Prolo, 2002), and the notion of MetaGrammar (Candito, 1996), (Xia, 2001). MetaRules A MetaRule works as a pattern-matching tool on trees. It takes as input an elementary tree and outputs a new, generally more complex, elementary tree. Therefore, in order to create a TAG, one can start from one canonical elementary tree for each subcategorization frame and a finite number of MetaRules which model syntactic transformations (e.g. passive, wh-questions, etc.) and automatically generate a full-size grammar.
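To make the MetaRule idea concrete, here is a deliberately simplified Python sketch; it is not the Xtag machinery or the (Prolo, 2002) implementation, and the template notation is invented for illustration. It also shows how grouping verbs by subcategorization frame lets "eat" select the Wh-extracted template it was never observed with.

```python
# Toy illustration of a MetaRule as a function from one schematic tree template to another.
# Templates are plain strings here; real TAG elementary trees are richer objects.

CANONICAL_TRANSITIVE = "(S (NP0) (VP (V) (NP1)))"          # e.g. "Peter watches the stars"

def wh_object_extraction(template: str) -> str:
    """Schematic metarule: front the object NP1 as a Wh constituent and leave a trace."""
    assert "(NP1)" in template
    body = template.replace("(NP1)", "(NP1-trace)")
    return f"(S (WH-NP1) {body})"                           # e.g. "What does Peter watch?"

# One canonical tree per subcategorization frame plus the metarules yields a tree family;
# every verb of the frame selects the whole family, so "eat" gets the Wh-extracted
# template even though it was never seen in that construction.
transitive_family = [CANONICAL_TRANSITIVE, wh_object_extraction(CANONICAL_TRANSITIVE)]
lexicon = {verb: transitive_family for verb in ("watch", "eat")}
print(lexicon["eat"][1])
```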
(Prolo, 2002) started from 57 elementary trees and 21 hand-crafted MetaRules, and re-generated the verb trees of the hand-crafted Xtag grammar for English described in the previous section. The replication of the hand-crafted grammar for English, using a MetaRule tool, presents interesting aspects: it allows a direct comparison of the two approaches. Some trees generated by (Prolo, 2002) were not in the hand-crafted grammar (e.g. various orderings of "by phrase" passives), while some others that were in the hand-crafted grammar were not generated by the MetaRules. This replication process makes it possible, with detailed scrutiny of the results, to: • identify what should be considered as under- or over-generation of the MetaRule tool, and • identify what should be considered as under- or over-generation of the hand-crafted grammar. Thus, grammar replication tasks make it possible to improve both the hand-crafted and the MetaRule-generated grammars. MetaGrammars Another possible approach for compact and abstract grammar encoding is the MetaGrammar (MG), initially developed by (Candito, 1996). The idea is to compact linguistic information thanks to an additional layer of linguistic description, which imposes a general organization for syntactic information in a three-dimensional hierarchy: each terminal class in dimension 1 describes a possible initial subcategorization (i.e. a TAG tree family); each terminal class in dimension 2 describes a list of ordered redistributions of functions (e.g. it allows one to add an argument for causatives, or to erase one for agentless passives); each terminal class in dimension 3 represents the surface realization of a surface function (e.g. it declares whether a direct object is pronominalized, wh-extracted, etc.). Each class in the hierarchy corresponds to the partial description of a tree (Rogers and Vijay-Shanker, 1994). A TAG elementary tree is generated by inheriting from exactly one terminal class from dimension 1, one terminal class from dimension 2, and n terminal classes from dimension 3 (where n is the number of arguments of the elementary tree being generated). For instance, the elementary tree for "Par qui sera accompagnee Marie" (By whom will Mary be accompanied) is generated by inheriting from transitive in dimension 1, from impersonal-passive in dimension 2, and from subject-nominal-inverted for its subject and questioned-object for its object in dimension 3. This compact representation allows one to generate a 5000-tree grammar from a hand-crafted hierarchy of a few dozen nodes, especially since nodes are explicitly defined only for simple syntactic phenomena (nodes for complex syntactic phenomena are generated by automatic crossings of the nodes for simple phenomena). The MG was used to develop a wide-coverage grammar for French (Abeille et al., 1999). It was also used to develop a medium-size grammar for Italian, as well as a generation grammar for German (Gerdes, 2002) using the newly available implementation described in (Gaiffe et al., 2002). A similar MetaGrammar approach has been described in (Xia, 2001) for English, although that particular work did not attempt to replicate the Xtag grammar, and thus the generated grammar is not directly comparable to the hand-crafted version of the grammar. MetaGrammars versus MetaRules: which is best? It would be desirable to have a way of comparing the results of the MetaGrammar approach with those of the MetaRule approach. Unfortunately, this is not possible so far because the two approaches have not been used within the same project(s). Therefore, in order to have a better comparison between these two approaches, we have started a second replication of the Xtag grammar for English, this time using a MG.
This replication should allow us to make a direct comparison between the hand-crafted grammar, the grammar generated with MetaRules, and the grammar generated with a MG. For this replication task, we use the more recent implementation presented in (Gaiffe et al., 2002) because their tool: • is freely available (http://www.loria.fr/equipes/led/outils/mgc/mgc.html), portable (Java), well maintained, and includes a Graphical User Interface; • outputs a standardized XML format (see http://atoll.inria.fr/clerger/tag20.dtd,xml for more details on format standardization efforts for TAG-related tools); • is flexible (one can have more than 3 dimensions in the hierarchy) and strictly monotonic w.r.t. the trees built; • supports "Hypertags", i.e. each elementary tree in the grammar is associated with a feature structure which describes its salient linguistic properties (the idea of "featurization" is very useful for applications such as text generation and supertagging (Kinyon, 2002), and is especially relevant for the automatic acquisition of a MG; see section 5). In the (Gaiffe et al., 2002) implementation, each class in the MG hierarchy can specify: • its SuperClass(es); • a feature structure (i.e. Hypertag) which captures the salient linguistic characteristics of that class; • what the class needs and provides; • a set of quasi-nodes; • constraints between quasi-nodes (father, dominates, precedes, equals); • traditional feature equations for agreement. The MG tool automatically crosses the nodes in the hierarchy, looking to create "balanced" classes, that is, classes that neither need nor provide anything. From these balanced terminal classes, elementary trees are generated. Figure 3 shows how a canonical transitive tree is automatically generated from 3 hand-written classes and the quasi-trees associated with these classes (this example is of course a simplification: for the sake of clarity it does not reflect the complex structure of our real hierarchy). Advantages and drawbacks of TYPE C grammars It is often assumed that MetaRule and MetaGrammar approaches exhibit some of the advantages of hand-crafted grammars (linguistic relevance) as well as some of the advantages of automatically extracted grammars (wide coverage), along with easier maintenance and better coherence. However, as is pointed out in (Barrier et al., 2000), grammar development based on hand-crafted levels of abstraction gives rise to new problems while not necessarily solving all the old problems: although the automatic generation of the grammar ensures some level of consistency, problems arise if mistakes are made while hand-crafting the abstract level (hierarchy or MetaRules) from which the grammar is automatically generated. This problem is actually more serious than with simple hand-crafted grammars, since an error in one node will affect ALL trees that inherit from this node. Furthermore, a large portion of the generated grammar covers rare syntactic phenomena that are not encountered in practice, which unnecessarily augments the size of the resulting grammars and increases ambiguity while not significantly improving coverage.
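The needs/provides crossing performed by the MG tool can be sketched as follows. The class names and contents below are invented for illustration; they do not reproduce the real hierarchy or the (Gaiffe et al., 2002) implementation.

```python
# Rough sketch of "balanced" class crossing: a crossing is kept when every need is provided.
# Class contents are invented for illustration only.

from itertools import product

classes = {
    "transitive":        {"provides": set(),        "needs": {"subject", "object"}},
    "canonical-subject": {"provides": {"subject"},  "needs": set()},
    "canonical-object":  {"provides": {"object"},   "needs": set()},
    "wh-object":         {"provides": {"object"},   "needs": set()},
}

def balanced(names):
    """True when the combined needs are exactly covered by the combined provides."""
    needs = set().union(*(classes[n]["needs"] for n in names))
    provides = set().union(*(classes[n]["provides"] for n in names))
    return needs == provides

# Cross one subcategorization class with one realization class per argument; each
# balanced crossing would correspond to a generated elementary tree.
crossings = [("transitive", subj, obj)
             for subj, obj in product(["canonical-subject"], ["canonical-object", "wh-object"])
             if balanced(("transitive", subj, obj))]
print(crossings)
```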
One crucial problem is that despite the automatic generation of the grammar (which eases maintenance), the interface between lexicon and grammar is still mainly manually maintained (and of course one of the major sources of parsing failures is missing or erroneous lexical entries). TYPE D Grammars However, the main potential advantage of such an abstract level of syntactic representation is framework independence. We argue that the main drawbacks of an abstract level of syntactic representation (over-generation, propagation of manual errors to generated trees, interface with the lexicon) may be solved if this abstract level is acquired automatically instead of being hand-crafted. Other problems such as sparse data problems are also handled by such a level of abstraction. This corresponds to type D in our classification. A preliminary description of this work, which consists of automatically extracting the hierarchy nodes of a MetaGrammar from the Penn Treebank (i.e. a high level of syntactic abstraction), may be found in (Kinyon and Prolo, 2002). The underlying idea is that a lot of abstract, framework-independent syntactic information is implicitly present in the treebank, and has to be retrieved. This includes: subcategorization information, potential valency alternations (e.g. passives are detected by a morphological marker on the POS of the verb, by the presence of an NP-Object "trace", and possibly by the presence of a prepositional phrase introduced by "by" and marked as "logical-subject"), and realization of arguments (e.g. Wh-extractions are noticed by the presence of a Wh constituent, co-indexed with a trace). In order to retrieve this information, we have examined all the possible tag combinations of the Penn Treebank 2 annotation style, and have determined for each combination, depending on its location in the annotated tree, whether it was an argument (optional or compulsory) or a modifier. We mapped each argument to a syntactic function. This allowed us to extract fine-grained subcategorization frames for each verb in the treebank. Each subcategorization frame is stored as a finite number of final classes using the (Gaiffe et al., 2002) MG tool: one class for each subcategorization frame (dimension 1 in Candito's terminology), and one class for each realization of a syntactic function. Such an abstract level may be viewed as a syntactic interlingua which can solve some portability issues. Conclusion We have proposed a classification of grammar development strategies and have examined the advantages and drawbacks of each of the four approaches. We have explained how "grammar replication" may prove an interesting task for comparing different development strategies, and have described how grammar replication is currently being used in the Xtag project at the University of Pennsylvania in order to compare hand-crafted grammars, grammars generated with MetaRules, and grammars generated with a MetaGrammar. We have reached the conclusion that of the four grammar development strategies proposed, the most promising one consists of automatically acquiring an abstract level of syntactic representation (such as the MetaGrammar). Future work will consist of pursuing this automatic acquisition effort on the Penn Treebank. In parallel, we are investigating how the abstract level we acquire can be used to generate formalisms other than TAGs (e.g. LFG).
2014-07-01T00:00:00.000Z
2002-09-01T00:00:00.000
{ "year": 2002, "sha1": "153c597ff582c644a792f1fc3715453a3cb9a955", "oa_license": null, "oa_url": "http://dl.acm.org/ft_gateway.cfm?id=1118790&type=pdf", "oa_status": "BRONZE", "pdf_src": "ACL", "pdf_hash": "153c597ff582c644a792f1fc3715453a3cb9a955", "s2fieldsofstudy": [ "Linguistics", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
199097127
pes2o/s2orc
v3-fos-license
Application Research and Thermal Economic Evaluation of Low-Pressure Economizer on Coal-Fired Units The energy-saving principle, layout and key parameters of the low-pressure economizer on coal-fired units are introduced. Taking a 220 MW unit as an example, the thermal economy of the low-pressure economizer is evaluated by the equivalent enthalpy drop method. Introduction At present, the designed flue gas temperature of coal-fired boilers is generally between 120-140 ℃, but the average exhaust temperature of a large number of coal-fired boilers is above 160 ℃, and a large amount of heat is lost with the high-temperature exhaust flue gas. Therefore, it is of great significance to reduce the exhaust gas temperature and make deep use of the waste heat of the flue gas, so as to improve the economy of the unit and reduce the coal consumption of power generation. The low-pressure economizer is a widely used system for reducing the exhaust gas temperature and recovering waste heat: flue gas heat replaces part of the extraction steam used to heat the condensate water, and the extraction steam that is squeezed out continues to do work in the turbine, which realizes the energy saving. Wanchao Lin of Xi'an Jiaotong University has made a thorough analysis of the thermal economy of the low-pressure economizer system by using the theory of equivalent enthalpy drop [1]; Zhenqi Zhou of Northeast Electric Power University has used the principle of equivalent enthalpy drop to analyze the thermal economy of coal-fired units equipped with low-pressure economizers [2]; Xinyuan Huang of Shandong University has also analyzed the low-pressure economizer of coal-fired power plants. The optimum design and operation of the low-pressure economizer in coal-fired power plants have been studied in depth, and a general mathematical model for the optimum design of the low-pressure economizer in coal-fired power plants has been proposed [3][4]. This paper introduces the energy-saving principle, layout mode and the method for determining the key parameters of the low-pressure economizer on coal-fired units. Taking a 220 MW unit as an example, the thermal economic evaluation is carried out by using the equivalent enthalpy drop method. Energy-saving principle of low-pressure economizer The installation of a low-pressure economizer is one of the effective measures to save energy and reduce consumption by utilizing the waste heat of boiler exhaust gas. The low-pressure economizer is installed at the back of the boiler, and its structure is similar to that of the economizer. The characteristic of the low-pressure economizer is that it is directly connected to the regenerative heating system and is a part of the steam turbine thermal system. The condensate water in the low-pressure regenerative system of the steam turbine enters the low-pressure economizer and, after absorbing heat from the high-temperature exhaust gas of the boiler in the heat exchanger, returns to the entrance of a certain low-pressure regenerative heater, thereby increasing the temperature of the condensate water and realizing the recovery and utilization of the waste heat from the exhaust gas of the boiler. After adding the low-pressure economizer to the system, a large amount of waste heat from the flue gas enters the regenerative system, which will squeeze out part of the extraction steam and reduce the thermal cycle efficiency of the system.
At the same time, the exhaust steam flow to the condenser will increase and the vacuum of the steam turbine will decrease, which is the main concern about the energy saving of the low-pressure economizer. Theoretically, after adding a low-pressure economizer, a large amount of waste heat from the flue gas enters the regenerative system, but this is additional heat absorbed and reused from the waste heat without increasing the fuel input of the system. It must be converted to electrical work with a certain efficiency. This extra thermal work is much larger than the work loss caused by reducing the steam extraction and by the slight drop in steam turbine vacuum, so the overall economic efficiency of the unit is improved. Installation location 3.1.1. Between ESP and FGD. The low-pressure economizer reduces the flue gas temperature to about 110 ℃ before the gas enters the desulfurization tower, and the heated condensate enters the regenerative system for circulation. In this arrangement, the low-pressure economizer actually plays the flue gas cooling role of the GGH heater. The low-pressure economizer is in a low-dust area, so the wear of the tube walls by fly ash is greatly reduced. Since most of the alkaline particles in the flue gas are trapped by the ESP and the flue gas at the outlet is acidic, corrosion in the low-pressure economizer and the subsequent flue should be considered. In addition, ash deposition in operation will also affect the heat transfer effectiveness of the low-pressure economizer. The advantage of this arrangement is that the economizer system and the regenerative system can be safely and reliably separated, and the safe operation of the unit will not be affected by accidents such as leakage of the economizer; the disadvantage is that it cannot make use of the benefits brought by the reduction of flue gas temperature to improve the operation efficiency of the electrostatic precipitator and to reduce the power of the induced draft fan and the booster fan. 3.1.2. Before ESP. The heat transfer temperature difference is low, the heat transfer area is large, and a large space is required, so when installing the low-temperature economizer, the location of the flue in the boiler site must be considered carefully. Optimized design of the heating surface can be used to reduce the size of the low-temperature economizer and alleviate the difficulties in arrangement. For example, finned tubes can be used instead of smooth tubes to increase the heat transfer area, and the number of tube rows can be greatly reduced. The low-pressure economizer works in a high-dust area, so the wear of the tube walls by fly ash must be considered. The greatest risk of this arrangement is corrosion. Because the flue gas temperature after the low-pressure economizer is close to the acid dew point of the flue gas, the dust collector, flue, induced draft fan and booster fan may be at risk of corrosion. 3.1.3. Two-stage arrangement. The first stage is arranged between the air preheater and the dust collector. The main consideration is that the flue gas outlet temperature is higher than the acid dew point to avoid the corrosion of downstream equipment. The second stage is arranged at the entrance of the desulfurizer. This arrangement generally sets a condensate water bypass around the first-stage low-pressure economizer.
When the load is low, part of the condensate water takes the first-stage bypass to reduce the heat absorbed from the flue gas, so that the flue gas temperature at the outlet of the economizer stays above the acid dew point of the flue gas, thereby avoiding corrosion of the dust collector, flue and other equipment. At present, the three schemes have all been successfully applied at home and abroad, and the first one has been adopted most often in China. Connection mode According to the positions of the inlet and outlet points of the low-pressure economizer in the thermal system, the connection can be divided into series and parallel arrangements. 3.2.1. Series arrangement. As shown in Figure 3, in the series arrangement all condensate water from the N-1 stage heater enters the low-pressure economizer in the tail flue, and returns to the N stage heater after absorbing the waste heat of the flue gas. The advantage of the series system is that the amount of water flowing through the low-pressure economizer is the largest. When the heating area of the low-pressure economizer is fixed, the cooling of the boiler flue gas and the heat load of the low-pressure economizer are larger, and the utilization of the flue gas waste heat is higher. The drawback is that the resistance on the condensate water side increases, and the condensate pump may have to be replaced because of an insufficient pressure head. 3.2.2. Parallel arrangement. The parallel system, as shown in Figure 4, can divert water around one or more heaters. The water flow through the low-pressure economizer is defined as Dd, and the ratio of Dd to D0 is defined as the water dividing coefficient β. For parallel systems, the heat Q not only squeezes out the N-stage steam extraction, but also squeezes out extraction steam from the bypassed heaters around the N stage. The advantage of the parallel system is that it is unnecessary to replace the condensate pump, because the resistance saved by bypassing one or more heaters is enough to compensate for the increased resistance of the low-pressure economizer and connecting pipes. Besides, it is also convenient for realizing the cascade development and utilization of the waste heat. The disadvantage is that the heat transfer temperature difference of the parallel system is lower than that of the series system. Because the diverted flow is less than the total flow, the outlet water temperature of the parallel system will be higher than that of the series system, and the utilization of the exhaust waste heat is relatively lower. Water diversion flow In a parallel system, the temperature of the exhaust gas can be controlled by adjusting the operation of the low-pressure economizer, that is, by changing the inlet water flow rate of the low-pressure economizer. However, changing the inlet water flow rate in operation will affect the outlet water temperature and the thermal economy of the low-pressure economizer. According to the equivalent enthalpy drop theory, when the water flow of the low-pressure economizer increases, the water dividing coefficient increases, which favors an increase of the equivalent enthalpy drop increment. But at the same time, the increase of water flow reduces the enthalpy of the outlet water of the low-pressure economizer, which decreases the equivalent enthalpy drop increment. Therefore, there is a problem of determining the optimal feed water flow rate in the operation of the low-pressure economizer. Inlet water temperature The inlet water temperature of the low-pressure economizer also involves selecting an optimal value.
In addition, the thermal corrosion and ash plugging of the low-pressure economizer should be considered. When the two requirements cannot be reconciled, the thermal corrosion problem must be given priority. In most cases, the optimal inlet temperature is much lower than the inlet temperature required to avoid thermal corrosion. Therefore, the actual inlet temperature of the low-pressure economizer is restricted by thermal corrosion considerations. According to the mechanism of low-temperature corrosion and the relationship between corrosion and wall temperature, in order to keep the annual corrosion rate of the metal below 0.2 mm, the temperature of the tube wall of the low-pressure economizer shall be within the following range: tld + 25 ℃ < t < 105 ℃, in which tld is the dew point of the vapor in the flue gas, which can be calculated from the elemental analysis of the fuel. Outlet position Condensate from the outlet of the low-pressure economizer can be introduced into different low-pressure heaters, and the increment of the equivalent enthalpy drop differs with the position at which the economizer outlet water re-enters the regenerative system. The higher the enthalpy of the condensate water at the outlet of the low-pressure economizer, the greater the cooling of the exhaust gas and the utilization of the exhaust heat, and the better the economy of the low-pressure economizer. The higher the level at which the exhaust waste heat is used, the better its economy. Obviously, raising the level at which the waste heat is used will also raise the exit gas temperature of the low-pressure economizer and reduce the utilization of the exhaust waste heat. Because of these two contradictory factors, it can be inferred that there is an optimal outlet position for the low-pressure economizer. This position can be determined by thermodynamic calculation or thermal tests. Economic evaluation of low-pressure economizer The thermal economic evaluation methods for the low-pressure economizer include the equivalent enthalpy drop method, the heat consumption transformation coefficient method, the heat balance method, the local quantitative analysis method and the thermal test method. Among them, the equivalent enthalpy drop method can obtain the economic effect of a change to the whole thermal system only by quantitatively calculating the extraction steam flow and heat of certain stages. Therefore, it has become the mainstream method for the economic evaluation of the low-pressure economizer. The equivalent enthalpy drop method regards the exhaust waste heat recovered by the low-pressure economizer as pure heat input to the system, while the energy consumption of the boiler for producing 1 kg of new steam remains unchanged. Under this premise, all the additional power generated from the recovered exhaust heat improves the efficiency of the steam turbine. Calculation of coal consumption reduction in power generation The total work done by 1 kg of new steam is called the equivalent enthalpy drop of the new steam (recorded as H, kJ/kg). The additional work produced by the recovered exhaust heat is called the equivalent enthalpy drop increment (recorded as ΔH, kJ/kg). It is calculated as follows: where d is the steam consumption rate of the unit, kg/(kW·h); η_jd is the electromechanical efficiency of the turbine, %.
Here β is the flow coefficient of the low-pressure economizer, %; h_d is the specific enthalpy of the water at the low-pressure economizer outlet, kJ/kg; h_m-1 is the specific enthalpy of the inlet water of the lower-stage heater at the outlet of the low-pressure economizer, kJ/kg; η_m is the extraction efficiency of the m-th low-pressure heater, %; β_j is the flow coefficient of the j-th bypassed low-pressure heater, %; τ_j is the enthalpy rise of the working fluid in the j-th bypassed low-pressure heater, kJ/kg; η_j is the extraction efficiency of the j-th bypassed low-pressure heater, %. The relative improvement in the thermal economy of the unit due to the low-pressure economizer is: The reduction of the steam consumption of the unit is: where D is the steam consumption rate of the unit, kg/s. The relative reduction of the unit heat consumption Δq (kJ/(kW·h)) is: where q is the heat consumption rate of the unit, kJ/(kW·h). The reduction of coal consumption for power generation Δb (kg/(kW·h)) is: where η_g and η_b are the pipeline efficiency and boiler efficiency, %. Taking the low-pressure economizer system of a 220 MW coal-fired unit as an example, the thermal economy is evaluated by the equivalent enthalpy drop method. The results are listed in Table 1. From Table 1, it can be seen that the exhaust gas temperature of the low-pressure economizer is reduced by 61.74 ℃, and the coal consumption of power generation is reduced by 3.56 g/(kW·h). The calculation of the coal consumption for power supply should also consider the changes in fan power consumption and condensate pump power consumption after installing the low-pressure economizer. Although the low-pressure economizer reduces the exhaust gas temperature, it does not change the boiler efficiency; the flue gas temperature of the boiler is still defined at the outlet of the air preheater. Here D_c is the condensing capacity of the condenser, t/h; ΔD_c is the decrease in new steam flow caused by adding the low-pressure economizer, t/h, which can be calculated from Δb; ΣD_j is the total reduction of extraction steam reaching the condenser due to the low-pressure economizer, t/h. The reduction for stage j is based on the following formula: where G is the flow rate through the low-pressure economizer, kg/s; γ_j is the reduction coefficient, %, which refers to the proportion of the stage-j extraction steam that would have reached the condenser. The test and calculation at a load of 220 MW show that the total extraction reduced to the condenser is 14.12 t/h, the low-pressure economizer saves 5.64 t/h of new steam, the net condensate flow increases by 8.48 t/h, and the back pressure of the steam turbine increases by 0.0404 kPa. The specific enthalpy of the steam turbine exhaust increases by 0.457 kJ/kg, which accounts for only 0.038% of the equivalent enthalpy drop of the new steam. Therefore, the effect of the reduced extraction on the steam turbine vacuum and the turbine work can be neglected. Conclusion Low-temperature corrosion and flue gas wear should be taken into account in the layout of the low-pressure economizer. The material of the low-pressure economizer should be chosen according to the working conditions of the unit. Because of the lower water-side resistance in the parallel system, there is no need to replace the condensate pump, the retrofit is easier to construct, and an economizer failure will not cause accidents affecting the safe operation of the unit; therefore, the parallel mode is given priority.
The energy-saving calculations for the unit after installing the low-pressure economizer show that adding a low-pressure economizer is a practical and feasible way to reduce the flue gas exhaust temperature, save energy and reduce consumption, and improve the thermal economy of the unit.
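The bookkeeping of the equivalent enthalpy drop method can be sketched in a few lines of Python. The relations below are the commonly used textbook forms of the method and are our assumption of the equations implied by the variable lists in the calculation section above; all numerical inputs are illustrative placeholders rather than the 220 MW unit's data.

```python
# Hedged sketch of the equivalent enthalpy drop bookkeeping (assumed standard forms).

def new_steam_equivalent_enthalpy_drop(d, eta_jd):
    """H [kJ/kg]: work per kg of new steam, assuming H = 3600 / (d * eta_jd);
    d in kg/(kW*h), eta_jd as a fraction."""
    return 3600.0 / (d * eta_jd)

def equivalent_enthalpy_drop_increment(beta, h_d, h_m1, eta_m, bypassed):
    """dH [kJ/kg]: assumed form beta*(h_d - h_m1)*eta_m minus the term lost by
    bypassing heaters; `bypassed` is a list of (beta_j, tau_j, eta_j) tuples."""
    gain = beta * (h_d - h_m1) * eta_m
    loss = sum(b_j * tau_j * eta_j for b_j, tau_j, eta_j in bypassed)
    return gain - loss

def coal_saving(q, dH, H, eta_g, eta_b, coal_heat_value=29270.0):
    """db [kg/(kW*h)]: assumed as dq / (coal heat value * eta_g * eta_b),
    with dq = q * dH / (H + dH)."""
    dq = q * dH / (H + dH)        # reduction of heat consumption rate, kJ/(kW*h)
    return dq / (coal_heat_value * eta_g * eta_b)

# Illustrative placeholder inputs (not the unit's measured data):
H = new_steam_equivalent_enthalpy_drop(d=3.4, eta_jd=0.98)
dH = equivalent_enthalpy_drop_increment(beta=0.6, h_d=420.0, h_m1=250.0,
                                        eta_m=0.12, bypassed=[(0.6, 60.0, 0.10)])
db = coal_saving(q=8600.0, dH=dH, H=H, eta_g=0.99, eta_b=0.92)
print(round(H, 1), "kJ/kg;", round(dH, 2), "kJ/kg;", round(db * 1000, 2), "g/(kW*h)")
```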
2019-08-02T20:42:45.340Z
2019-07-09T00:00:00.000
{ "year": 2019, "sha1": "507a84226d351e1c815b2b75d5ea559f4e3087f3", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/252/3/032067", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "c91737777b7e28282c9e7db809981d904db99a8b", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Environmental Science" ] }
132034122
pes2o/s2orc
v3-fos-license
A geomorphic assessment to inform strategic stream restoration planning in the Middle Fork John Day Watershed, Oregon, USA ABSTRACT A geomorphic assessment of the Middle Fork John Day Watershed, Oregon, USA, was used to generate a hierarchical, map-based understanding of watershed impairments and potential opportunities for improvements. Specifically, we (1) assessed river diversity (character and behavior) and patterns of reach types (and their controls); (2) evaluated the geomorphic condition of the streams; (3) interpreted their geomorphic recovery potential; and (4) synthesized the above into a hypothetical, strategic management plan. Collectively, these maps can set bounds and provide realistic guidance for river rehabilitation, design and implementation efforts. Fifteen distinct reach types were identified, two-thirds of which are found along perennial streams. On the basis of a variety of geo-indicators, approximately two-thirds of all perennial stream reaches were found to be in 'good' geomorphic condition, whereas one-third had departed to 'moderate' and 'poor' condition. Departures from 'good' condition were primarily related to riparian vegetation removal, conversion of floodplain to agricultural land uses (farming and grazing), logging, and channel bed dredge mining for gold. Encouragingly, the majority of reaches classified as being in moderate geomorphic condition were found to have high recovery potential. While our geomorphic assessment has practical utility for informing physically realistic expectation management for efforts like salmonid habitat restoration, the maps themselves are the key vehicle for communicating and visualizing among stakeholders. Introduction Geomorphic mapping of channel patterns and reach types over entire drainage networks sets the stage for restoration and conservation planning (Beechie & Imaki, 2014). In particular, efforts to recover threatened and endangered populations of anadromous salmon (Oncorhynchus spp.) and steelhead (O. mykiss) across the U.S. Pacific Northwest rely heavily on stream restoration intended to mitigate or reverse human impacts (Montgomery, 2004). Those impacts, commonly referred to as the four H's (hatchery practices, hydropower dams, harvest, and habitat loss/degradation), have spurred intense efforts to quantify the status and trends of fish populations (and their habitats), as well as to identify management actions that might improve population viability (Mann & Plummer, 2000; Rucklehaus, Levin, Johnson, & Kareiva, 2002; Wheaton et al., 2017). Within the Interior Columbia Basin in particular (see Plate 1), biological opinions issued by the National Marine Fisheries Service (NMFS), under the National Oceanic and Atmospheric Administration (NOAA), developed population recovery plans that lean heavily on tributary habitat restoration (NMFS, 2008). Accordingly, a myriad of river restoration efforts have been implemented across subwatersheds (e.g. Holburn, Piety, Lyon, McAffee, & Callahan, 2008; Reclamation, 2010). Many of these interventions are opportunistic, pursued at a reach scale without knowledge of the watershed context of geomorphic condition and recovery potential. As a consequence, they may not produce the desired overall fish population response because they do not strategically target key limiting factors, connections between and across isolated reaches, or address the root causes of degradation at the appropriate scale (Bennett et al., 2016).
Moreover, many restoration efforts have the best of intentions, but fail to produce physically realistic goals for the streams they are intended to improve. Restoration efforts can benefit greatly from geomorphic assessments that recognize the importance of the watershed-scale context when evaluating individual stream reach conditions (Beechie et al., 2010;Beechie, Pess, Roni, & Giannico, 2008;Demarchi, Bizzi, & Piégay, 2016). The resulting network-scale maps (i.e. reach resolution and watershed extent over the drainage network) represent the most concise way to distill and communicate the end products of such a geomorphic assessment in a way that can directly support watershed management (Wheaton et al., 2017). Through geomorphic assessment, the rivers and streams that comprise a drainage network of a watershed can be broken into distinctive reaches and similar reach types grouped together (Buffington & Montgomery, 2013;Kasprak et al., 2016). Landscape units, lithology and rock strength, stream power and drainage basin area are all important controls on river character and behavior (Church, 1992;Schumm, 1977). Interactions among these factors shape channels and their floodplains forming reaches of relatively distinctive structure and function (Buffington & Montgomery, 2013;Kellerhals, Church, & Bray, 1976). A reach break is the physical transition between different adjacent reach types with characteristic valley setting, planform, bed material, and geomorphic unit assemblages. In this study, such process-based reach types are synonymous with distinct river styles (cf. Brierley & Fryirs, 2005). Valley confinement is a key driver of reach breaks throughout a watershed (e.g. Fryirs, Wheaton, & Brierley, 2016a;Montgomery & Buffington, 1997) (see Plate 3). The degree of confinement controls the ability of a channel to adjust laterally and, to some extent, vertically on the valley bottom. Measures of confinement are used to differentiate valley settings (Brierley & Fryirs, 2005;Fryirs et al., 2016a). The geomorphology of river channels and their floodplains is key to understanding the processes that create and maintain habitat conditions suitable for salmonid species (Beechie & Sibley, 1997;Gilbert, Macfarlane, & Wheaton, 2016;Wheaton et al., 2010). An analysis of a river's current geomorphic condition and its recovery potential not only informs potential restoration targets and priorities, it can also support assessments of salmonid-habitat relationships at a variety of spatial scales (e.g. ISEMP/CHaMP, 2015). Hierarchical geomorphic assessments provide insight that can enhance the success and cost-effectiveness of ongoing salmonid habitat restoration efforts (Bennett et al., 2016). Salmonids implicitly 'consider' geomorphic features at multiple levels of a geomorphic hierarchy when selecting/using habitats (Fausch, Torgersen, Baxter, & Li, 2002). Ideal spawning locations for bull trout (Salvelinus confluentus), for example, are characterized by both a particular in-channel geomorphic unit assemblage (i.e. pool-riffle transitions) and specific valley setting (i.e. unconfined alluvial valleys (Baxter & Hauer, 2000;Bean, Wilcox, Woessner, & Muhlfeld, 2014)). In a case such as this, restoration priorities set by in-channel features alone are likely to be misleading. 
Further, a watershed-scale perspective on the abundance and spatial arrangement of particular reach types (some of which may be rare but critical to particular species and/or life stages) is needed to understand the overall feasibility of watersheds to support robust and resilient fish populations (Fausch et al., 2002; Rosenfeld & Hatfield, 2006). Yet, fish population and habitat assessments have historically neglected this critical multi-scale view (Fausch et al., 2002). This hierarchical perspective of riverine habitat also helps restoration practitioners avoid some of the costly mistakes of the past. For instance, the U.S. Pacific Northwest is replete with examples of large wood placement projects that aimed to enhance salmonid habitat but failed due to a lack of consideration of local geomorphic conditions and watershed hydrology (e.g. Frissell & Nawa, 1992). By formally considering a reach's natural behavior, trajectory, and capacity for adjustment, such assessments can help restoration practitioners to 'work with nature' (Brierley, Fryirs, Outhet, & Massey, 2002), leading to longer lasting and more appropriately sited restoration treatments. The purpose of this paper and the associated maps is to illustrate a practical application of a multiscalar geomorphic assessment framework that can aid in planning and prioritization of ecological restoration and management. The Main Map embodying the assessment is packaged as an atlas. The atlas supports building realistic expectations for watershed managers and stakeholders to constrain management actions based on a sound understanding of watershed-scale processes. Specifically, links between the physical environment and aquatic ecosystems support efforts to move beyond site- or reach-specific management applications to procedures that work with watershed-specific process relationships. This is especially important for fish protection (Fausch et al., 2002). We use the Middle Fork John Day Watershed in Oregon, USA as a case study. Study watershed The Middle Fork John Day (hereafter MFJD) Watershed, northeast Oregon, USA, is home to populations of summer steelhead listed under the Endangered Species Act and at-risk Chinook salmon (Oncorhynchus tshawytscha) and has been the focus of numerous studies, which make it an excellent candidate for illustrating the potential utility of the River Styles Framework. The MFJD River has been the focus of multiple previous geomorphic investigations (e.g. Butcher, Crown, Brannan, Kishida, & Hubler, 2010; Dietrich, 2014, 2016; McDowell, 2001; Reclamation, 2008; Torgersen, Price, Li, & McIntosh, 1999). Kasprak et al. (2016) used the MFJD to compare and contrast different reach typing (stream channel classification) frameworks (including River Styles). The MFJD has been the subject of stream temperature and thermal fish habitat studies (e.g. Feldhaus, Heppell, Hiram, & Mesa, 2010; McNyset, Volk, & Jordan, 2015; Torgersen et al., 1999), continuous fish surveys and habitat assessments (e.g. Blanchard, 2015), site-scale bioenergetic ecohydraulic modeling (Wall, Bouwes, Wheaton, Saunders, & Bennett, 2015), and salmonid life cycle modeling (McHugh et al., in press). In addition to fish studies, the MFJD has been the focus of research on freshwater mussels (e.g. Box et al., 2006; Hegeman, Miller, & Mock, 2014; Mock et al., 2010) that has shed new light on what sort of habitats these species prefer. 
The MFJD Watershed is also an Intensively Monitored Watershed (Bennett et al., 2016) in which extensive restoration is being coordinated (Holburn, Turner, Piety, & Klinger, 2009;Reclamation, 2008) in an effort to understand how specific actions influence fish and their habitat (i.e. determine if restoration is effective at increasing the populations). In addition, habitat status and trend monitoring is conducted through the Columbia Habitat and Monitoring Program [CHaMP] (2012). While not the focus of this paper, collectively these past studies in the MFJD provide an excellent backdrop in which the maps presented here can help shed new light and context for. We conducted a hierarchical geomorphic assessment using Brierley and Fryirs (2005) in the MFJD Watershed to inform ongoing and future research and restoration planning efforts. This framework organizes traditional geomorphic assessment in terms of four stages: (1) river classification (i.e. reach typing); (2) geomorphic condition assessment; (3) recovery potential analysis; and (4) development of a strategic management plan to address potential restoration and rehabilitation goals. Analyses are 'nested' across spatial scales of watersheds, landscape units, river reaches, and geomorphic units (landforms) ( Figure 1). Initially, morphometric, hypsometric, and geomorphic analyses are required to characterize river character, behavior and patterns at the watershed scale (summarized as reach types). An understanding of current and historic geomorphic processes along with human perturbation influences are used to assess condition and forecast recovery potential as part of developing a strategic river management plan (White, Justice, Kelsey, McCullough, & Smith †, 2017). The MFJD Watershed is a 2050 km 2 subwatershed of the Columbia River Basin located in east-central Oregon (see Plate 1). The MFJD River flows northwesterly from headwaters on the western flank of the Blue Mountains, a rugged series of ranges in northeastern Oregon. The John Day Basin lies in the rain shadow of the Cascade Range (mean annual precipitation = ∼35-56 cm; temperature range = −10°C to 5°C in winter and 10-30°C in summer) and is underlain by Cretaceous volcanic, marine sedimentary, and granitic rocks overlain by the Miocene Picture Gorge Basalt of the Columbia River Basalt Group (e.g. Walker & MacLeod, 1991). The basin has a semi-arid climate across upland landscapes, but is locally diverse, ranging from alpine and forested mountains to grass-and scrublands of the adjacent foothills and low-relief, temperate steppe uplands. Vegetation communities are stratified along moisture and elevation gradients between mesic highland, mixed spruce and subalpine fir forests, and sage grasslands of the upland and tableland environments. The MFJD Watershed consists of five Hydrologic Unit Code 10 (HUC) subwatersheds (Seaber, Kapinos, & Knapp, 1987) that join the 131-km long central trunk stream of the MFJD River. The topography contains a high-relief stream network with high drainage density, marked by steep-sloped canyons, deeply dissected highlands, broad tablelands, and rounded uplands replete with broad meadows. We identified and mapped six 'landscape units' that range from high elevation, moist alpine terrain in the south and east, to semi-arid volcanic tablelands to the northwest (Plate 2). Mapping data and methods The methods used to implement the geomorphic assessment are well documented in Brierley and Fryirs (2005; i.e. 
the River Styles Framework) and summarized in Figure 1. Here we focus more on describing the specifics of how we implemented that framework within the Middle Fork John Day to produce the maps presented here. Desktop analyses and stream survey The bulk of the desktop analysis and field-based validation work is centered on the regional landscape and watershed investigations essential to the stream classification exercise. To aid in our desktop analysis, we used Google Earth Pro (v.7.1.2.2041, 2013) and other geographic information system (GIS) readable imagery, in conjunction with the National Elevation Dataset (NED; USGS, 1999) and National Hydrography Dataset (NHD; USGS, 2007), to document landscape-scale physiographic attributes such as underlying geology, vegetation patterns and composition, relief and drainage density, and to carry out a thorough visual interpretation of stream and valley attributes. Air photo analysis is critical for validating preliminary mapping of the valley bottom (Gilbert et al., 2016) and channel and, where aerial photo resolution allows, for bed material inference and in-channel geomorphic units. Determining reach breaks (e.g. Buffington & Montgomery, 2013; Wohl & Merritt, 2008) is the single most important analytical step in developing network-based maps comprising multiple variables (e.g. stream classification, geomorphic condition, recovery potential, and prioritized management classes) (Table 1). Reach breaks are identified through changes in valley setting and associated channel confinement (Fryirs, Wheaton, & Brierley, 2016b), river planform, the assemblage of geomorphic units (i.e. floodplain and channel landforms; cf. Notebaert & Piégay, 2013; Wheaton et al., 2015) and bed material texture. We validated our remotely sensed interpretations with field visits to representative reach type localities to map valley slope, floodplain and in-channel geomorphic units for each unique reach type (i.e. 'River Style'; e.g. see Plate 3). The field-based ground-truthing, mapping, and data collection efforts are critical for extrapolating channel classes throughout the study watershed. Longitudinal profile plots provide a key tool for understanding and interpreting the downstream patterns of rivers in each watershed, and the controls that govern their form and function. This data display allows for efficient analysis of downstream variations in types of landscape units (and sediment process zones), upstream watershed area, slope, total stream power and their relationships to valley confinement and reach type (Figure 2). Longitudinal profiles were constructed using the National Hydrography Dataset version 1 (1:24,000) and WBD layers to derive upstream watershed area from an integrated flow accumulation raster derived from a 10 m digital elevation model (DEM). To extract longitudinal profiles, we segmented the streamlines into 100 m reach segments for which we calculated upstream watershed area and reach slope. For this operation we used the Geospatial Modeling Environment (GME) tool (Beyer, 2012). Total stream power, a measure of the capability of a river to do work (i.e. rework and transport sediment) against the bed and banks of the river channel per unit downstream length (e.g. Worthy, 2005), was calculated for each 100 m interval as Ω = ρgQS, where ρ is the density of water, g is acceleration due to gravity, Q is a characteristic discharge, S is the channel slope, and Ω is total stream power in watts per metre of channel length. 
We used a two-year recurrence interval flow for discharge (Q 2 ), given the effectiveness of frequent bankfull flows in modifying and maintaining channel form relative to larger magnitude, infrequent flood stage flows (Wolman & Miller, 1960). To estimate Q for the Middle Fork John Day River, a regional regression equation was obtained from the United States Geological Survey (USGS) National Streamflow Statistics Website (URL: http://water.usgs.gov/osw/programs/ nss/pubs.html) and we used the National Streamflow Statistics Program (Ries, 2006) to compute an area-discharge relationship between Q 2 and drainage area. The relationship was verified by calculating a linear regression based on seven gauges in the John Day basin, including the Middle Fork, and regional gauge data from northeastern Oregon (Harris & Hubbard, 1982;Kasprak & Wheaton, 2012). Streamflow data of flood recurrence and flow duration analyses were obtained from the USGS streamflow website for Oregon (URL: http://or. water.usgs.gov/). The Log-Pearson III analysis of peak discharge data was performed using the methods outlined by Klingeman, Bogavelli, Coles, and Wright (2002) (see Plate 3). Building the network-based classification and status maps The network-based status maps display results of landscape units, river type, geomorphic condition, recovery potential, and prioritized strategic plan analyses. The atlas maps (see Plates 1-7) were built in Esri ArcMap using the 1:24,000 NHD version 1 (USGS, 2007) as the baseline network for delineating reach breaks and other variables on maps. This cartographically derived, digital vector dataset closely matches the actual course of the river visible in air photos. Line segments of interest were assigned the appropriate categorical variables. For example, segments denoting river classifications begin and end at geomorphic reach breaks. In addition, segments are categorized according to their geomorphic condition, recovery potential and prioritized management (Table 1). Stream length and valley confinement proportions were summarized for the whole MFJD Watershed and its five subwatersheds (Figure 4). We used NED 30 m raster DEMs to extract elevation data and hillshade images, clipped to hydrologic unit codes (HUC) 8 and 10 watershed boundaries. Stream length statistics for each analysis were generated in ArcMap and exported to Microsoft Excel for processing. The completed raster and vector data were exported to Adobe Illustrator for rendering of maps and summary figures. Stream classification (river character and behavior) Fifteen different reach types were identified, spanning the range of confined, partly confined, and laterally unconfined valley settings found within the MFJD Watershed (see Plates 1 and 4). This included both perennial and ephemeral streams. Stream attributes leading to the classification are listed in organizational trees that include explicit, objective, and/or quantitative criteria (Figure 3). We summarized the frequency of stream length by river classes and valley settings for five HUC 10 subwatersheds (Figure 4). These data are critical for understanding the partitioned nature of the watershed and to track attributes that are helpful for a variety of geomorphic and habitatrelated analyses. For example, Figure 4 summarizes stream length data for the 962 km perennial network, which is used by anadromous fish, whereas Plate 1 shows the equivalent mapping for the entire 4110 km perennial, ephemeral, and intermittent drainage network. 
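To make the discharge scaling and stream power calculation described above concrete, the short sketch below fits a power-law Q2 versus drainage area relationship to gauge records and then evaluates Ω = ρgQS for a single 100 m segment. It is a minimal illustration in Python: the gauge values, fitted coefficients and function names are hypothetical, and the authors' actual workflow used the USGS National Streamflow Statistics program and the GME tool rather than this code.

```python
import numpy as np

RHO = 1000.0  # density of water (kg/m^3)
G = 9.81      # gravitational acceleration (m/s^2)

def fit_q2_power_law(area_km2, q2_cms):
    """Fit Q2 = a * A^b by linear regression in log-log space; an
    illustrative stand-in for the regional area-discharge relationship
    verified against John Day basin gauges."""
    b, log_a = np.polyfit(np.log(area_km2), np.log(q2_cms), 1)
    return np.exp(log_a), b

def total_stream_power(area_km2, slope, a, b):
    """Omega = rho * g * Q2 * S for one 100 m segment (W per m of channel)."""
    q2 = a * area_km2 ** b  # 2-year discharge (m^3/s) from the regional relation
    return RHO * G * q2 * slope

# Hypothetical gauge records: drainage area (km^2) and 2-year discharge (m^3/s)
areas = np.array([52.0, 210.0, 515.0, 1340.0, 2050.0])
q2s = np.array([3.1, 9.8, 21.0, 46.0, 66.0])
a, b = fit_q2_power_law(areas, q2s)

# Stream power for a segment draining 515 km^2 with a channel slope of 0.004
print(f"Omega = {total_stream_power(515.0, 0.004, a, b):.1f} W/m")
```

A log-log linear fit is used here simply because regional Q2-area relations are commonly expressed as power laws; any regional regression of the kind cited in the text could be substituted.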
This provides insight into geomorphic parameters that may be directly relevant to fish and their habitat or indirectly through their more sporadic contributions of water, wood, and sediment to the perennial network from upstream tributaries. Since Plate 4 summarizes the same numbers for the entire perennial, intermittent, and ephemeral drainage network, it may be more appropriate to informing a holistic watershed management approach as opposed to just fish-centric management activities. For those interested in how the River Styles classification reported here compares to that of other common classification systems, the reader is referred to Buffington and Montgomery (2013) and Kasprak et al. (2016). This latter paper includes a comparison specific to the MFJD Watershed. For each representative downstream pattern of River Styles, O'Brien and Wheaton (2015) produced a longitudinal profile depicting geomorphic controls including landscape units (and geology), total stream power, and sediment process zones (i.e. Figure 2). As noted by May, Roering, Eaton, and Burnett (2013) and May, Roering, Snow, Griswold, and Gresswell (2017), geomorphic controls upon knickpoint development and valley confinement relationships exert a primary control upon fish stocks, and associated fish management issues, in this part of the world. Geomorphic condition Streams and rivers are dynamic entities. The propensity for channel adjustment varies across River Styles. The current geomorphic condition of each reach reflects its capacity for adjustment, and an analysis of river evolution ( Figure 5) that considers whether the reach has a contemporary structure and function that is expected for that River Style (Fryirs, 2015). A range of geomorphic indicators are used to perform this analysis (Plate 5). Thus, reaches of the same style can be in various states of geomorphic condition. Analyses of geomorphic condition highlight the discrepancy between historic and current channel configuration and identifies potential locations for mitigation or protection. We assigned geomorphic condition for each reach based on the physical indicators that informed the condition assessment. These explanations, in conjunction with watershed maps, offer managers a resource for more effectively identifying problem areas and opportunities when designing a management plan (see . The MFJD Watershed contains a range of rivers in various geomorphic conditions. Plate 5 partitions the stream network into categories of intact, good, moderate, and poor geomorphic condition. We also derived stream length metrics for the perennial network to include the portions of subwatersheds hosting populations of salmonid species (Figure 6(A) and Plate 1). Geomorphic recovery potential An analysis of a reach's capacity for improvement in geomorphic condition over a relevant time period, generally 50-100 years, serves as the primary basis for assessing river recovery potential (Fryirs & Brierley, 2016). Key to these assessments are (1) an understanding of the sensitivity to adjustment and responses to historical impacts; (2) the landscape/watershed position of the affected reach and its proximity to either good or poor condition reaches (particularly those positioned upstream); and (3) consideration of the current (and likely future) limiting factors and pressures that impact upon that reach. 
The recovery potential of a specific reach is represented on a river recovery diagram that presents the current state and the predicted, potential future outcome, given different management scenarios from the 'do-nothing' (passive restoration) to the 'full intervention' options (Figure 7). The sum of these assessments is shown on Plate 6 and summarized as perennial stream length data in Figure 6(B). Our watershed map of geomorphic recovery potential (Plate 6) suggests that, with a few exceptions, most streams in the MFJD Watershed have a high capacity to recover from land use pressures without intervention. However, streams in the southeast portion of the watershed have incurred disproportionate impacts in a relatively delicate landscape (basic soils, sparse forests, accessible terrain for multiple land uses), and have only moderate recovery potential. Isolated reaches of the mainstem and a few tributaries have poor recovery potential: their geomorphic condition and function will not improve without intervention (e.g. Figure 6(B), Bridge Creek Unit). Building a prioritized river management plan Using the results of reach types, geomorphic condition, and recovery potential, we developed a watershed-framed strategic plan wherein realistic goals for river rehabilitation and restoration occurring over a timeframe of 50-100 years are defined (see Plate 7). The proposed plan is not a major departure from the key management drivers (e.g. Reclamation, 2010) that are currently operating in the MFJD Watershed. Management objectives in our hypothetical, geomorphically focused strategic management plan encourage conservation of unique or remaining natural areas, followed by restoration and rehabilitation efforts that support and promote the geomorphic function (i.e. discharge and sediment flux) of good condition reaches with high recovery potential. Reaches in poor condition with little recovery potential are given the lowest priority for rehabilitation or restoration. Conclusions and implications Our study presents a series of maps for the MFJD Watershed in northeast Oregon, which help set physically realistic, geomorphic bounds on what might be possible for managers to achieve through restoration and conservation actions. The maps provide consistent, watershed-wide assessments of geomorphic reach type, condition and recovery potential to guide river restoration planning and inform strategic river management practice. The communication of findings using maps is intuitive, simplifying outputs from quite complex geomorphic assessments such as those of O'Brien and Wheaton (2015) and Reclamation (2008). The results corroborate previous documentation that the MFJD Watershed has experienced significant impact through grazing operations, road building and clear-cut logging, channel re-routing, floodplain/wetland drainage, and channel bed mining throughout the last century (NOAA, 2013; Reclamation, 2010). Fortunately, the most damaging of these practices have since been curtailed and the recovery potential for the watershed is very favorable, with 69% of perennial streams and 74% of all streams showing high recovery potential. While the maps can provide geomorphic insight that is immediately relevant to assessments of physical habitat for fish, they do not consider other ecological (e.g. temperature and food availability) or socio-political (e.g. land ownership) factors that might influence the inherent value or recovery potential of reaches. 
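The prioritization logic of the hypothetical strategic plan described above (conserve intact and good reaches, support good-condition reaches with high recovery potential, and give degraded reaches with little recovery potential the lowest priority) reduces to a simple lookup on condition and recovery potential. The sketch below is illustrative only; the rules and category labels are assumptions and do not reproduce the plan shown on Plate 7.

```python
def management_priority(condition: str, recovery: str) -> str:
    """Map geomorphic condition and recovery potential to an illustrative
    strategic-plan category (rules and labels are hypothetical)."""
    if condition in ("intact", "good"):
        return "conservation"
    if condition == "moderate" and recovery == "high":
        return "assisted recovery / rehabilitation"
    if condition == "moderate":
        return "targeted rehabilitation"
    if condition == "poor" and recovery in ("high", "moderate"):
        return "long-term rehabilitation"
    return "lowest priority"

for reach in [("intact", "high"), ("moderate", "high"),
              ("poor", "moderate"), ("poor", "low")]:
    print(reach, "->", management_priority(*reach))
```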
The preliminary strategic management map we present here is reasonable from a physical feasibility perspective, but further modifications to reflect the values of the various stakeholders involved in the planning process would be necessary. In systems with a more complicated array of impacts extending beyond just physical habitat (Wheaton et al., 2017), the River Styles Framework can easily be combined with other lines of evidence, beyond the physical environment, to inform management decisions.
[Figure 7 caption: Geomorphic condition variants shown as conceptual cross sections, and their recovery potential, for the low sinuosity gravel bed River Style. The current conditions are shown at left, and restored, rehabilitated and created conditions and potential pathways are shown to the right.]
Software Network-based analyses and their derivative maps were processed using Esri ArcMap™ 10.3.1.4959. Google Earth Pro v.7.1.2.2041 was used to search and validate our geomorphic interpretations during the 'desktop' phase of the study. Longitudinal profile plots were extracted using the GME tool (Beyer, 2012). Stream length data were summarized and plotted in Microsoft Excel, and all maps and figures were rendered using Adobe Illustrator CC version 17.1.0 (64 bit).
2019-04-26T14:27:06.187Z
2017-04-20T00:00:00.000
{ "year": 2017, "sha1": "b98c2797d4ef2fc8747feca94badc5d39f27617b", "oa_license": "CCBY", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/17445647.2017.1313787?needAccess=true", "oa_status": "GOLD", "pdf_src": "TaylorAndFrancis", "pdf_hash": "c9f743dacf46a62f6c2dd8ed2b2a73bae1e0ddad", "s2fieldsofstudy": [ "Environmental Science", "Geography" ], "extfieldsofstudy": [ "Geography" ] }
263948258
pes2o/s2orc
v3-fos-license
Quantification of diversity sampling bias resulting from rice root bacterial isolation on popular and nitrogen-free culture media using 16S amplicon barcoding Culturing bacteria from plant material is well known to be conducive to strong bias compared to the actual diversity in the original samples. This bias is related to the bacterial cultivability, chemical composition of the media and culture conditions. Recovery bias is often observed but has never been quantified on different media using an amplicon barcoding approach whereby plant microbiota DNA extractions are compared to DNA extracted from serial dilutions of the same plant tissues grown on bacterial culture media. In this study, we: i) quantified the bacterial culturing diversity bias using 16S amplicon barcode sequencing by comparing a culture-dependent approach (CDA) focused on rice roots on four commonly used bacterial media (10% and 50% TSA, plant-based medium with rice flour, nitrogen free medium NGN and NFb) versus a culture-independent approach (CIA) assessed with DNA extracted directly from root and rhizosphere samples; ii) assessed enriched and missing taxa detected on the different media; iii) used biostatistics functional predictions to highlight metabolic profiles that could potentially be enriched in the CDA and CIA. A comparative analysis of the two approaches revealed that among the 22 phyla present in microbiota of the studied rice root samples, only five were present in the CDA (Proteobacteria, Firmicutes, Bacteroidetes, Actinobacteria, Verrucomicrobia). The Proteobacteria phylum was the most abundant in all CDA samples, showing high gamma-Proteobacteria enrichment. The diversity of the combined culture media represented about a third of the total microbiota diversity, and its genus diversity and frequency was documented. The functional prediction tool (PICRUSt2) detected nitrogenase enzyme enrichment in bacterial taxa sampled from nitrogen-free media, thus validating its predictive capacity. Further functional predictions also showed that the CDA mostly missed anaerobic, methylotrophic, methanotrophic and photosynthetic bacteria compared to the CIA, thereby generating valuable insight that could enable the design of ad-hoc culture media and conditions to increase the rice-associated microbiota cultivability. Introduction Plants interact continuously with a microbiota that plays an important role in their health, fitness and productivity. In the last 10 years, the low-cost accessibility of next generation sequencing (amplicon-based sequencing and metagenomics) to scientists has enabled extensive description of the diversity of this microbiota on many model and non-model plants (e.g. in Arabidopsis [1] and wheat [2]). For rice, its microbiome has been widely described in different countries and rice culture practices [3][4][5][6]. This wealth of data has now provided a good overview of the main bacterial and fungal taxa inhabiting underground plant tissues (roots and rhizosphere), as well as those in their above-ground parts (phyllosphere and endosphere). The diversity determined using amplicon-barcode approaches is mainly based on fragments of ribosomal taxonomic markers such as 16S and 18S rRNA genes, with taxonomic resolution often restricted to the genus level. 
To access and obtain more microbial diversity and structural representativeness, several studies have been carried out using a combination of markers at different resolution levels, ranging from general (16S V3-V4 or V4 for prokaryotes, 18S V4 for microeukaryotes) to more resolutive markers (gyrB or rpoB fragments for bacteria, ITS1/ITS2 for fungi) [7][8][9]. Bioinformatic analysis of amplicon barcode data has also involved several novel strategies, ranging from operational taxonomic unit (OTU) clustering at different identity percentages to more advanced clustering methods using swarming algorithms [10,11], in addition to methods inferring true amplicon sequence variants (ASV) [12]. Harnessing plant microbiota diversity with regard to plant nutrition or tolerance to pathogens, for instance, relies on the isolation and culturing of the taxonomic and/or functional diversity of the microbiota [13]. The capacity to culture and store such diversity allows us to design synthetic communities and test their various compositions on plant growth and health [14,15]. Otherwise, different culturomics approaches have been developed to capture the bacterial diversity of plant microbiota, including culture media supplementation with various compounds, simulated natural environments, diffusion chambers, soil substrate membrane systems, isolation chips, single cell microfluidics [16], or using limiting dilutions on plates combined with dual barcode processing [17]. Substantial improvements in diversity sampling have also been achieved by popular media supplementation with plant compounds or plantbased media, while microbiologists continue to develop alternative culture methods to highlight rare and unculturable plant-associated microorganisms [16]. Several functional prediction tools, such as PICRUSt2 [18] have recently been developed to predict functional enrichment in metagenomes and even 16S amplicon barcoding data. In theory, such tools could allow the identification of metabolic functions and ecological functions that are enriched in culture-independent compared to culture-dependent approaches in order to guide culture media design or highlight culturing conditions that could help capture them [19]. It is well recognized in the microbiology community that commonly used non-selective bacterial medium, such as Lubria broth (LB), R2A, nutrient agar (NA), tryptic soy agar (TSA), are conducive to strong bias in the sampled diversity recovered from plant tissues [20,21]. This bias has never been quantified or documented in terms of proportions using next generation sequencing (NGS) amplicon-based technologies, to the best of our knowledge. Other media, such as Norris-glucose nitrogen-free medium (NGN), nitrogen-free medium (NFb) [22], have been successfully designed to isolate dinitrogen-fixing bacteria, but proportions of recovery of the diversity of the dinitrogen-fixing community remain unclear. In this study, we employed both culture-independent (CIA) and culture-dependent approaches (CDA) to analyse bacterial diversity in rice roots and rhizosphere soils. Specifically, we used 16S amplicon barcode sequencing to analyse DNA directly extracted from the plant samples (CIA), as well as from mass bacterial cultures of varying dilutions plated on different media (CDA), including a popular medium for isolating plant-associated bacteria (TSA at 10 and 50%), a plant-based medium (rice flour), and two nitrogen-free media (NFb, NGN). 
The object of this study was: i) to quantify the bias of bacterial diversity introduced by CDA compared to CIA; ii) to determine the proportions of enriched bacterial genera per medium; iii) to use functional prediction tools on amplicon data to identify specific metabolic functions or bacterial capacities present in the rice root microbiota that are missing from the CDA. Our hypothesis is that the culture-dependent approach (CDA) we used in this study, which involved high-throughput sequencing of DNA pooling from the culture media, will help overcome the issue of losing slow-growing bacteria and provide a more accurate assessment of the culturable bacterial diversity. The approach may increase the percentage recovery of bacteria and obtain a more comprehensive picture of the bacterial diversity that reflects the real diversity present in the plant samples. Rice root sampling and processing Oryza sativa ssp indica cv FKR64 plant roots were collected in a rice field near Bama village (western Burkina Faso, Kou Valley, 10.64384 N, -4.8302 E). This field was already assessed in a previous study and described by Barro et al. [6]. Rice sampling was authorized by a national agreement between the Burkina Faso government and farmers within the framework of a rice productivity improvement program involving INERA. Rice plants were sampled at the panicle initiation growth stage, with three sampling points chosen 10 m apart, where roots were collected from three plants (20 cm apart). Roots were hand-shaken to remove non-adherent soil. Ten roots per plant from the same sampling point were pooled to obtain three final samples in 50 mL Falcon tube containing 30 mL of sterile PBS buffer, and vortexed for 5 min to separate the rhizospheric soil from the roots. Roots were removed with sterile forceps and placed in new 50 mL Falcon tubes. From this treatment step, the rhizosphere (Rh) and roots (Ro) samples were manipulated separately (Fig 1). The rhizosphere soil in PBS was vortexed for 10 sec and then two samples of 1 mL of the rhizosphere suspension were taken after 15 sec and placed in two separate 2 mL Eppendorf tubes to be used in bacterial culture-dependent (CDA) or culture-independent approaches (CIA) for diversity estimation by a direct 16S amplicon barcoding approach. Similarly, washed roots were cut into 2 cm fragments, and then divided and placed in two 2 mL Eppendorf tubes for CDA and CIA assessment. Bacterial culture isolation media Four culture media with different carbon and nitrogen sources were used to maximize the isolated bacterial diversity. First, non-selective tryptic soy agar (TSA, Sigma) medium was used at 10% (TSA10) and 50% (TSA50) concentration. It contained digests of casein and soybean meal, NaCl and agar. In addition, two nitrogen-free media were used for the isolation of potential nitrogen fixers, semi-solid NFb [22] and Norris glucose nitrogen-free medium (NGN, M712, [23]). NFb was used as semi-solid medium, which allows the development and growth of free nitrogen-fixing bacteria, due to their growth at an optimal distance for micro-aerobic conditions favourable for nitrogen fixation [22]. Finally, we included a plant-based medium, rice flour (RF), which is commonly used for isolation of fungal rice pathogens [24]. The compositions of the above culture media were as follows: TSA 10% (g/L): 0.5 NaCl, 1. 
Culture-dependent (CDA) and independent (CIA) approaches For the CDA, roots (200 mg) and rhizosphere soil (200 mg) were transferred into PowerBead Tubes from the DNeasy PowerSoil kit (QIAGEN) where 1 mL of PBS buffer was added, and homogenized in a TissueLyser II (QIAGEN) for 2 min (Fig 1). Dilutions (10 −2 to 10 −5 ) were performed and 50 μL of each dilution were spread on solid culture media (TSA 10%, TSA 50%, NGN, RF). For NFb medium, 50 μL of the 10 −1 root and rhizosphere soil suspensions were inoculated in 20 mL tubes containing 10 mL of NFb semi-solid medium. Each dilution was inoculated (on plates or in tubes) with 4 replicates. After 2 to 5 days of incubation (depending on the culture medium) at 28˚C, plates were examined and dilutions selected for further processing (details in S1 Table). For selected dilutions, cultivable bacteria were recovered from petri plates by adding 1 mL of sterile distilled water, scraping and mixing bacterial colonies. Bacterial suspensions obtained from the same dilution plates were collected with a pipette and transferred to sterile 15 mL Falcon tubes. For the NFb medium, bacteria which had grown in a ring shape 0.2-0.3 cm below the surface of the medium were collected. Bacterial suspensions were stored at -20˚C until DNA extraction. The number of cultivable bacteria in the obtained suspensions was roughly estimated by measuring the optical density (OD) at 600 nm for all suspensions and adjusted to 10 6 (assuming that OD600 nm of 1 corresponds to 1x10 8 bacteria/mL). The volumes collected from the samples were centrifuged 10 min at 14,000 rpm, and the pellets obtained were used for DNA extraction. For the culture-independent approach (CIA), pooled roots were homogenized in liquid nitrogen using a mortar and pestle, while the pooled rhizosphere samples were used directly for DNA extraction (Fig 1). A mass of 250 mg was used for DNA extraction from both sample types. DNA extraction Cultivable bacteria suspensions (� 10 6 cells) and ground roots and rhizospheres soil (250 mg) were transferred to PowerBead tubes (DNeasy PowerSoil, Qiagen) containing C1 buffer and homogenised in a TissueLyser II (Qiagen) at 240 rpm for 2 x 1 min. Extraction was then performed according to the protocol provided by the supplier. Bioinformatics analysis of 16S amplicons For this study, we performed all diversity analyses using an amplicon sequence variant (ASV) detection approach (DADA2 pipeline), but we also compared the diversity with an OTU clustering method (based on FROGs, [26]). For ASV analysis, raw amplicon barcoding data were demultiplexed and processed using the Bioconductor Workflow for Microbiome Data analysis [27]. This workflow is based on DADA2 [12] that infers amplicon sequence variants (ASV) from raw sequence reads. Forward and reverse reads were trimmed at 20 bp, respectively, to remove primers and adapters, and then quality-truncated at 280 and 205 bp, respectively. The dada2 denoise-paired function with default parameters was used to correct sequencing errors and infer exact amplicon sequence variants (ASVs). Then forward and reverse corrected reads were merged with a minimum 20 bp overlap, and the removeBimeraDenovo function from DADA2 was used to remove chimeric sequences. Eighty-two percent of reads passed chimeric filtering. The numbers of reads filtered, merged and non-chimeric are indicated in S2 Table. 
A mean of 58.6% of reads passed all filters (denoising, merging, non-chimeric), with a minimum of 15,347 and a maximum of 31,134 reads in filtered libraries, yielding a total of 2,712 ASV. ASV were then assigned at the taxonomic level using the DADA2 AssignTaxonomy function, with the Silva 16S reference database (silva_nr_v132_train_set) [28]. We subsequently filtered out plasts (especially mitochondria from root samples) to keep only ASVs assigned to the Bacteria or Archaea kingdoms. A last filtering was done to remove ASV with <10 read occurrence across all libraries. A dataset of 1,647 ASV was used for subsequent diversity analyses. A Neighbour-joining phylogenetic tree of the 1,647 ASV was constructed using MEGA11 [29] by first aligning ASV sequences with MUSCLE [30] and then building a Neighbour joining-tree based on a distance matrix corrected with the Kimura 2P method. Metadata and ASV tables and the phylogenetic tree were uploaded to the NAMCO server for downstream microbiota diversity analyses (https://exbio.wzw.tum.de/namco/, [31]. NAMCO is a microbiome explorer server based on a set of R packages, including Phyloseq for diversity analyses [32] and PICRUSt2 for functional predictions [18]. Alpha-diversity analyses (observed richness, Shannon and Simpson diversity, statistical test with pairwise post-hoc Dunn test) were performed with Phyloseq and tidyverse, ggpubr, rstatix, multcompView R packages and plotted with ggplot2. Beta-diversity (NMDS, PERMANOVA) was performed with Phyloseq and Vegan. PICRUSt2 functional predictions were performed to infer metabolic capacities from our 16S amplicon ASV. Functions were predicted in three classes: enzyme classification (EC), KEGG orthology (KO) and molecular pathways (PW). Data were normalised with relative abundance, and a Kruskal-Wallis test was performed across conditions (medium used for CDA and CIA) with the ALDEx2 package [33]. Circular phylogenetic tree annotations and mapping were obtained with iTOL [34]. Additional R scripts for the DADA2 pipeline, Phyloseq, and the production of figures are freely available on GitHub (https://github.com/lmoulin34/Article_ Moussa_culturingbias). For the OTU clustering approach, the FROGs pipeline ( [26]; http://frogs.toulouse.inra.fr/) was used in the Galaxy environment. After demultiplexing and pre-processing, reads were clustered into OTU using the swarming method with default parameters (aggregation distance of 3), then chimeric sequences were removed and OTU were affiliated with taxonomic levels using the same Assign taxonomy tool as described above. Quality filtering and diversity indices of 16S amplicon libraries (CIA versus CDA) We first assessed the quantity and quality of reads produced for each amplicon library originating from direct rice root or rhizosphere genomic DNA extraction (CIA) or from DNA extracted from cultures (CDA) of the same samples grown on bacterial culture media. A range of 24,000 to 44,000 reads (mean 36,120) was obtained for all 16S amplicon libraries (S2 Table). Rarefaction curves (Fig 2A) showed sampled diversity saturation for each library, with a clear difference between the CIA reads (much higher in alpha diversity) compared to CDA. After DADA2 pipeline processing, we obtained 2,712 amplicon sequence variants (ASV) that were assigned at the taxonomic level using the Silva database. One library (S36) was removed from the analysis (from CIA) as it showed only 3 ASV. 
For the remaining libraries, ASV were filtered with regard to their abundance (cumulated reads ≥ 10 among all libraries) and mitochondria, chloroplast and eukaryote reads were removed (remaining ASV = 1,647). We first compared the diversity obtained from root (Ro) and rhizosphere (Rh) samples. There was no statistical difference in ASV alpha diversity (Shannon index) or beta diversity (PERMANOVA) between Ro and Rh samples (S1 Fig). These results could be explained by the fact that we did not surface disinfect or remove the rhizoplane from roots, so the rhizosphere (soil adhering to roots) and the root (rhizoplane + endosphere) from the same samples did not show significant differences. As the focus of this study was to compare the diversity obtained from a non-culturable versus a culturable approach on different media, we pooled Rh and Ro data from the same plant samples for all subsequent analyses. The bacterial sequences obtained by the CIA method exhibited significantly higher alpha diversity than those obtained from the five CDA media (TSA10, TSA50, NGN, NFb, RF) (Shannon or Simpson index, Kruskal-Wallis test, p = 0.002; Fig 2B). The TSA, RF and nitrogen-free media alpha diversities were not statistically different (Fig 2B). The ASV richness sampled from each medium represented about 15% of the diversity of all ASV detected in both CIA and CDA (TSA10: 16%, TSA50: 14.9%, NFb: 17%, NGN: 17%), except for RF (11%), which captured less diversity, while the CIA approach represented 67%.
[Fig 2 caption: Rarefaction curves (A), alpha (B) and beta diversity (C) in 16S amplicon libraries. Rarefaction curves (A) were calculated on unfiltered data, while alpha diversity indices (B) were calculated using the Shannon and Simpson index. Letters above boxplots indicate statistical significance using a pairwise Wilcoxon test (adjusted with the Bonferroni method). The p-value of a PERMANOVA test on beta diversity is indicated at the bottom of (C). Coloured circles on the NMDS are ellipses of confidence (at 95%) for all media and CIA conditions. Abbreviations: CIA: culture-independent approach; media used in the CDA: RF: rice flour, NFb: nitrogen-free medium, NGN: Norris glucose N-free medium, TSA10 and TSA50: tryptone soya agar at 10 and 50%. https://doi.org/10.1371/journal.pone.0279049.g002]
NMDS on the beta diversity analyses showed no overlap between ASV obtained from the different media (CDA) and the CIA (Fig 2C; PERMANOVA, R² (linear fit) = 0.88, p = 0.001). A substantial overlap was observed for TSA10 and TSA50, which was expected since it was the same medium but used at two different concentrations. Culturable sampled diversity: Comparison between ASV and OTU We also analysed our amplicon barcoding reads using an OTU-clustering approach (FROGs pipeline, using the swarming method to merge reads into OTU). This approach produced 1,023 OTU after quality filtering (same as for the ASV analysis). We then assessed if the diversity obtained by OTU gave the same percentage diversity recovery compared to ASV. In Table 1, we present the number of ASV and OTU obtained from the culture-dependent approach (CDA) and from the culture-independent approach (CIA), as well as the number of classes, orders and families represented in each. The ASV analysis produced more richness (38% more) than the OTU analysis. This higher diversity was observed at different taxonomic levels: class (ASV: 50; OTU: 38), order (ASV: 124; OTU: 67), and families (ASV: 219; OTU: 119). 
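The per-medium recovery percentages and the shared versus medium-specific counts reported above reduce to set operations on an ASV presence/absence table. The following sketch shows that logic with a toy table; the column names and values are hypothetical and it is not the authors' analysis code (which relied on Phyloseq and related R packages).

```python
import pandas as pd

# Presence/absence of each ASV (rows) in each condition (columns); illustrative data.
presence = pd.DataFrame(
    {"CIA": [1, 1, 1, 1, 0], "TSA10": [1, 0, 0, 0, 1], "NFb": [0, 1, 0, 0, 0]},
    index=["asv1", "asv2", "asv3", "asv4", "asv5"],
)

total = set(presence.index[presence.any(axis=1)])
cia = set(presence.index[presence["CIA"] == 1])
cda = set(presence.index[presence.drop(columns="CIA").any(axis=1)])

# Percentage of the combined (CIA + CDA) richness captured by each condition,
# mirroring the per-medium percentages reported above.
for col in presence.columns:
    captured = set(presence.index[presence[col] == 1])
    print(col, f"{100 * len(captured) / len(total):.0f}% of all ASVs")

print("shared CIA/CDA:", len(cia & cda), "| CDA-specific:", len(cda - cia))
```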
Given this result, we conducted all subsequent analyses with ASV-analysed data as it was better at capturing the diversity of our 16S amplicon libraries. In both analyses, the diversity shared between CDA and CIA was relatively low (7% for ASV, 22% for OTU). From the culturable approach, we thus recovered many bacterial taxa that were undetected in the amplicon sequencing performed on gDNA extracted from roots or the rhizosphere, yet only a small proportion of the root bacteria were able to grow on our culture media. Comparison of bacterial taxonomic diversity between culture-independent (CIA) and culture-dependent (CDA) approaches Taxonomic binning was performed at different taxonomic levels for the top 30 phyla and the top 25 classes, orders and genera (Fig 3A-3D). The phylum distribution showed a dominance of Proteobacteria, Bacteroidetes and Firmicutes in all libraries, with a clearly higher diversity of phyla in the CIA samples. We identified 22 bacterial phyla in the rice root sample microbiota, with only 5 present in the CDA (Proteobacteria, Firmicutes, Bacteroidetes, Actinobacteria, Verrucomicrobia). The Proteobacteria phylum was the most abundant in all samples, with a greater proportion noted on the rice flour culture medium. At the class level, the difference in diversity was even more visible, with Gammaproteobacteria, Alphaproteobacteria and Bacteroidia dominating in the CDA, while high class diversity was present in the CIA (Fig 3B). At the order level, the CIA showed (as expected) high diversity, while the CDA data were dominated by Enterobacteriales, Betaproteobacteriales, Rhizobiales and Flavobacteriales. Finally, in the top 25 genera, differences among CDA libraries clearly appeared, with the exception of the Enterobacter genus, which was enriched in all (although to a lesser extent for NFb) (Fig 3D). In the CIA, Devosia was the most represented genus. To better visualize the sampled diversity distribution, we built a phylogenetic tree of ASV (diversity labelled at the class level) and mapped their distribution and abundance in the different conditions (coloured outer circles) (Fig 3E). This representation clearly highlights which taxa are sampled and over-represented with the media used in the CDA (e.g. Gammaproteobacteria in blue or Firmicutes in pink), and which whole parts of bacterial diversity were missed compared to the CIA (e.g. Patescibacteria, Armatimonadetes, Deltaproteobacteria, Planctomycetes, Chloroflexi). Statistical differential analyses between CIA and CDA at class and genus levels We performed a Kruskal-Wallis test (α = 0.05, with the Bonferroni multiple test correction method) to identify classes of bacteria with significant differences among CDA and CIA conditions. The statistical test identified 45 classes of bacteria that passed the significance cut-off (p < 0.05), 37 of which were present only in the CIA (S3 Table), including, among the top 10 most frequent class taxa, Ignavibacteria, Saccharimonadia, Fibrobacteria and Acidobacteria. Four classes were present in both the CIA and CDA: Alphaproteobacteria, Gammaproteobacteria, Bacteroidia and Actinobacteria, with Alphaproteobacteria and Gammaproteobacteria being the most represented in the CIA and CDA, respectively (also visible in Fig 3B and 3C). Then we performed differential analyses on the mean relative abundance of bacterial genera in each condition, using a Kruskal-Wallis test (α = 0.05). 
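A minimal sketch of the per-taxon Kruskal-Wallis comparison with Bonferroni correction described above is given below. The abundance values and groupings are invented for illustration; the published analysis relied on R packages (Phyloseq, ALDEx2 and related tools) rather than this Python code.

```python
from scipy.stats import kruskal

# Relative abundances (%) of each genus in replicate libraries per condition (illustrative).
abundances = {
    "Devosia":      {"CIA": [8.1, 8.4, 8.3], "TSA10": [0.9, 1.2, 1.0], "NFb": [2.0, 1.7, 2.2]},
    "Enterobacter": {"CIA": [0.3, 0.2, 0.4], "TSA10": [24.0, 27.5, 22.1], "NFb": [9.5, 8.8, 10.2]},
}

alpha, n_tests = 0.05, len(abundances)
for genus, groups in abundances.items():
    stat, p = kruskal(*groups.values())   # Kruskal-Wallis across conditions
    p_adj = min(1.0, p * n_tests)         # Bonferroni correction
    flag = "significant" if p_adj < alpha else "n.s."
    print(f"{genus}: H = {stat:.2f}, adjusted p = {p_adj:.3f} ({flag})")
```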
Table 2 shows the 50 most abundant genera in the CIA and their mean relative abundance in each media dataset (whole dataset is available in S4 Table). Among the 20 most frequent bacterial genera in the CIA, eleven were detected in the CDA. These were Devosia (8.25% of all genera), obtained on TSA10, TSA50 or NFB media; followed by Pseudoxanthomonas (3.62%), which was found in all media conditions except RF, then Stenotrophomonas (3.36%), Bacillus (2.29%), Pseudomonas (1.42%) and Allo/Neo/Para/Rhizobium (1.3%) found in all media; and finally Sphingopyxis (2.1%) detected in TSA50; Streptomyces (1.48%) in NGN and Pseudolabrys (1.47%) in NFb. We built Venn diagrams on shared and specific diversity at ASV (Fig 4A) and genus levels (Fig 4B and 4C). Among the 244 genera from the CIA, 173 (71%) were absent from the culturable approach, while 71 were shared (29%) and 70 others were CDA-specific (Fig 4B). We also compared the genus diversity sampled in each CDA medium, and we listed specific genera obtained for each media on the Venn diagram in Fig 4C. To document genera that were the most frequent in the culturable approach for each medium, a table of the 20 most statistically frequent genera (Kruskal-Wallis test, α = 0.05) obtained for each medium of the CDA is given in Table 3. In this top 20 most frequent genera, several appeared in all media: Enterobacter, Stenotrophomonas, Bacillus, Sphingobacterium, Klebsiella, Brevundimonas and Rhizobium, all of which are known to be fast-growers on rich media and reported to contain plant-inhabiting species. On nitrogen-free media, species known as nitrogen-fixing Plant Growth Promoting Rhizobacteria (PGPR) were sampled: Azospirillum, Para/Burkholderia, Bradyrhizobium, Sphingomonas, etc. Prediction of enriched functions in CIA compared to culture-based approach We performed a functional prediction analysis using PICRUSt2 to infer metabolic capacities from our 16S amplicon ASV. In order to assess the predictive ability of the PICRUSt2 algorithm on our dataset, we focused on the specific enzyme nitrogenase (EC. 1.18.6.1) prediction in CDA libraries that included medium with (TSA, RF) or without nitrogen (NGN, NFb) ( Fig 5A). As expected, we observed nitrogenase enrichment (p = 0.00492) in nitrogen-free NFb and NGN media, with NGN medium exhibiting much higher enrichment than NFb. The non- selective medium (TSA) and the plant-based medium (RF) did not enrich bacterial taxa with the nitrogenase function ( Fig 5A). We also aimed to predict which functional pathways were specific to CIA compared to CDA in order to help design conditions to capture the yet unculturable diversity. We thus analysed the metabolic pathways (based on PW/Metacyc categories) predicted as being enriched in the CIA compared to CDA conditions, and represented the results in a dot-plot (Fig 5B). Among the detected Metacyc pathways enriched in CIA, several functions linked to specific ecological niche abilities were detected: anaerobic/fermentation metabolism, carbon dioxide fixation, bacterial photosynthesis, methanotrophy and methylotrophy. As our CDA culture conditions were aerobic and in the dark, this enrichment was logical and gave clues on the culture conditions that could ultimately capture more bacterial diversity. Enriched pathways in the TSA and RF media libraries (compared to others) could be linked to heterotrophy on rich media in aerobic conditions (sugar degradation, amino acid/lipid/nucleotide biosynthesis, vitamin biosynthesis). 
For the nitrogen-free media (compared to the others), several pathways were detected, such as phenolic compound/polyamine/amino acid degradation and sugar degradation. Nitrogen fixation does not appear as such in MetaCyc pathways; it is embedded in "nitrogen metabolism" together with "nitrification" and "denitrification" capacities, among others, so that no pattern specific to nitrogen fixation ability could be derived apart from the analysis of the EC number of the nitrogenase enzyme (Fig 5A). Discussion In this study we used Illumina sequencing on 16S-amplicon barcodes (variable region V3-V4) to quantify the bacterial diversity bias that occurs when culturing rice root-associated bacteria on a range of culture media compared to the real diversity. Our goal was to precisely document the bacterial taxon diversity that could be recovered from a set of culture media compared to the real diversity, and the predicted enriched/depleted functions that could be inferred from this diversity, while seeking how to design new culture conditions to capture it. We have used the term "real diversity" in reference to that captured by Illumina amplicon sequencing, although this approach can also be biased, since it is based on the amplification of a marker gene from a DNA matrix that could originate from dead bacteria. As rice is a non-perennial plant and we expected root and rhizosphere soil to be under high metabolic turnover, we hypothesized that DNA from dead bacteria in the culture-independent approach would not represent high diversity in our analysis. Yet this is a possibility and could represent a bias from the unculturable approach when comparing culture-based and culture-independent diversity analyses. Several studies have already compared culturable and real rice microbiota diversity [35][36][37], but they often relied on comparisons between regular 16S Sanger sequencing on isolated bacteria compared to NGS sequences. Here we used the same sequencing methodology at high depth and were able to compare diversity levels without sequencing/analytical bias.
[Displaced caption: Colors indicate frequencies of occurrence in the 16S amplicon data (highest in red, lowest in green, for each medium and CIA condition). Kruskal-Wallis test, α = 0.05.]
We also used two different analytical methods to infer operational taxonomic units, i.e. ASV (based on exact sequence variant detection [12]) or OTU (based on clustering by swarming [11]). As ASV analysis detected more diversity than OTU at different levels (even class and order levels, Table 1), we preferred to use ASV for all subsequent diversity and functional predictions. As exact sequence variants rely heavily on algorithms for sequencing error detection and correction, we could not exclude that some of the obtained diversity was due to algorithm imperfections. However, given that the obtained higher diversity also concerned higher taxonomic levels, this artificial diversity issue is unlikely as it would involve a high number of mutations. In this study, the diversity obtained from the CDA culture media (TSA10, TSA50, RF, NGN, NFb) was lower compared to the CIA. If we combine all diversity of the CDA, it represents 11.7% (ASV level), 29% (genus level), 22.4% (class level), 25.6% (order level) and 23.1% (family level) of the diversity of the CIA. As there are few comparable studies in the literature, it is hard to determine if our recovery rate was low or high, since this is the first study to our knowledge to have assessed culturable recovery by amplicon barcoding and NGS sequencing. 
The review of Sarhan et al. [16] detailed recent advances in culturomics methodologies, and established a recovery rate of about 10% for conventional chemically synthetic culture media, which is in the range that we obtained at the ASV level (although we obtained 23 to 29% at higher taxonomic levels). Samson et al. [36] claimed to have recovered up to 70% of bacterial genera (on a 16S V5-V7 amplicon, at >97% similarity) from Oryza sativa indica and japonica rice microbiota, but they applied a 0.1% frequency cut-off. Applying the same cut-off on our dataset would indicate the detection of 121 genera in the CIA, with 36 (29%) of them present in the CDA. From all media used in the CDA, we could recover a total of 142 bacterial genera, with each medium capturing 15 to 23 specific genera (Fig 4C). The only exception was the plant-based rice flour medium, which in our study captured low bacterial diversity compared to the other media, probably due to its low composition complexity. Plant-based media have been suggested to be a good alternative to popular bacterial chemical media for increasing the cultivability of plant-associated microbes [16], but the use of homogenised roots, leaves or exudates has been recommended to complement minimal or more complex media. The recovery of specific ASV from the CDA that were not detected in the CIA was an unexpected finding in our study. This diversity represented 532 ASV, 1 class, 3 orders, 16 families and 70 genera (Table 1, Fig 4A and 4B). This number of ASV in the CDA may seem high (the total number in both CIA and CDA was 1,647), but it only represented a quarter of the total ASV diversity at the class, order and family levels (Table 1). We cannot exclude that a technical PCR bias could have increased the diversity from the CDA, since DNA polymerase errors may arise. Moreover, if there is low diversity in the DNA matrix, a diverse range of sequence variants could be produced, but these errors would only affect diversity at species or genus levels in the amplicon sequencing, not at higher taxonomic levels as in our results. One explanation for not detecting, in the CIA, the ASV diversity found in the CDA could concern the sequencing depth, yet the rarefaction curves did reach a plateau, but at much higher alpha diversity for the CIA compared to the CDA (Fig 2A). The mean sequencing depth obtained was 36,120. If differences between bacterial ASV frequencies exceed 10^4, then several genera may be undetected in the CIA approach, whereas they may be selected by specific culture media. We set the read number filter at 10 (cumulated in all libraries), but we also looked at lower filtering (>2) and unfiltered ASV data (S5 Table). In the unfiltered data, we counted 102 specific bacterial genera for the CDA, 243 for the CIA, with 90 in common, while these numbers were 70, 173 and 71 in the filtered results (10 read filter, Table 1 and Fig 4B), i.e. similar proportions. Processing unfiltered data thus produced similar proportions of specific ASV for the CDA compared to the CIA. Regardless of the filtering method, we detected one specific class in the CDA (undetected in the CIA), i.e. Erysipelotrichia, represented by one genus, i.e. Erysipelothrix, and 4 ASV recovered from TSA medium (at 10 and 50% concentration). A BLAST study of these ASV sequences revealed 100% sequence identity with the 16S rDNA of Erysipelothrix inopinata, a species whose type strain was isolated from sterile-filtered vegetable broth [38]. 
As our medium was sterilized by autoclaving, it is unlikely that these 4 ASV were contaminants. It should also be noted that we are not the first to have found that isolates from culture-based approaches were undetected in culture-independent approaches [14,37]. It would be better to assess rice microbiota diversity by substantially increasing the sequencing depth in order to get a better image of the overall diversity. The frequency differences between ASV exceeding 10^3 (at the genus level, S4 Table) that we found in our study mean that such taxa would not be detected in the CIA, while they would be in the culture-based approach. This was a crucial finding, since several studies have underlined the role of rare species (also called satellite taxa) in plant-microbe interactions and more broadly in key ecosystem functions [13,39]. Increasing the representativeness of taxonomic diversity in databases should also be the focus of further scientific research, since many ASV cannot be affiliated to taxonomic ranks due to missing descriptions of these taxa in taxonomic databases. We also tried to predict functions and metabolic pathways that would be enriched when using different types of media, and we conducted statistical tests to highlight functions that were missing from our culture-based approach. Metagenome-guided isolation and cultivation of microbes has been developed in recent years, but these approaches are based on metagenomic sequences and the reconstruction of genomes and metabolic pathways [19]. The massive sequencing effort focused on a highly diverse range of bacteria from different environments has led to the development of genome databases and prediction tools that may be used with simple amplicon taxonomic markers [18]. We applied a prediction tool to our dataset to investigate the ecology and functional capacities of our detected bacteria. We found that many taxa with anaerobic metabolisms such as methanogenesis (methane production), methanotrophy (methane degradation) and methylotrophy (one-carbon reduction), or with photosynthetic capacities, were missing from the CDA (Fig 5) compared to the CIA. It is well known that rice microbiota differ from the microbiota of other crops since rice is often grown in flooded conditions, thereby creating an oxic-anoxic interface between the rhizosphere/root system and the bulk soil [4,5]. Our functional prediction approach thus underlined the presence of these probably strictly anaerobic bacteria adapted to anoxic conditions in the CIA, and their absence from the CDA. These predictions provide clues on the specific conditions and compositions of media required to capture these as-yet unculturable functional groups of bacteria. They could also serve to develop culturomics, a growing scientific field for microbiologists interested in synthetic microbiota and for biotechnological applications of plant-associated microorganisms.
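As a rough illustration of the statistical screening of predicted functions mentioned above, the hedged sketch below applies a Kruskal-Wallis test to the per-sample relative abundance of taxa predicted to carry a given function across culture conditions; all abundance values are invented for the example and the media labels are reused only as placeholders.

```python
# Sketch only: testing whether a predicted function (e.g. methanogenesis) differs
# between the culture-independent samples (CIA) and two culture media.
# The abundance values are hypothetical, not data from this study.
from scipy.stats import kruskal

predicted_abundance = {
    "CIA":   [0.12, 0.15, 0.10, 0.14],  # per-sample relative abundance of taxa
    "TSA10": [0.00, 0.01, 0.00],        # predicted to carry the function
    "NFb":   [0.00, 0.00, 0.02],
}

stat, p = kruskal(*predicted_abundance.values())
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p:.3f}")
if p < 0.05:
    print("Predicted function differs between conditions (alpha = 0.05).")
```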
2023-04-07T06:18:05.652Z
2023-04-06T00:00:00.000
{ "year": 2023, "sha1": "6b09fef65016ac9faeb6567c62f52e2946b518b7", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0279049&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c2b8b0854fa15576856044674a9ba933b502858f", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
73107624
pes2o/s2orc
v3-fos-license
CLINICO-HISTOPATHOLOGICAL CORRELATION IN LEPROSY: A TERTIARY CARE HOSPITAL BASED STUDY Introduction: Leprosy is a chronic infectious disease affecting mainly the skin and peripheral nervous system. Histopathology is an important tool to diagnose leprosy in situations where it mimics other clinical conditions. This study was conducted to determine the correlation between the clinical and histopathological diagnosis of leprosy. Material and Methods: This was a retrospective study enrolling patients in whom leprosy was clinically diagnosed or suspected and in whom histopathological examinations were carried out. Results: A total of 71 patients were studied. Of them, 48 patients (67.6%) were males and the remaining 23 (32.39%) were females. Mean age at presentation was 37.85 ± 2.021 years. Clinically, in 42 patients (59.1%) the type of leprosy could not be specified. Borderline tuberculoid was diagnosed in 7 patients (9.8%), tuberculoid in 6 (8.5%), relapse in 3 (4.2%), lepromatous in 6 (8.5%), borderline/borderline lepromatous in 1 (1.4%) and indeterminate in 1 patient (1.4%). In 7% of cases, Hansen's disease was considered as a differential diagnosis along with other clinical conditions. In 47% of cases, data were not available. On histopathological evaluation of skin biopsies, epidermal changes were seen in 29.5%. Of the total 71 patients, the dermal changes seen were granuloma (42%), dermal infiltrate (11%), adnexal infiltrate (7%), nerve infiltrate (11%), adnexal with nerve infiltrate (6%), perivascular with adnexal infiltrate (20%) and nonspecific (3%). Dermal infiltrates in 46.4% of cases consisted of lympho-histiocytes. In 48 patients (69%) leprosy was histopathologically confirmed; in the remaining 31% of cases the diagnosis was non-specific in 20 patients (28.1%), with vasculitis, Darier's disease and fungal infection in 1 patient each (1.4%). Borderline tuberculoid (BT) and TT were the most common diagnoses among leprosy patients, around 29.2% each, followed by indeterminate 25%, LL 8.3%, and BL and pure neural 4.1% each. When the clinical and histopathological diagnoses were correlated, parity was seen in TT in 66.6%, BT 42.9% and LL 16.7%. Where Hansen's disease was kept as a differential diagnosis, two patients had leprosy. Conclusion: The study being retrospective, the uniformity of clinical diagnosis and histopathological evaluation could not be assessed. Within these limitations, this study still provides information on the importance of histopathology for diagnosing leprosy, assigning the proper treatment category and decreasing the burden of the disease in society. Introduction Leprosy is a disease affecting mainly the skin and peripheral nervous system, but it can also affect other organs, and it is one of the most common public health problems in this country [1]. In Nepal, although leprosy has been on the decline, with the government declaring elimination of leprosy after achieving a prevalence rate of 0.89 per 10,000 persons, the disease still prevails [2]. According to the Ridley & Jopling classification, it is classified on the basis of the clinical, histopathological and immunological status of the host. Due to its clinical diversity as well as its ability to mimic other diseases, leprosy is sometimes difficult to diagnose clinically. In such catch-22 situations, histopathological examination is a helpful diagnostic tool to confirm the diagnosis. This study was conducted to determine the correlation between clinical and histopathological diagnosis of leprosy in a tertiary care hospital setting.
Material and Methods We conducted a retrospective study in the outpatient department of Dermatology, Nepal Medical College and Teaching Hospital. We enrolled patients between 2008 and 2012 in whom leprosy was clinically diagnosed or suspected and in whom histopathological examinations were carried out. The data were retrieved from the records maintained in the department, including age, sex, residence, clinical diagnosis, histopathological findings and treatment. Results A total of 71 patients were studied. Of them, 48 patients (67.6%) were males and the remaining 23 (32.39%) were females. The youngest patient was 12 years old and the oldest 80 years at presentation; the mean age at presentation was 37.85 ± 2.021 years. Clinically, in 42 patients (59.1%) the type of leprosy could not be specified (Table I). Borderline tuberculoid was diagnosed in 7 patients (9.8%), tuberculoid in 6 (8.5%), relapse in 3 (4.2%), lepromatous in 6 (8.5%), borderline/borderline lepromatous in 1 (1.4%) and indeterminate in 1 patient (1.4%). In 7% of cases, Hansen's disease was considered as a differential diagnosis along with other clinical conditions. Slit skin smear was positive in 4 cases (5.6%) and negative in 25 (35%). PAS stain was positive in 1 patient (1.4%). Fite stain was positive in 2 patients (2.8%) and negative in 9.8% of cases. In 47% of cases, data were not available. On histopathological evaluation of skin biopsies, the epidermal changes seen were thinning (11.26%), hyperkeratosis (9.8%), acanthosis (7%) and clefting (1.4%); the epidermis was normal in 70.4% of patients. Interface dermatitis was seen in 2.8% of cases and a grenz zone in 7%, but in 90.1% interface changes were not specified. Of the total 71 patients, the dermal changes seen were granuloma (42%), dermal infiltrate (11%), adnexal infiltrate (7%), nerve infiltrate (11%), adnexal with nerve infiltrate (6%), perivascular with adnexal infiltrate (20%) and nonspecific (3%). Dermal infiltrates in 46.4% of cases consisted of lympho-histiocytes, followed by lymphocytes (39.4%), epithelioid cells (8.4%) and foamy cells (8.4%), and were not specified in 3% of cases. Of the 4 cases with infiltrates in the subcutaneous layer, 2 had giant cells and 1 each had lymphocytic and mixed cellular infiltrates. In 48 patients (69%) leprosy was histopathologically confirmed; in the remaining 31% of cases the diagnosis was non-specific in 20 patients (28.1%), with vasculitis, Darier's disease and fungal infection in 1 patient each (1.4%). Borderline tuberculoid (BT) and TT were the most common diagnoses among leprosy patients, around 29.2% each, followed by indeterminate 25%, LL 8.3%, and BL and pure neural 4.1% each. When the clinical and histopathological diagnoses were correlated, parity was seen in TT in 66.6%, BT 42.9%, LL 16.7%, unclassified cases 69%, relapse 66.7% and Hansen's disease as a differential 40%. There was no parity in BL, pure neural and indeterminate cases. There were some interesting findings, such as indeterminate cases being diagnosed more often histopathologically. One LL case was found to be TT histopathologically. Where the clinical diagnosis was not specified, 69% of patients had leprosy. Where Hansen's disease was kept as a differential diagnosis, two patients had leprosy. Details of the correlation between clinical and histopathological diagnosis are given in Table II.
Discussion In developing countries like Nepal, leprosy is still one of the major public health problems. The Ridley-Jopling classification is a standard classification for diagnosing leprosy, based on the clinical, histopathological and immunological status of the host. In our study, clinicopathological correlation was found in TT in 66.6%, BT 42.9% and LL 16.7%, and where cases were not classified according to the Ridley-Jopling criteria it was 69%, meaning that where Hansen's disease was suspected clinically it was confirmed histopathologically, and this percentage of patients was treated and rendered noninfectious. On statistical analysis this was statistically significant (P value 0.034). Pandya et al. found parity in 68.3%, Moorthy et al. in 62.63% [4], Kar et al. in 70%, Jerath et al. in 68.5% and Mathur et al. in 80.4% [2][3][4][5][6][11][12][13]. Most of these studies, such as Moorthy et al., Kar et al. and Jerath et al., found parity at the TT pole and Mathur et al. at the LL pole [14]. Our study also found parity in TT and BT. Jha et al. also found parity in BT cases [7]. There was a lack of uniformity in clinical impression and clinical details in our study. The slit skin smear report was not available in 40%, and in 47% the Fite stain was not mentioned. In histopathology, too, the Ridley-Jopling classification was not used. Interface changes were not interpreted in 90.1%. Among the dermal changes, none of the reports described the exact location of the granuloma or whether it infiltrated appendages or not. In 53%, the location of the dermal infiltrate was not mentioned. There were some interesting findings in our study, such as one case of LL that was found to be histopathologically TT; on histopathological evaluation an epithelioid giant cell granuloma was seen, but it was not mentioned whether it eroded the epidermis or not. Most of the indeterminate cases were diagnosed histopathologically, where periadnexal and perineural infiltrates were seen. In two patients a granuloma was also found, which histopathologically does not fit the indeterminate type. Moorthy et al. [4] also found the indeterminate type more often histologically than clinically. Due to its non-specific histology it is difficult to diagnose the IL type. It also depends upon various factors such as depth of biopsy, quality of sections, number of sections examined and staining method, including both H&E and acid-fast stains [4,[8][9][10]. Clinically, where the diagnosis was not specified, 69% had a histopathological diagnosis of leprosy. Where Hansen's disease was kept as a differential diagnosis, two patients had leprosy. Most of the above studies strictly followed the Ridley-Jopling classification; although ours did not, the percentage of parity in their studies is similar to that in ours. It is therefore important to have histopathological evaluation in suspected cases of leprosy, mostly in the borderline groups and where slit skin smears are negative. Clinical information such as site of lesion, type of lesion, nerve involvement, sensory impairment and treatment history, along with the immunological status of the patient, is very important for the pathologist for histopathological correlation. Histopathological diagnosis also depends on various factors such as size of the biopsy specimen, age of the lesion, depth of biopsy and quality of sections, and, importantly, interobserver variation also plays a role in clinico-pathological evaluation [15].
Conclusions There are certain limitations in our study. The study being retrospective, the uniformity of clinical diagnosis and histopathological evaluation could not be assessed. Within these limitations, this study still provides information on the importance of histopathology: in a few of the cases where the disease was not specified or Hansen's disease was kept as a differential diagnosis, histopathology identified different poles of Hansen's disease as well as other conditions such as Darier's disease or fungal infection, which is important from the treatment point of view. Diagnosis is sometimes difficult on clinical grounds due to the varied presentation of leprosy and its ability to mimic other diseases; histopathological examination is therefore needed to confirm the diagnosis, assign the proper treatment category and decrease the burden of the disease in society. Table II. Correlation between clinical and histopathological diagnosis; P = 0.034 according to Pearson's rank correlation. To determine the clinico-histopathological correlation of skin biopsies in leprosy, statistical evaluation was performed using SPSS version 11.5. The chi-square test and Fisher's exact test were used to assess statistical significance, and a p value < 0.05 was considered significant.
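As an illustrative aside, the chi-square approach used above for the clinico-histopathological correlation can be sketched with a small contingency table; the counts below are hypothetical, chosen only to mimic the parity percentages reported, and are not the data of Table II.

```python
# Hypothetical 2x2 illustration of testing clinical vs histopathological concordance.
from scipy.stats import chi2_contingency

#        concordant, discordant
table = [[4, 2],   # clinically TT  (~66.6% parity)
         [3, 4]]   # clinically BT  (~42.9% parity)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```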
2018-05-31T21:07:15.174Z
2013-07-09T00:00:00.000
{ "year": 2013, "sha1": "9a962dbcb2fa6945d677e43c80f710608d470048", "oa_license": "CCBY", "oa_url": "http://www.odermatol.com/odermatology/32013/7.Clinico-ThapaDP.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "9a962dbcb2fa6945d677e43c80f710608d470048", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
146294
pes2o/s2orc
v3-fos-license
Association of Polymorphisms of Genes Involved in Lipid Metabolism with Blood Pressure and Lipid Values in Mexican Hypertensive Individuals Hypertension and dyslipidemia exhibit an important clinical relationship because an increase in blood lipids yields an increase in blood pressure (BP). We analyzed the associations of seven polymorphisms of genes involved in lipid metabolism (APOA5 rs3135506, APOB rs1042031, FABP2 rs1799883, LDLR rs5925, LIPC rs1800588, LPL rs328, and MTTP rs1800591) with blood pressure and lipid values in Mexican hypertensive (HT) patients. A total of 160 HT patients and 160 normotensive individuals were included. Genotyping was performed through PCR-RFLP, PCR-AIRS, and sequencing. The results showed significant associations in the HT group and HT subgroups classified as normolipemic and hyperlipemic. The alleles FABP2 p.55T, LIPC −514T, and MTTP −493T were associated with elevated systolic BP. Five alleles were associated with lipids. LPL p.474X and FABP2 p.55T were associated with decreased total cholesterol and LDL-C, respectively; APOA5 p.19W with increased HDL-C; APOA5 p.19W and FABP2 p.55T with increased triglycerides; and APOB p.4181K and LDLR c.1959T with decreased triglycerides. The APOB p.E4181K polymorphism increases the risk for HT (OR = 1.85, 95% CI: 1.17–2.93; P = 0.001) under the dominant model. These findings indicate that polymorphisms of lipid metabolism genes modify systolic BP and lipid levels and may be important in the development of essential hypertension and dyslipidemia in Mexican HT patients. Introduction Hypertension and dyslipidemia are major global public health problems because they are highly prevalent, contribute to the development of cardiovascular disease, and are components of metabolic syndrome. In Mexico, 26.6% of the working-age population is hypertensive (HT), 48.7% exhibits high total cholesterol (TC) levels (≥200 mg/dL), and 57.3% displays high triglyceride (TG) concentrations (≥150 mg/dL) [1]. Essential hypertension (EH) is a multifactorial disease, whose development involves environmental and genetic factors. A significant percentage (30%-60%) of blood pressure (BP) variation is due to genetic factors, and at least 69 genes have been associated with EH [2]. A group of genes that have been less well studied but may play an important role in EH development are those encoding proteins involved in lipid metabolism. Hypertension and dyslipidemia show an important clinical relationship because an increase in nonesterified fatty acids (NEFAs) (lipoprotein metabolism products) increases BP, but, in normal conditions, the endocrine, sympathetic, and parasympathetic systems stabilize BP [3]. However, in the presence of chronic hyperlipidemia, such regulatory mechanisms are insufficient, and the sustained increase in NEFAs directly affects cardiac output and peripheral resistance, which are important hemodynamic variables that regulate BP. Cardiac output can be altered by a decrease in the sensitivity of baroreceptors, increased synthesis of catecholamine, and an increased heart rate [3,4], while peripheral resistance may increase due to alpha-1 adrenergic receptor hypersensitivity [5], endothelial dysfunction, and atherosclerotic plaque formation [6]. Considering the close relationship between BP and blood lipid levels, we selected seven polymorphisms of seven genes that have mainly been associated with cardiovascular disease, atherosclerosis, dyslipidemias, and obesity.
(1) APOA5 p.S19W (rs3135506): the APOAV protein plays an important role in triglyceride (TG) metabolism, and the p.19W allele is negatively correlated with the secretion of this protein [7] and has been associated with increased TG and high-density lipoprotein cholesterol (HDL-C) levels [8,9]. (2) APOB p.E4181K (rs1042031): the APOB protein is the main apolipoprotein found in chylomicrons (CM) and low-density cholesterol lipoproteins (LDL-C). The p.4181K allele increases LDL-C catabolism; therefore, it is associated with low TC, LDL-C, and APOB protein levels [10]. (3) FABP2 p.A55T (rs1799883): the FABP2 protein is involved in fatty acid use in the intestine, and the p.55T allele shows a higher affinity for fatty acids; hence, it promotes TG-rich lipoprotein formation [11]. (4) LDLR c.1959C>T (rs5925): the LDL receptor is involved in cholesterol homeostasis through lipoprotein endocytosis, and the c.1959T allele is associated with low TC, LDL-C, HDL-C, and TG in different groups of individuals [12,13]. (5) LIPC −514 C>T (rs1800588): the LIPC protein functions as both a lipase and ligand, and the −514T allele has been positively correlated with HDL-C levels and negatively correlated with LIPC activity [14]. (6) LPL p.S474X (rs328): the LPL protein hydrolyses TG from very low-density lipoprotein cholesterol (VLDL-C) and CM. The mutated allele increases this enzymatic activity; hence, it improves TG-rich lipoprotein depuration [15]. (7) MTTP −493 G>T (rs1800591): MTTP is involved in VLDL-C and CM assemblage and secretion, and the homozygous genotype −493 TT has been associated with low TC, LDL-C, APOB, and MTTP mRNA levels [16]. The aim of the current study was to analyze the relationship of the APOA5 p.S19W, APOB p.E4181K, FABP2 p.A55T, LDLR c.1959C>T, LIPC −514 C>T, LPL p.S474X, and MTTP −493 G>T polymorphisms with blood pressure and lipid levels in Mexican HT patients and in subgroups classified according to the presentation of different dyslipidemias. Study Population. A total of 716 unrelated individuals aged 19-81 years were recruited from Family Medicine Unit number 93 and the Centro de Investigación Biomédica de Occidente of the Instituto Mexicano del Seguro Social in the metropolitan area of Guadalajara, Jalisco, Mexico. Blood samples were drawn from each of the subjects after at least 12 hours of fasting and without drinking alcohol for the preceding 72 hours. Two tubes of blood were collected; the first was used for biochemical testing, and the second was used to extract DNA. Biochemical measurements were performed using enzymatic methods with commercial kits and a semiautomatic ALS 2000 spectrophotometer. Teco diagnostics (Anaheim, CA), Trinder GOD-POD, Spinreact (Esteve de Bas, Girona, Spain), and AccuBind kits (Lake Forest, CA) were employed to determine the lipid profile (TC, LDL-C (using the Friedewald formula), HDL-C, and TG), glucose, and insulin, respectively. Blood pressure was measured according to the criteria proposed by NOM-030-SSA2-2009 [17]. From the total subjects (n = 716), we selected 320 individuals, mainly considering their blood pressure values, personal clinical history, age, and sex.
The subjects included in the study consisted of 160 HT patients with systolic blood pressure ≥140 mm Hg and/or diastolic blood pressure ≥90 mm Hg [17] (121 women and 39 men), with a mean age of 47 ± 10 years (range 23-69 years), and 160 normotensive subjects (NT) with systolic blood pressure < 140 mm Hg and diastolic blood pressure < 90 mm Hg [17] (114 women and 46 men), with a mean age of 47 ± 11 years (range 20-81 years). Both groups were paired by 10-year age range and by sex, and no significant differences were identified based on the age (P = 0.69) or sex (P = 0.69) variables. None of the included individuals fulfilled all of the criteria for metabolic syndrome. The protocol was designed based on the guidelines outlined in the Declaration of Helsinki. It was reviewed and approved by the institute's ethics committee. The participants were informed of the study objectives and signed a consent letter. Molecular Analysis. DNA was extracted from peripheral blood using standard procedures. Polymorphism genotyping was performed by means of three techniques: (a) polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) analysis, to identify the APOB p.E4181K, FABP2 p.A55T, LDLR c.1959C>T, and LPL p.S474X sites; (b) artificial introduction of restriction sites (PCR-AIRS), to detect APOA5 p.S19W and MTTP −493G>T; and (c) sequencing, for the LIPC −514C>T site. The primers used to identify the MTTP −493G>T polymorphism were designed with Oligo 6.0 software (forward 5'-CTA GTG TGC TAA TGA CAG ACA ATG C-3' and reverse 5'-GGA TTT AAA TTT AAA CTG TTA ATT CAT ATC AC-3'). A 155 bp fragment was amplified and digested with the Hph I enzyme. The PCR mixture included 13.3 ng/µL genomic DNA, 0.46 pg/µL of each primer, 0.204 mM of each dNTP, 1.5 mM MgCl2, and 0.04 U/µL Taq polymerase. The final reaction volume was 15 µL. The amplification program was as follows: initial denaturation for 4 min at 95 °C, followed by 30 cycles of denaturation at 95 °C for 25 sec, annealing at 91 °C for 40 sec, and extension at 72 °C for 45 sec, with a final extension at 72 °C for 7 min. The PCR and amplification conditions for the remaining six sites were described elsewhere [18]. FABP2 p.A55T, which was previously identified through sequencing [18], was genotyped in this work using the restriction enzyme Hha I. The samples employed as digestion controls were confirmed through sequencing in an ABI PRISM 310 sequencer using the BigDye Terminator v3.1 kit and following the manufacturer's instructions. Statistical Analyses. The genotypic and allelic frequencies for the seven polymorphisms were established through direct counting. The frequencies were compared among the groups (NT and HT) using the chi-square or Fisher exact test. The effect of the polymorphisms on BP and lipid profile levels was determined in the group of HT patients and the HT subgroups classified as normolipemic and hyperlipemic according to lipid profile. An analysis of variance (ANOVA) was performed for the genotypes, followed by post hoc tests (Bonferroni or Dunnett, in relation to variance homogeneity). The mean differences between alleles were calculated with Student's t-test. The Pearson (for quantitative and normally distributed variables) and Spearman (for qualitative variables) correlations and multiple linear regression models with genotypes and alleles were employed.
Furthermore, to estimate the relationships of the seven polymorphisms with hypertension and different dyslipidemias, multiple logistic regression models were generated. A P value of < 0.05 was considered statistically significant. The analyses were performed using SPSS v.18.0. Genotypic and Allelic Frequencies. The genotype and allele frequency distributions of the seven polymorphisms are shown in Table 1. All three possible genotypes were observed at each site and exhibited heterogeneous frequencies. The frequencies of the mutant genotypes of FABP2 p.A55T, LDLR c.1959C>T, and LIPC −514C>T were approximately 30% in the HT and NT groups. The four remaining polymorphisms displayed frequencies lower than 5%. In both of the studied groups, the mutant alleles of LDLR c.1959T and LIPC −514T were more frequent than the wild-type alleles; in contrast, the LPL p.474X mutant allele did not exceed a 10% frequency. Comparison of the genotypic and allelic frequencies between the groups revealed statistically significant differences only for the APOB p.E4181K polymorphism (P = 0.01 and P = 0.04, resp.) due to a higher percentage of heterozygotes in the HT group (Table 1). Effect of the Polymorphisms on the Lipid Profile and Blood Pressure of HT Patients. In the association analysis between the polymorphisms and the variables, 70 patients from the total HT group receiving antihypertensive and/or hypolipidemic treatments were excluded. Considering all of the HT patients not receiving drug treatment (n = 90), the following significant associations with TC and TG were observed. Subjects with the LPL p.474X allele showed lower TC concentrations (185.0 mg/dL) than the patients with the p.474S allele (203.0 mg/dL, P = 0.006), while individuals with the FABP2 p.55T allele exhibited higher TG concentrations (161.2 mg/dL) than the patients with the p.55A allele (131.2 mg/dL; P = 0.006). A significant multiple linear regression model with the FABP2 p.55T and LDLR c.1959T alleles to explain TG levels was established (Table 4). Hypertensive Patients Classified as Normolipemic and Hyperlipemic: Effect of the Polymorphisms on the Lipid Profile and Blood Pressure. All of the studied polymorphisms are located in genes that modulate lipid metabolism and have been associated mainly with dyslipidemia and/or cardiovascular disease. The distribution of dyslipidemias in HT patients is shown in Table 2, and the classification was done according to NOM-037-SSA2-2002 [19]. Association analysis was performed in two subgroups of hypertensive patients, normolipemic (without dyslipidemia) and hyperlipemic (with dyslipidemia) (Table 2). The normolipemic subgroup comprised 50 patients, with a mean age of 50 ± 11 years (range 30-69 years). The hyperlipemic subgroup comprised 40 patients, with a mean age of 45 ± 11 years (range 23-69 years). Age was not significantly different between the subgroups (P = 0.60). Analysis of the two subgroups showed many significant results. However, we present only those for which a multiple linear regression model was generated and which showed significant values for ANOVA, Student's t-test, or Spearman correlation. Table 3 provides the ANOVA, Student's t-test, and Spearman correlation results; two nonsignificant ANOVA values were included to indicate trends (FABP2 p.55T allele for TG and SBP with P = 0.09 and P = 0.08, resp.). The multiple linear regression models for each polymorphism and quantitative variable are presented in Table 4.
To avoid duplicate data, the values of the association tests are not provided in the text and are only shown in tables. APOA5 p.S19W. The p.19W allele was associated with significant increases in HDL-C and TG in normolipemic and hyperlipemic HT patients, respectively. APOB p.E4181K. We detected one significant association in the normolipemic subgroup; the HT patients carrying the p.4181K allele showed a lower TG concentration than those bearing the p.4181E allele. Multiple Linear Regression Models. In total, nine distinct multiple linear regression models with statistically significant differences were generated with genotype and allele data; in Table 4, the results obtained with the alleles are shown. An integrated model with the two polymorphisms FABP2 p.A55T and LDLR c.1959C>T, explaining TG levels, was observed in the total group of hypertensive patients (n = 90) and in the hyperlipemic subgroup. The eight remaining models included only one polymorphism. The TG variable presented a greater number of significant models (Table 4). Logistic Regression Models. In this analysis, we observed two models. Discussion Essential hypertension is a multifactorial disease with a strong genetic component; therefore, genes corresponding to different metabolic routes have been explored. Because increased blood lipid levels are known to result in an increase in blood pressure, in the present work, we analyzed seven polymorphisms of genes involved in lipid metabolism in HT patients to examine their role in the development of essential hypertension and dyslipidemias. It is important to note that when the association analysis was performed in the entire HT group, only a few significant results were observed. However, the analysis of the HT patients classified according to lipid levels revealed more significant associations between the polymorphisms and diverse variables. Studies examining these polymorphisms in different populations have revealed heterogeneous results. A possible explanation is that these differences occur because most of the authors did not analyze the data by separating individuals according to dyslipidemia. Genotypic and Allelic Frequencies. In the genotypic and allelic frequency distribution analyses for the HT and NT groups, we observed significant differences for the polymorphism APOB p.E4181K, based on the presence of more heterozygotes in the HT group (41.9% versus 23.3%). Other studies have analyzed this polymorphism in relation to cardiovascular disease, but the authors did not observe significant differences in the genotype and/or allele distribution [20,21]. We compared the results for the seven polymorphisms with the reported frequencies in the general Mexican population [18,22]. The analysis showed similar genotypic frequency distributions for the general population and NT individuals (P > 0.05). However, we observed significant differences between the general population and the HT group for three polymorphisms: the APOB p.E4181K site (P = 0.008), FABP2 p.A55T (P = 0.002), and the LIPC −514C>T polymorphism (P = 0.04). Only two of the seven analyzed polymorphisms have been studied in HT patients: LDLR c.1959C>T (Chinese population) [12] and LPL p.S474X (Chinese and Caucasian populations) [23]; similar to the results of the present study, the authors did not report significant differences in the genotypic and allelic frequency distribution. Distribution of Dyslipidemias in HT Patients.
In Mexico, dyslipidemias and essential hypertension are the most common risk factors for the development of cardiovascular disease in the general adult population [24,25]. In this study, more than 40% of the subjects exhibited some form of dyslipidemia (hypercholesterolemia or hypertriglyceridemia or mixed hyperlipidemia), which indicates a significant health problem. Various reports have revealed that one of the most frequent dyslipidemias in the Mexican population is hypoalphalipoproteinemia, and similar results were obtained in this study. Low HDL-C levels in HT patients have also been observed in other populations at frequencies higher than 30% [26]. Although a clinical finding of hypoalphalipoproteinemia does not represent a health problem by itself, it is an important risk factor for developing cardiovascular disease and metabolic syndrome when associated with high TG and TC levels. Furthermore, in patients with hypertension, several coincident biochemical alterations can increase cardiovascular risk and complicate hypertension management. Effect of the Polymorphisms on the Lipid Profile and Blood Pressure of Hypertensive Patients and Normolipemic and Hyperlipemic Subgroups APOA5 p.S19W. The APOAV protein is a component of the lipoproteins HDL-C, VLDL-C, and CM; it activates LPL for efficient TG lipolysis. In this study, the p.19W allele was associated with increased HDL-C in normolipemic HT patients (Tables 3 and 4). This association between high HDL-C levels and the p.19W allele has been reported in healthy subjects from Puerto Rico [8]. High TG levels associated with the p.19W allele were observed in hyperlipemic HT patients. Similar results have been detected in young, healthy Caucasian males [9], Spanish subjects in the ICARIA project [27], and Caucasian children 6-8 years old [28]. This association was also reflected in the logistic regression analysis because, in the HT group, the p.19W allele (under the recessive model) increased the risk of presenting mixed hyperlipidemia by more than 8-fold. In general, the findings of increased HDL-C and TG are consistent with the function of the APOA5 p.19W variant because reduced secretion of the protein decreases LPL activity [7], increasing TG levels in plasma. Moreover, APOAV is a component of HDL-C, and APOAV deficiency may reduce the metabolism of these particles, which increases their blood concentration. APOB p.E4181K. APOB is the primary protein found in VLDL-C, IDL-C, and LDL-C; thus, it is important for cholesterol homeostasis. The APOB p.E4181K polymorphism is located in the proximal portion of the terminal carboxyl and increases LDL-C catabolism, with a consequent decrease of LDL-C and APOB levels [10]. In the present study, the p.4181K allele was associated with lower TG levels in normolipemic HT patients; this association has not been observed in other populations. The association between low TC and LDL-C levels and the p.4181K allele has been demonstrated in most populations studied to date. However, the literature is inconsistent regarding whether the p.4181K allele is protective or is a risk factor for cardiovascular diseases, as both models have been observed [10]. In this work we found a risk of developing hypertension conferred by the p.4181K allele, with an OR = 1.85; 95% CI: 1.2-2.9; P = 0.01, which is similar to the findings of a meta-analysis of 30 case-control reports where the risk of developing cardiovascular disease and myocardial infarction was evaluated, and the authors obtained an OR of 1.73 (95% CI: 1.19-2.50) [29].
Thus, we suggest that the p.4181K allele increases the risk of developing cardiovascular disease, myocardial infarction, and hypertension; however, its pathway of action must be different from that related to lipids because this allele is associated with lower TG, TC, and LDL-C concentrations. FABP2 p.A55T. The FABP2 p.A55T polymorphism has been extensively studied and referenced in the literature. The p.55T allele results in a higher affinity of this protein for long-chain fatty acids and, hence, greater absorption of such fatty acids [11]. In this study, the p.55T allele was associated with increased SBP in normolipemic HT patients, which differs from the findings of de Luis, who observed a decrease in SBP in nondiabetic obese subjects [30]. However, an association between the mutated allele and an increase in TG was observed in the HT patients and the hyperlipemic subgroup and has been reported previously [31,32]. This association is consistent with the increase in fatty acid absorption found in individuals with the p.55T allele, which therefore enhances TG-rich lipoprotein formation [11]. Moreover, the studied groups in which this association had been observed also exhibit low HDL-C, as found in our patients with hypoalphalipoproteinemia. LDLR c.1959C>T. The c.1959T allele was associated with low TG levels in HT patients and the hyperlipemic subgroup, in a multiple linear regression model (Table 4). In the general population of China, this allele has been associated with lower TC, LDL-C, and TG concentrations [12,13]. LIPC −514 C>T. One of the primary functions of hepatic lipase is to hydrolyze TG and phospholipids, and hepatic lipase is an important enzyme in HDL-C metabolism [14]. The group of HT patients with the −514T allele exhibited higher SBP values compared with those carrying the −514C allele. To our knowledge, this is the first study to analyze this polymorphism in HT patients and to assess its association with blood pressure. LPL p.S474X. The primary function of LPL is to hydrolyze TG from CM and VLDL-C. It has been shown that the p.474X allele increases LPL enzymatic activity; hence, it has been associated with decreased plasma TG and increased HDL-C [15]. In the present work, in the HT and HT hyperlipemic groups, the p.474X allele was associated with lower TC levels, as previously observed in a meta-analysis. The mutated allele has also been associated with lower TG and SBP levels as well as higher HDL-C levels [33]. Consistent with such results, the presence of the p.474X allele has been associated in previous reports with a lower risk of developing coronary heart disease [33] and hypertension (OR 0.78; 95% CI: 0.62-0.98, P = 0.03) [34]. MTTP −493 G>T. The MTTP enzyme is involved in the assembly and secretion of VLDL-C, which transports TG, cholesterol, and phospholipids [16]. In this study, the −493T allele was associated with increased SBP in the HT hyperlipemic subgroup, which has not been previously reported in the literature. However, in healthy males and hypercholesterolemic patients, the mutated allele has been associated with low TC [16] and TG levels [35], respectively. Conclusions We highlight three important conclusions. (i) The APOB p.E4181K polymorphism (under the dominant model) is associated with an increased risk for hypertension. (ii) Three polymorphisms were found to be associated with systolic blood pressure levels in HT patients. Increased SBP was associated with the FABP2 p.55T, LIPC −514T, and MTTP −493T alleles.
(iii) Modifications of the four different lipids studied herein were observed to be correlated with certain polymorphisms. Total cholesterol is decreased in subjects with the LPL p.474X allele; LDL-C is decreased with FABP2 p.55T; HDL-C is increased with APOA5 p.19W; and triglycerides are increased with APOA5 p.19W and FABP2 p.55T but decreased with APOB p.4181K and LDLR c.1959T. These findings indicate that polymorphisms of lipid metabolism genes modify systolic blood pressure and lipid levels and may be important for the development of essential hypertension and dyslipidemia in Mexican HT patients.
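As an illustration of the dominant-model risk estimate highlighted in conclusion (i), the sketch below computes an odds ratio with a Woolf (log) 95% confidence interval from a 2x2 carrier table; the carrier counts are hypothetical, chosen only so that the result falls near the reported OR of 1.85, and are not the study's data.

```python
# Rough sketch: odds ratio for hypertension in carriers vs non-carriers of a risk
# allele (dominant model), with a Woolf 95% CI. Counts are invented for illustration.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b: carrier/non-carrier cases; c, d: carrier/non-carrier controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(a=67, b=93, c=45, d=115)  # hypothetical carrier counts
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```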
2016-05-18T10:37:47.167Z
2014-12-21T00:00:00.000
{ "year": 2014, "sha1": "ff59648b670f5a608f704d22bda757717726537a", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/dm/2014/150358.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f79f6ee5fa5c8a8a2154563f3dbadd7a444d17c4", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
229550868
pes2o/s2orc
v3-fos-license
Complications Developing in Intensive Care Patients Receiving Enteral Feeding and Nursing Interventions Materials and Methods: The study was designed as a descriptive study model. The study sample included 52 patients who were fed enterally during treatment. A three-part data collection form was used for the collection of study data. The first section was aimed at garnering patient information, the second section gathered data on the enteral feeding method and any complications that developed, and the third section detailed the nursing interventions applied in the unit. INTRODUCTION In cases where oral feeding is not possible, enteral feeding is preferred in patients treated in intensive care units (1). Current manuals recommend that enteral feeding should be initiated within 24-48 hours of unsatisfactory oral food intake in patients referred to the intensive care unit (2). There are many complications associated with enteral feeding that can lead to an interruption of feeding (3), including gastrointestinal disorders, such as diarrhoea, nausea and vomiting, mechanical problems of aspiration or tube blockage, problems associated with fluid-electrolyte balance and metabolic complications such as hyperglycaemia (4,5). The main gastrointestinal complications, known collectively as intolerances, are abdominal distension, vomiting, diarrhoea and an increase in gastric residual volume (GRV), all of which can lead to an interruption in enteral feeding (3,6). Intolerances in this sense tend to emerge three days after the start of enteral feeding, leading to nutritional deficiencies (6). A study involving patients being fed enterally found that intolerance developed in 33% of the total, and at significantly higher rates in patients treated in the intensive care unit (7). A study investigating digestive system dysfunctions, risk factors and complications in intensive care patients fed by way of enteral nutrition showed that diarrhoea and vomiting developed in 26% and 19% of the patients, respectively (8). In a similar study, Pancorbo-Hidalgo and colleagues (9) reported that besides diarrhoea (32.8%) and vomiting (20.4%), there were other complications connected to enteral feeding, such as dislocation of tubes (48.5%) and aspiration issues (3.1%). Another study examining drug-food interactions in intensive care patients receiving enteral feeding noted the development of hyperkalaemia (12%), hypernatremia (8%) and hyperglycaemia (32%) in the study sample (10). There have been several studies investigating the complications that develop in connection with enteral feeding in intensive care patients (3,6,8,9), although there have been only limited studies focusing specifically on nursing interventions aimed at preventing and managing such complications (11,12). This study examines complications in intensive care patients undergoing enteral feeding, and provides insight into the nursing interventions that are applied to prevent such complications. The study further identifies the strengths and weaknesses in nursing interventions, providing data for inclusion in current manuals and contributing to randomised controlled studies. Design The study was designed as a descriptive study model and conducted between April 2016 and April 2017. Participants This study was conducted in the adult medical intensive care unit of a university hospital in Turkey and involved 52 patients being fed enterally.
Data Collection The data was collected using a three-section questionnaire developed by the researchers based on the findings of previous studies (3,7,13,14,15). The first section garnered data about the patients, such as age, gender, diseases and date of hospitalization; the second section documented the enteral feeding methods and the complications that were likely to develop; and the third section detailed the nursing interventions applied in the unit. The forms were filled in by the researchers who visited the patients between the hours of 08:00 and 16:00, during which patient records were examined and the abdominal circumference of patients was measured. As gastrointestinal symptoms arising from enteral feeding can emerge within a few hours, or in some cases, 72 hours after the onset of feeding, the patients included in the study were monitored for at least three days, and at most, seven days. Ethical Considerations Ethical approval was obtained from the Ethical Committee for Clinical Studies of a university, and prior to starting the study, written approval was obtained from the Office of the Head Physician of the hospital and also from the participating patients and their relatives. Data Analysis Statistical analyses were performed using the IBM SPSS (Statistical Package for Social Sciences) statistical program, version 21.0. Continuous variables were stated as mean ± standard deviation (SD), and categorical variables as number (n) and percentage (%). Chi-square, Fisher's exact chi-square and Friedman variance analyses were made of the associative groups. The significance level of the statistical comparison tests was p<0.05. RESULTS The mean age of the patients, of whom 59.6% were male, was 68.08±13.3, and 44.4% were hospitalized for a disorder arising from the respiratory tract. Table 1 below shows that while 57.8% of the patients had a Glasgow coma score (GCS) of between 8 and 12 (precoma), 25% had a Ramsey sedation scale value of 4 (distinct response to a light touch of the glabella or loud audio stimulus). The findings concerning the feeding of the patients indicate that 88.4% were fed through postpyloric feeding, with an average feeding speed of 46.76±7.24 ml/h and an average daily calorie amount of 1086.61±426.65. Only 38.5% of the patients monitored during the administration of the study continued to receive enteral feeding for seven days (Table 1). The complications that developed in the patients were categorized into those arising from the gastrointestinal system, metabolic complications and mechanical complications. Among the complications resulting from the gastrointestinal system, the patients most frequently experienced diarrhoea (44.2%) and abdominal distension (28.8%). Vomiting, on the other hand, was the complication that was least frequently experienced among these patients (1.9%), and no aspiration was reported. The abdominal circumference of the patients was measured throughout the monitoring period, and was found to increase in parallel with the duration of the feeding time in days (χ2 = 22.108, p = 0.001). Metabolic complications included hypoalbuminemia. Hyperglycaemia was diagnosed in 46.2% of the patients being fed enterally, and the most frequent mechanical complication was the dislocation of the tube (7.7%) (Table 2).
The study identified a significant relationship between the age and Ramsey scores of the patients, and complications related to the gastro-intestinal system (GIS) (p<0.05), indicating that patients over the age of 65 years and with a Ramsey score of over 3 are at greater risk of GIS complication development. No significant relationship was noted between gender, body mass index (BMI) and Glasgow Coma Scale (GCS) values, and GIS, metabolic and mechanical complications. No significant difference was identified between the location of the feeding tube and the probability of development of complications (Table 3). The interventions made by the nurses included the monitoring of gastric residual volume, vital indicators, amount of fluid intake and discharge, dehydration indicators and electrolyte values, as well as the replacement of feeding bags once every 24 hours, monitoring of the blood glucose levels and regulation of the head of bed at 30-45 degrees for all patients. Other interventions included electrolyte replacement and increased feeding speed (77%), interruption of feeding (63.5%) and regulation of insulin dosages (52%) (Table 4). DISCUSSION Numerous interventions that fall under the responsibility of intensive care nurses are undertaken to prevent enteral feeding-related complications. The results of the present study indicate that it is gastrointestinal and metabolic complications that generally occur in association with enteral feeding. In the present study, the most frequent gastrointestinal complications seen in the patients were diarrhoea (44.2%) and abdominal distension (28.8%). Diarrhoea is the most prevalent complication that occurs in association with enteral feeding (4). Prieto and colleagues (16) report that its incidence rate varies between 5 and 64%. Diarrhoea seen in enterally fed patients may develop in relation to different variables, such as current diseases, current medication, the content of the nutritional solutions, hydration level, the patient's condition in respect of mobilisation, in-bed exercises or passive exercises, nutrition, intestinal peristalsis and age. In our study, during the monitoring period, abdominal distension occurred in 28.8% of the patients. The distension of the abdomen increased in parallel with the length of the feeding period (χ2 = 22.108, p = 0.001). Another study performed with patients receiving enteral feeding in an intensive care unit found that 14.4% of the patients had the complication of a distended abdomen (17). Abdominal distension can develop for various reasons, such as the density and temperature of the feeding solution delivered, its fat content and the drugs used by the patient. It is, however, of essential importance to deliver the nutrition at an appropriate speed to prevent distension (14). Another problem seen in 30-51% of patients fed enterally is an elevated gastric residual volume (GRV). In our study, 13.4% of the patients had a higher GRV. A study investigating the effects of abdominal massage on patients fed enterally at intervals demonstrated that while 8% of the patients to whom abdominal massage was applied had high GRV, the same rate was found to be 34% in patients not treated with abdominal massage (3).
The patients in the present study had a lower GRV than those in previous studies, which can be attributed to the interventional procedures adopted in the intensive care unit in the present study, where enteral feeding is started at a lower speed in the initial phase and accelerated in accordance with the GRV values recorded over time, with feeding interrupted as necessary. The laboratory results of metabolic complication tests revealed that almost all of the patients had an electrolyte imbalance and hypoalbuminemia. Hypercatabolism developing in intensive care patients may lead to changes in their laboratory values (18). The electrolyte and albumin imbalances observed in the present study can therefore not be entirely attributed to enteral feeding. In addition, more than half of the patients (66.2%) had hyperglycaemia, and to bring hyperglycaemia under control, it is important to monitor patients' calorie intake, to re-structure their insulin doses, to monitor blood glucose levels and to deliver the appropriate nutritional solutions (19). In the present study, the blood sugar level of all patients was monitored, and insulin treatment was administered when necessary under the direction of the responsible physician. Previous studies have shown that the mechanical complications that occur in 2-10% of patients during enteral feeding include pulmonary aspiration, dislocation or obstruction of the feeding tube or placement in the trachea, and perforation or intestinal obstruction (14). In the present study, tube dislocation occurred in 7.7% of the patients, which can be attributed to factors such as the patient's state of consciousness and the fixing and maintenance of the tube, mirroring the findings of previous research. The study identified a significant difference between patients over 65 years of age and those with a sedation scale score of over 3, and the probable development of a GIS complication (p<0.05). In contrast, Metin and Özdemir (14) demonstrated that no significant difference existed between the age of patients and the probable development of complications resulting from enteral feeding. Pinilla and colleagues (20) showed that the demographic features of patients had no effect on incidences of complications arising from enteral feeding. Old age brings some changes to the gastrointestinal system, such as a decline in gastric acid secretion, a slowing of intestinal movements and a slower food passage through the intestines (21). The results of the present study suggest in this respect that the patients over the age of 65 years suffered more GIT, metabolic and mechanical complications. At the same time, a Ramsey score of over 3 suggests that patients show a weaker response to stimuli and are not fully conscious, which is a condition that may increase the risk of the development of a GIT complication, particularly aspiration. The knowledge of enteral feeding practices among nurses, and the prevention of complications through appropriate nursing interventions undertaken on the basis of current evidence, are aimed at lowering hospitalization times and improving the quality of life of the patient (22). Designing a bedside protocol, which is a practice undertaken in recent years by nurses to support the enteral feeding process, is also recommended. Through the application of such protocols, feeding can be initiated relatively faster, leading, consequently, to a higher calorie intake, lower infection rates, shorter hospitalization times and lower mortality rates.
Such protocols come with another positive effect, in that they motivate nurses to take an active role in the care of patients being fed enterally (23). In the present study, the nurses made the following interventions in all the patients: monitoring of GRV, monitoring fluid intake and discharge, replacement of feeding bags once every 24 hours, interruptions to feeding, reductions in feeding speeds, monitoring defecation frequency and form, monitoring blood glucose and electrolyte balance, and ensuring the level of the head of bed is always set at 30-45 degrees. Koçhan and Akın (22) found that nurses had a medium level of knowledge about enteral feeding practices, and that they needed to be supported during the assessment of GRV. A higher GRV delays gastric discharge and increases the risk of intolerance, regurgitation and aspiration, and this has led previous research to recommend that nurses monitor abdominal distension, evaluate gastrointestinal function by listening to bowel sounds and control GRV levels so as to reduce the risk and intensity of aspiration (24). However, in publications on enteral feeding that have appeared in recent years, several studies have reported that monitoring GRV cannot be used to identify enteral feeding intolerance and have shown that no relationship exists between GRV and gastric discharge. Nowadays, GRV monitoring is being suggested as an inappropriate means of determining GIT intolerance (23). Another result of the present study was that none of the patients received abdominal massages and no in-bed exercises were conducted. There have been many studies reporting the use of abdominal massage to improve digestive functions (25,26,27). Abdominal massage reduces vomiting and distension, and improves defecation patterns (26). A study undertaken by Momenfar and colleagues (28) involving intensive care patients and investigating the effect of abdominal massage on GRV concluded that GRV levels were reduced. A study carried out by Uysal (3) involving patients being fed enterally at intervals also found that abdominal massage reduced GRV, vomiting and abdominal distension. Early exercise/mobilization plays an important role in intensive care units due to the positive effect on general health. In recent years there have been several studies investigating the effects of early-stage exercise/mobilization in intensive care patients (29,30). In a study of intensive care patients with reduced intestinal motility who had undergone cardiovascular surgery, it was shown that passive exercises applied to the lower extremities and body increased the volume of bowel sounds (31). The fact that the nurses participating in the present study still routinely monitor GRV, but do not undertake such interventions as abdominal massage or passive exercises to regulate GIS functions, suggests that the intensive care unit involved in the study does not yet sufficiently draw upon current practices. A study performed by Al-Hawaly and colleagues (32) demonstrated that while the majority of nurses (71.1%) had sufficient knowledge of feeding management, 62.2% were unqualified with respect to the practices of feeding management. In a study examining the files of patients fed through a tube, Mula and colleagues (33) found that many nursing interventions regarding feeding through a tube were not recorded; the interventions recorded were feeding regime (55.1%), monitoring the bowel sounds (19.2%), monitoring the fluid balance (52.6%) and monitoring the complications (15.4%).
Contrary to these results, the nurses involved in our study undertook many routine interventions intended to diagnose complications early and to prevent them. The interventions that were not undertaken were, on the other hand, those that the nurses could carry out independently, such as evaluation of distension, monitoring of bowel sounds, abdominal massage and in-bed exercises. We believe that results based primarily on the records of practical work provide no information about the level of knowledge of the nurses. CONCLUSION The study concluded that gastrointestinal and metabolic complications are highly prevalent in patients being fed enterally. To prevent these complications and to monitor them when they develop, the nurses generally carried out interventions in accordance with the literature, but did not apply current nonpharmacological interventions such as abdominal massage and exercise. Enteral feeding is an important therapeutic approach aimed at improving the outcome of patients in the intensive care unit, and nurses play a key role in achieving the feeding objectives set for the patients. The findings of the present study may contribute to the design of protocols and manuals and to the planning of in-service training for nurses, supporting them in the prevention of complications in patients fed enterally and in the management of complications when they do emerge.
2020-12-27T18:34:20.650Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "c9cae64a2c33a57a4db5b0ea6b1cb927267828b3", "oa_license": null, "oa_url": "https://doi.org/10.37678/dcybd.2020.2498", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "c9cae64a2c33a57a4db5b0ea6b1cb927267828b3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
234410375
pes2o/s2orc
v3-fos-license
COMBINATION OF TAGUCHI METHOD, MOORA AND COPRAS TECHNIQUES IN MULTI-OBJECTIVE OPTIMIZATION OF SURFACE GRINDING PROCESS This study presents a method combining the Taguchi method with several optimization techniques to solve the multi-objective optimization problem for the surface grinding process of SKD11 steel. The optimization techniques used in this study were Multi-Objective Optimization on the basis of Ratio Analysis (MOORA) and Complex Proportional Assessment (COPRAS). In the surface grinding process, the two parameters chosen as the evaluation criteria were surface roughness (Ra) and material removal rate (MRR). The orthogonal Taguchi L16 matrix was chosen to design the experimental matrix with two input parameters, namely workpiece velocity and depth of cut. The two optimization techniques mentioned above were applied to solve the multi-objective optimization problem in the grinding process. Using these two techniques, the optimized results of the cutting parameters were the same. The optimal workpiece velocity and cutting depth were 20 m/min and 0.02 mm. Corresponding to these optimal values of the workpiece velocity and cutting depth, the surface roughness and material removal rate were 1.16 µm and 86.67 mm3/s. These proposed techniques and method can be used to improve the quality and effectiveness of grinding processes by reducing the surface roughness and increasing the material removal rate. INTRODUCTION Among machining methods, grinding is the most common method for machining surfaces that require high precision and a high surface finish. The efficiency of the grinding process is evaluated through many parameters such as surface roughness, material removal rate (MRR), cutting forces, cutting heat, system vibrations, etc. Many studies have been carried out to determine the optimum values of the machining parameters to achieve one or more objectives. Mahajan et al. [1] determined the optimal values of wheel grit size, grinding wheel speed, feed rate per revolution, and depth of cut in the surface grinding process of D2 steel. The Taguchi method was applied to design the experiments and to optimize the machined surface roughness and material removal rate. To obtain the minimum value of surface roughness, the optimal values of wheel grit size, grinding wheel speed, feed rate per revolution, and depth of cut were 46 (mesh), 2300 (rev/min), 0.834 (mm/rev), and 0.05 (mm), respectively. Besides, to obtain the maximum value of MRR, the optimal values of wheel grit size, grinding wheel speed, feed rate per revolution, and depth of cut were 36 (mesh), 1650 (rev/min), 0.834 (mm/rev), and 0.075 (mm), respectively. However, this method did not yield a single set of input parameter values that simultaneously ensures the minimum value of surface roughness and the maximum value of MRR. Rai et al. [2] used the Taguchi method to optimize the surface grinding process of AISI 410 steel. The wheel grit size, the feed rate per revolution, and the depth of cut were chosen as the input parameters to design the experimental matrix. The aim of this study was to determine the input parameters that ensure the minimum values of the average surface roughness and the mean square of surface roughness. Using this method, the average surface roughness and the mean square of surface roughness both reach their minimum values when the wheel grit size, the feed rate per revolution, and the depth of cut are 54 (mesh), 0.5 (mm/rev), and 0.06 (mm), respectively. Atish et al.
[3] used the Taguchi method to optimize the surface grinding process of mild steel. The aim of this study was to determine values of the depth of cut, the workpiece velocity, and the cross feed rate that achieve the minimum value of surface roughness and the maximum value of MRR. This study found that to obtain the minimum surface roughness, the depth of cut, the workpiece velocity, and the cross feed rate were 0.1 (mm), 20 (strokes/min), and 10 (strokes/min), respectively. To obtain the maximum MRR, the depth of cut, the workpiece velocity, and the cross feed rate were 0.1 (mm), 30 (strokes/min), and 30 (strokes/min), respectively. Aravind et al. [4] combined the Taguchi method and the response surface method (RSM) to optimize the surface grinding process of AISI 1035 steel. The wheel grain size, the grinding wheel speed, the depth of cut, and the feed rate were selected as the input parameters to design the experimental matrix. This study showed that to obtain the minimum values of both Ra and Rz, the wheel grain size, the grinding wheel speed, depth of cut, and feed rate were 54 (mesh), 0.05 (mm), and 0.45 (mm/stroke), respectively. Hamid Reza FAZLI SHAHRI et al. [5] combined the Taguchi method and regression analysis to optimize the surface grinding process of AISI 1045 steel. This study showed that to achieve the maximum machined surface hardness, the grinding wheel must be finely dressed, and the optimal values of depth of cut, cutting velocity, workpiece velocity, and cross feed were 0.03 (mm), 32 (m/s), 10 (m/min), and 5 (mm/rev), respectively. Prashant et al. [6] combined the Taguchi method and the grey relational analysis (GRA) technique to optimize the surface grinding process of EN8 steel. The parameters selected as the input parameters were depth of cut, type of lubricant, feed rate, grinding wheel speed, coolant flow rate, and nanoparticle size. This study showed that to obtain the minimum surface roughness, the lubricant was water containing CuO particles with a grain size of 100 (nm), a concentration of 2%, and a flow of 5 (ml/min), and the depth of cut, feed rate, and grinding wheel speed were 5 (µm), 2000 (mm/min), and 35 (m/s), respectively. Luu Anh Tung et al. [7] also combined the Taguchi method and GRA to determine the optimal values of the grinding wheel dressing parameters in the grinding process of 9CrSi tool steel. The purpose of this study was to ensure the minimum value of machined surface roughness and the minimum value of the flatness tolerance. In this study, the optimal values of the dressing parameters were determined as follows: the coarse dressing depth was 0.025 (mm), the number of coarse dressing passes was 3, the fine dressing depth was 0.005 (mm), the number of fine dressing passes was 2, and the dressing feed rate was 1.6 (m/min). In another study, the multi-objective optimization of the surface grinding process of 90CrSi tool steel was also performed by Nguyen Thi Hong et al. [8]. This study aimed to determine the dressing parameters that simultaneously ensure the minimum values of surface roughness and tangential cutting force, and the maximum value of tool life. In this study, the Taguchi method and GRA were combined to determine the optimal values of the dressing parameters as follows: the coarse dressing depth was 0.015 (mm), the number of coarse dressing passes was 2, the fine dressing depth was 0.005 (mm), the number of non-feeding dressing passes was 3, and the dressing feed rate was 1.6 (m/min).
The combination of the Taguchi method and GRA was also applied to solve the multi-objective optimization problem in the surface grinding process of AISI D2 steel [9]. In this study, three different coolant conditions were applied in the surface grinding process: dry, flood cooling, and minimum quantity lubrication (MQL) conditions. This study showed that to simultaneously ensure the minimum values of machined surface roughness, cutting heat, and normal cutting force, the grinding process should be performed with a cutting depth of 15 (µm), a workpiece velocity of 3 (m/min), a cutting velocity of 25 (m/s), and an MQL flow rate of 250 (mL/h). Prashant J. Patil et al. [10] also combined the Taguchi method and GRA to solve the multi-objective optimization problem in the surface grinding process of EN-24 steel under MQL conditions. The input parameters selected in this study included the nanoparticles in the lubricating solution (Al2O3, CuO, water), the concentration of particles, particle size, flow rate of the solution, depth of cut, feed rate, and cutting velocity. The obtained results showed that to simultaneously achieve the minimum values of normal cutting force, tangential cutting force, and cutting heat, the grinding process must be performed with a lubricating solution using CuO nanoparticles with a concentration of 2%, a nanoparticle size of 100 (nm), and a coolant flow rate of 5 (ml/minute), and with a cutting depth of 5 (µm), a feed rate of 2000 (mm/min), and a grinding wheel speed of 35 (m/s). The combination of the Taguchi method and GRA to solve the multi-objective optimization problem in the grinding process of OCR12VM material was performed by Hendri Jumianto et al. [11]. This study showed that to ensure the minimum values of system vibrations, the cutting velocity, workpiece velocity, and cross feed rate were 3000 (rpm), 11 (mm/s), and 5 (mm/stroke), respectively. The above studies show that the Taguchi method has been successfully applied to the optimization of the surface grinding process in many specific cases. Among published studies, the cutting parameters are often chosen as the input parameters for the experimental process. This can be explained by the fact that these parameters are more easily adjusted by the operator during machining than other parameters such as the rigidity of the machine system, the vibrations transmitted into the system, etc. However, for each specific machining material, the optimal values of the cutting parameters are different. So, for each machining material and each specific machining condition, the experimental and optimization studies must be performed under specific conditions. Besides, when only the Taguchi method is used, only one evaluation criterion can be optimized (single-objective optimization); to solve the multi-objective optimization problem, the Taguchi method must be combined with other methods or techniques. MOORA and COPRAS are two well-known optimization methods that have been applied in different research fields. Gadakh [12] combined the Taguchi and MOORA techniques to optimize the cutting parameters of the milling process. The purpose of this study was to determine the optimal values of spindle speed, feed rate per flute, tool diameter, tool nose radius, and machining time to ensure the minimum tool wear and the maximum MRR. Mesran et al. [13] applied the MOORA technique to investigate the division of students into classes when entering universities.
This study proposed the best method for dividing students into classes based on the parameters of each student (UN Average Score, Psychotest Value, IPA Value, Mathematics Value, Interview Value). Nguyen et al. [14] used MOORA to optimize powder-mixed electrical discharge machining (PMEDM). The aim of this study was to determine the optimal values of the workpiece material, tool material, polarity, peak current, pulse-on-time, pulse-off-time, and titanium powder concentration to ensure the minimum values of surface roughness and electrode wear. Tran et al. [15] applied the MOORA and COPRAS techniques to determine the optimal values of the materials (straw, corn cobs, sawdust, rice bran, and CaCO3) for growing mushrooms, and so on. However, up to now, there have not been any published studies on the application of these two techniques to solve the multi-objective optimization problem in machining processes in general or in surface grinding processes in particular. Surface roughness has a significant influence on the workability and life of the product, while MRR reflects machining productivity, energy consumption and grinding wheel consumption, so the efficiency of the grinding process can also be evaluated through this parameter. Therefore, these two parameters are often chosen as indicators for evaluating the efficiency of grinding processes in general and the surface grinding process in particular. In this study, SKD11 steel was ground on a surface grinder. The experiments were designed according to the Taguchi method and included 16 experiments, in which the workpiece velocity and cutting depth were selected as the input parameters for each experiment. The surface roughness and MRR were chosen as the two output parameters. The two techniques MOORA and COPRAS were applied to solve the multi-objective optimization problem. The results showed that these techniques determined the same set of values of workpiece velocity and depth of cut that ensure the minimum value of surface roughness and the maximum value of MRR. The two techniques MOORA and COPRAS were not only successfully applied in solving the multi-objective optimization problem of the surface grinding process in this study, but also open up a very promising research direction in the multi-objective optimization of other machining processes. Experimental system The experiments were conducted on the APSG-820/2A surface grinding machine. The aluminum oxide grinding wheel APSG-820/2A was used in this study. The workpiece material used in the experimental process was heat-treated SKD11 steel with a hardness of 60 HRC. The length, width, and height of the workpiece were 80 (mm), 40 (mm), and 10 (mm), respectively. The SJ-301 surface roughness tester (Japan) was used to measure the surface roughness of the machined parts. In each experiment, the surface roughness was measured at least three consecutive times. The average value of surface roughness was used for the evaluation and analysis process. The material removal rate MRR was calculated by the equation MRR = v_w × b × t (mm3/s), where v_w, b, and t are the workpiece velocity (mm/s), the width of the grinding wheel (mm), and the depth of cut (mm), respectively. Experimental design The Taguchi method was applied to design the experimental matrix.
The cutting parameters chosen as the input parameters were the workpiece velocity and the depth of cut. The orthogonal L16 matrix with 16 experiments was used and is listed in Table 1. Grinding conditions The experiments were conducted with the controllable factors in Table 1 and with the following grinding conditions: - The cutting velocity: 26 m/s. - The dressing depth of cut: 0.01 mm. Multiple-Criteria Decision Making (MCDM) Multiple-criteria decision making (MCDM) can be used to select the best solution from the solutions A = {A_1, A_2, …, A_m} based on the criteria C = {C_1, C_2, …, C_n}. In this study, for both the MOORA and COPRAS techniques, the weights were calculated using the entropy measure, because this method provides high accuracy. The steps of the weight calculation process are performed as follows [17, 18]: Step 1: Calculate the values p_ij with i = 1, 2, …, m and j = 1, 2, …, n using Eq. (1). Step 2: Calculate the entropy measure e_j of each criterion C_j with j = 1, 2, …, n by Eq. (2). Step 3: Calculate the weight w_j of each criterion C_j with j = 1, 2, …, n by Eq. (3). MOORA technique This multi-objective optimization technique can be successfully applied to solve complex decision problems in the production environment with conflicting objectives. The MOORA technique includes the following steps: Step 1: Calculate the values p_ij with i = 1, 2, …, m and j = 1, 2, …, n using Eq. (1). Step 3: Calculate the weight w_j of each criterion C_j with j = 1, 2, …, n by Eq. (3), where B and NB are the set of beneficial criteria and the set of non-beneficial criteria, with i = 1, 2, …, m. COPRAS technique The COPRAS technique was introduced by Zavadskas et al. Step 2: Calculate the entropy measure e_j of each criterion C_j with j = 1, 2, …, n by Eq. (2). Step 3: Calculate the weight w_j of each criterion C_j with j = 1, 2, …, n by Eq. (3). Step 8: Rank the solutions, A_k > A_i if Q_k < Q_i, with i, k = 1, 2, …, m. Experimental results The experiments were conducted according to the experimental matrix in Table 1. The experimental results are listed in Table 2. To facilitate the use of mathematical symbols when optimizing according to the MOORA and COPRAS techniques, the surface roughness criterion and the MRR criterion were designated C1 and C2, as presented in Table 3. The optimized results using the MOORA technique From the data in Table 3, the MOORA technique was used to calculate the values as follows: Step 1: Calculate the values p_ij by Eq. (1). The calculated results are listed in Table 4. Step 2: Calculate the values e_j by Eq. (2). The calculated results are listed in Table 5. Step 3: Calculate the values w_j by Eq. (3). The calculated results are also listed in Table 5. Step 4: Calculate the matrix X = [X_ij] of size m×n by Eq. (4). The calculated results are listed in Table 6. Step 5: Calculate the matrix W by Eq. (5). The calculated results are listed in Table 7. Step 6: Calculate the values P_i and R_i by Eq. (6) and Eq. (7). The calculated results are listed in Table 8. Step 7: Calculate the values Q_i by Eq. (8). The calculated results are also listed in Table 8. The calculated results in Table 8 show that solution A16 was the best of the 16 solutions. Considering only the surface roughness criterion or only the MRR criterion, A16 is not the best solution (Table 2). However, when the two parameters of surface roughness and MRR are considered at the same time, this solution is the best solution.
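Because the numbered equations, Eqs. (1)-(8), referenced in the steps above are not reproduced in this excerpt, the following sketch assumes the standard entropy-weight, MOORA and COPRAS formulations from the MCDM literature rather than the paper's exact expressions; the decision matrix used here is purely illustrative and is not the data of Table 3. With surface roughness Ra treated as a non-beneficial criterion and MRR as a beneficial criterion, the ranking pipeline can be prototyped as follows.

```python
import numpy as np

def entropy_weights(X):
    """Standard entropy weighting: p_ij = x_ij / sum_i x_ij, e_j = -(1/ln m) sum_i p_ij ln p_ij,
    w_j = (1 - e_j) / sum_j (1 - e_j)."""
    P = X / X.sum(axis=0)
    m = X.shape[0]
    E = -(P * np.log(P)).sum(axis=0) / np.log(m)
    d = 1.0 - E
    return d / d.sum()

def moora_scores(X, weights, benefit):
    """MOORA: vector normalization, weighting, then score = sum(beneficial) - sum(non-beneficial)."""
    N = X / np.sqrt((X ** 2).sum(axis=0))
    V = N * weights
    return V[:, benefit].sum(axis=1) - V[:, ~benefit].sum(axis=1)   # larger is better

def copras_scores(X, weights, benefit):
    """COPRAS: sum normalization, weighting, S+ and S-, then relative significance Q_i."""
    N = X / X.sum(axis=0)
    V = N * weights
    S_plus = V[:, benefit].sum(axis=1)
    S_minus = V[:, ~benefit].sum(axis=1)
    Q = S_plus + S_minus.min() * S_minus.sum() / (S_minus * (S_minus.min() / S_minus).sum())
    return Q                                                        # larger is better

# Illustrative decision matrix (NOT the paper's Table 3):
# rows = alternatives, columns = [Ra (um), MRR (mm^3/s)].
X = np.array([[0.95, 21.7],
              [1.05, 43.3],
              [1.10, 65.0],
              [1.16, 86.7]])
benefit = np.array([False, True])   # Ra: smaller is better; MRR: larger is better

w = entropy_weights(X)
print("entropy weights:", w)
print("MOORA ranking (best first):", np.argsort(-moora_scores(X, w, benefit)) + 1)
print("COPRAS ranking (best first):", np.argsort(-copras_scores(X, w, benefit)) + 1)
```

In the standard conventions used here, a larger MOORA score and a larger COPRAS relative significance Q_i indicate a better alternative; entropy weighting gives more weight to the criterion with the larger spread across the experiments, which is why MRR tends to dominate when Ra varies little.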
The optimized results using the COPRAS technique From the data in Table 3, the COPRAS technique was applied according to the steps in the section above. The results were calculated and are listed in Table 9, Table 10, and Table 11. The calculated results in Table 11 also show that solution A16 was the best of the 16 solutions. The ranking order of the solutions in Table 11 also coincided with the ranking order of the solutions in Table 8. Thus, in this case, the MOORA and COPRAS techniques gave a unified result when determining the optimal solution, which further confirms the correctness of the implemented methods. So, in the surface grinding process of SKD11 steel, to ensure the minimum value of surface roughness and the maximum value of material removal rate, the optimal values of the workpiece velocity and cutting depth were 20 m/min and 0.02 mm. CONCLUSIONS This study was performed using the Taguchi method and the MOORA and COPRAS techniques to solve the multi-objective optimization problem for the surface grinding process of SKD11 steel. The conclusions of this study are drawn as follows: • The Taguchi method and the MOORA and COPRAS techniques were successfully applied to solve the multi-objective optimization problem for the surface grinding process of SKD11 steel. Using these techniques, the optimized results of the cutting parameters were the same. • The optimal workpiece velocity and cutting depth were 20 m/min and 0.02 mm. Corresponding to these optimal values of the workpiece velocity and cutting depth, the surface roughness and material removal rate were 1.16 µm and 86.67 mm3/s. • These proposed techniques and method can be used to improve the quality and effectiveness of grinding processes by reducing the surface roughness and increasing the material removal rate.
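As a quick consistency check of the reported optimum, the MRR formula from the experimental-system section can be evaluated directly. The grinding wheel width is not stated in this excerpt, so a width of 13 mm is assumed below purely because it reproduces the reported 86.67 mm3/s; treat that value as an inference, not a datum from the paper.

```python
# Consistency check of MRR = v_w * b * t at the reported optimum.
v_w_mm_per_s = 20 * 1000 / 60      # 20 m/min -> 333.33 mm/s
b_mm = 13.0                        # assumed grinding wheel width (not stated in the excerpt)
t_mm = 0.02                        # optimal depth of cut
mrr = v_w_mm_per_s * b_mm * t_mm   # mm^3/s
print(f"MRR = {mrr:.2f} mm^3/s")   # ~86.67, matching the value reported for solution A16
```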
2020-12-31T09:11:09.108Z
2020-12-29T00:00:00.000
{ "year": 2020, "sha1": "0f59646399b8dad9c2a1227aa8e4872edbf2017e", "oa_license": "CCBY", "oa_url": "https://aseestant.ceon.rs/index.php/jaes/article/download/28702/16593", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "f92c988f62faf1e5bb6c938a0f4f5cc9f8f3ccca", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Computer Science" ] }
237606084
pes2o/s2orc
v3-fos-license
Synthetic Mesh Reconstruction of Chronic, Native Quadriceps Tendon Disruptions following Failed Primary Repair Case Two patients presented with chronic knee extensor mechanism disruption after failed primary repairs. Both patients had minimal ambulatory knee function prior to surgical intervention and were treated with a synthetic mesh reconstruction of their extensor mechanism. Our technique has been modified from previously described techniques used in revision knee arthroplasty. At the one-year follow-up, both patients had improvement in their active range of motion and had returned to their previous activity. Conclusion Synthetic mesh reconstruction of chronic extensor mechanism disruption is a viable technique that can be utilized as salvage for the persistently dysfunctional native knee. Introduction Quadriceps tendon (QT) ruptures occur most frequently in middle-aged males [1] and typically can be successfully treated with primary surgical repair [2,3]. While worse surgical outcomes are associated with delays to primary repair, the overall rate of repair failure or rerupture of acute injuries remains low (approximately 2%) [2,4,5]. In the acute setting, QT injuries are typically repaired with direct tissue apposition, transosseous tunnels, or suture anchors, depending on whether the injury occurs midsubstance or at the osseotendinous interface [2,4,5]. Treatment of chronic ruptures or reruptures of prior repairs represents a greater surgical challenge, with no clear gold standard for reconstruction. Described options for surgical reinforcement include the use of allograft [6] and autograft tissue [7][8][9]. We present two cases of chronic, reruptured QT injuries in native knees treated with synthetic mesh reconstruction. QT reconstruction using this technique, typically reserved for post total knee arthroplasty (TKA) knees, resulted in favorable outcomes in both patients at the final follow-up. Statement of Informed Consent Both patients signed informed consent permitting us to report on their deidentified cases. Case Presentation and Surgical Technique 2.1.1. Case 1. An 82-year-old male with a baseline function of daily jogging and a past medical history of chronic kidney disease presented with right knee pain and dysfunction. He had failed two attempts at primary quadriceps tendon repair, first with suture anchors 4 months prior to presentation and subsequently with transosseous tunnels one month later. At presentation, he had a palpable defect just proximal to the superior patellar pole and was unable to perform an active straight leg raise (MRI and X-ray shown in Figure 1). His passive ROM was 0-120°, and his active ROM was 70-120° (i.e., a 70° extensor lag). His Knee Society Score (KSS) was 35. The patient was able to ambulate with his knee locked in extension with a compensatory circumduction gait. At the 12-month follow-up after mesh reconstruction (described below), he was ambulating without assistive devices and had an active knee range of motion (ROM) of 5-120°. He had resumed light running activities and achieved a KSS of 73. Case 2. A 58-year-old male with a remote history significant for a left, traumatic above-knee amputation presented with right knee pain and dysfunction 1 year following primary QT repair complicated by a fall and rerupture post-operatively. He previously ambulated unassisted with a prosthesis. The patient was wheelchair-bound and had passive ROM of 0-120° and active ROM between 75 and 120°. His KSS was 31.
His preoperative X-ray was significant for patellar baja and no fracture (Figure 2). At the 12-month follow-up after extensor reconstruction (see below), he had returned to unassisted ambulation with his prosthesis. His passive ROM was preserved, his active ROM had improved to 10-120 degrees, and he achieved a KSS of 71. Surgical Technique. The patient is positioned supine on a regular surgical bed, and a tourniquet is used. A midline incision that incorporates or excises the prior surgical scar is made, extending from the tibial tubercle to the distal quadriceps. Atrophic tendon ends are debrided to healthy tissue, with large gaps expected (10-15 cm in our cases). The mesh (Covidien macroporous polypropylene mesh, 45 × 30 cm) is tubularized as previously described [10], measuring 2 cm × 30 cm. We employed two distal fixation techniques. In case 1, a subperiosteal tunnel was created over the anterior surface of the patella (Figure 3). Distally, the paratenon overlying the patellar tendon (PT) was incised and reflected. The mesh is then passed subperiosteally over the anterior patella and incorporated onto the PT with Krakow suture fixation (Figures 4(a) and 4(b)). The paratenon layer is then repaired over the mesh, similar to prior reports [11]. In case 2, the mesh captures the patella distally using a transverse tunnel through the PT, 1 cm distal to the inferior pole of the patella (Figure 5). The mesh is passed through the tunnel (Figure 6) in a loop fashion and sutured to each side of the quadriceps tendon proximal to the patella, using a Krakow suture technique (Figure 7). These techniques differ from previously described techniques in knee arthroplasty that relied on intraosseous graft fixation in the tibia for distal fixation [10]. Proximally, an intrasubstance, longitudinal tunnel is made in the remnant quadriceps tendon stump, and the mesh is secured (after reapposition and tensioning) to the tendon with a running Krakow suture technique (Figure 8). In all cases, the QT-mesh unit is tensioned tightly with the knee in full extension. Deep wound closure should ensure complete coverage of the synthetic mesh when possible. Proximally, this includes mobilization of the vastus medialis and lateralis myofascial units for coverage, as previously described [10]. Distally, the closure includes closure of the paratenon (case 1) or retinaculum (case 2). Postoperative Protocol. Postoperatively, patients are weightbearing as tolerated in a removable hinged knee brace locked in extension for three months, after which flexion limits are increased (via the brace) in 30° increments per 2-week interval. Upon achieving 90° of flexion, ROM is progressed to tolerance. It is important to avoid active or passive knee flexion for a prolonged time. Discussion QT ruptures exceed patella fractures and PT ruptures in the incidence of knee extensor mechanism disruption [12]. When not treated acutely or after failure of primary repair, QT tears become increasingly difficult to treat. In this series, we highlight two successful reconstructions in QT-deficient native knees utilizing a synthetic mesh augmentation for reconstruction. While many methods have been described to reconstruct irreparable QTs, our small series demonstrates a reliable alternative to soft tissue reconstructions. Classically, techniques such as the Codivilla or Scuderi advancement techniques are effective for reconstruction of smaller QT gaps than we encountered (10-15 cm) [13].
Despite the array of autograft, allograft, and synthetic surgical augmentations, outcomes remain suboptimal [14][15][16][17][18][19]. Unlike customizable synthetic grafts, auto- and allografts have unique risks that include graft-host mismatch [20], reliance on graft tissue quality, donor site morbidity (in the case of autograft), risk for delayed creep failure, allograft availability, and disease transmission. However, when possible, utilization of autograft represents the most cost-effective source for extensor mechanism reconstruction tissue. Given concerns over the risk-success profile of soft tissue graft reconstruction, we adapted a TKA reconstruction technique [10], applying a synthetic mesh augmentation for the reconstruction of chronic extensor mechanism disruptions with good outcomes [21][22][23][24]. This augmentation technique was previously modified to augment acute, native QT repairs with good success [25]. It has also been described to augment an allograft chronic QT reconstruction [13] but has never been described in isolation. Monofilament mesh is well-studied in general surgical hernia repairs [26] and functions by inciting a robust inflammatory fibrotic reaction that promotes host/graft integration [27,28]. A TKA retrieval study demonstrated similar histological findings [29]. The use of mesh in this technique is technically uncomplicated and affordable, and polypropylene mesh has favorable biomechanical properties [13,29]. As such, there is growing interest in its use in the traditionally tenuous reconstruction of relatively devitalized post-TKA extensor mechanism ruptures [10,21,30,31]. We selected this method for these two patients given their unique circumstances: chronicity, kidney disease in a high-functioning patient (case 1), and high-demand knee reliance in a contralateral amputee (case 2). Thus, both patients demanded a reliable method for recalcitrant chronic QT ruptures. As such, reconstruction in this setting likely represents an approximate worst-case scenario [22], and success would seem relatively promising for adaptation to similarly exacting pathology [24,32]. The success of this technique relies heavily on distal mesh fixation, with described techniques including screw and cement fixation in the tibial plateau [10,21,24] and suture fixation into the PT [11,23]. While most prior literature describes fixation in postarthroplasty knees, our technique demonstrates practicality in the native knee with chronic QT disruption. While the proximal fixation has limited flexibility, we demonstrate two successful distal fixation methods, including a self-retaining sling and direct tendon onlay through a subperiosteal tunnel. We believe that avoiding any knee flexion for a prolonged time period after surgery is critically important. This can be achieved with a cylinder cast [22] or with a knee immobilizer in a reliable patient. Conclusion Chronic QT tears that have failed primary repair have notoriously poor results. Multiple options exist, but synthetic mesh has emerged as an option for reconstruction in the arthroplasty patient. This report demonstrates its viability in the native knee and offers a technical description of two distal fixation methods. Longitudinal investigations should quantify and compare the efficacy of this novel technique; however, we hope it provides a usable alternative to graft reconstruction for chronic tendon injuries given our success here and in prior descriptions after TKA.
Conflicts of Interest The authors declare that there are no relevant conflicts of interest pertaining to this work.
2021-09-24T05:19:52.906Z
2021-09-15T00:00:00.000
{ "year": 2021, "sha1": "6dd44365c79bdd1138f8a4df79ba7328d41022eb", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1155/2021/5525319", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6dd44365c79bdd1138f8a4df79ba7328d41022eb", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
94175469
pes2o/s2orc
v3-fos-license
Global parameter optimization of Mather type plasma focus in the framework of the Gratton-Vargas two-dimensional snowplow model Dense Plasma Focus (DPF) is known to produce highly energetic ions, electrons and a plasma environment which can be used for breeding of short-lived isotopes, plasma nanotechnology and other material processing applications. Commercial utilization of DPF in such areas would need a design tool which can be deployed in an automatic search for the best possible device configuration for a given application. The recently revisited [S K H Auluck, Physics of Plasmas 20, 112501 (2013)] Gratton-Vargas (GV) two-dimensional analytical snowplow model of plasma focus provides a numerical formula for the dynamic inductance of a Mather type plasma focus fitted to thousands of automated computations, which enables construction of such a design tool. This inductance formula is utilized in the present work to explore global optimization, based on first-principles optimality criteria, in a 4-dimensional parameter-subspace of the zero-resistance GV model. The optimization process is shown to reproduce the empirically observed constancy of the drive parameter over 8 decades in capacitor bank energy. The optimized geometry of the plasma focus normalized to the anode radius is shown to be independent of voltage, while the optimized anode radius is shown to be related to the capacitor bank inductance. Introduction: Optimized design of dense plasma focus (DPF) devices has been a long-standing goal since the early days of DPF research [1]. Many empirical criteria and experimental optimization procedures have been suggested over the years [2][3][4][5][6][7] for choosing the device parameters for a given capacitor bank. Simplified numerical models [8][9][10][11][12][13][14][15] have been deployed in the quest for a well-optimized device. One of the observations [16][17][18][19][20][21][22] of this research is that the drive parameter, I/(a√p), where I is the peak current, a is the anode radius and p is the deuterium gas pressure, has a nearly constant value for devices experimentally optimized for neutron production. This implies [16][17][18][19] that many parameters such as axial and radial velocities, ion temperature, plasma energy density, Alfven velocity, and magnetic energy per unit mass have nearly identical values across a wide spectrum of neutron-optimized devices spanning 8 decades of capacitor bank energy. Many possible physical reasons have been proposed [1,19,20] for this circumstance. Designing a DPF device for a given capacitor bank, characterized by the four parameters - capacitance, inductance, resistance and voltage - involves choosing values for the following 6 parameters: anode radius, anode length, insulator radius, insulator length, cathode radius and gas pressure (or density). The cathode length is generally taken to be equal to the anode length; a somewhat different value is sometimes adopted for facilitating diagnostic access but usually that does not lead to a noticeable difference in the device properties. The empirically observed near-constancy of the drive parameter in neutron-optimized devices is suggested as a design criterion for determining the anode radius [21,22], using the additional empirical observation that most DPF devices work with a few millibar pressure of deuterium; the anode length and working pressure are then chosen so that the plasma arrives at the axis at the peak of current [22].
Recent realization of DPF [23] which operates at unusually high deuterium pressures of tens of millibar and which also reports a better-than-globalscaling neutron yield [23] raises the possibility that conventionally-designed neutron-optimized DPF devices may not represent a globally optimized ("best possible") DPF device. Resurgence of interest over the last 10 years [24] in commercially significant applications of DPF [25,26,27] indicates that the time has arrived for a deeper examination of the question of global optimization of DPF in the 6-dimensional parameter space, which is based on a transparent model of DPF operation devoid of unstated assumptions. Of particular importance are applications for plasma nanotechnology processes [25], where DPF acts as a provider of a unique plasma environment rather than of fusion neutrons. Breeding short-lived isotopes [28] for medical diagnostics is also of considerable commercial interest. These two applications use fundamentally different properties of the DPF: the former uses the intense power delivered by soft-x-rays (for lithography) or plasma and ions with few tens of eV temperature /kinetic energy (for coatings or surface treatment); the latter uses confined ions with hundreds of keV energy interacting with relatively dense target plasma. Industrialscale investment in development of DPF as a technology platform for commercial utilization of these phenomena would demand that the adopted design should be globally optimized ("best possible") for the intended application in order to avoid the risk of premature technical obsolescence, using an experimentally validated design tool and well-defined, first-principles optimization criteria, not excessively dependent on but compatible with empirical thumb rules, in an automated unbiased parameter search. The practical logic of industrial-scale investment has significant implications for scientific aspects of global optimization efforts. A purely scientific view of optimization would involve maximization of appropriately defined quantitative performance criteria subject to known technical constraints. A commercial view of optimization would include the possibility of overcoming some of the technical constraints through innovation, (such as new kinds of current generators, new ways of forming the initial plasma, new device geometries), which would have the effect of protecting the technical leadership of the investor through intellectual property rights. It would also involve strategic trade-offs, sacrificing a technically better option in favor of one that affords long term business advantages or which makes better commercial sense in the short term. This implies that techno-commercial optimality criteria themselves are undefined a priori; they are to be 'discovered' iteratively as part of the optimization project. Therefore, the optimization effort needs to be based on a non-judgmental tabulation of the behavior of a variety of optimality parameters in practically important regions of the parameter space followed by a process of discovery of the optimality conditions and of the optimal configuration itself. Recent re-appraisal [29] of the Gratton-Vargas (GV) two-dimensional analytical snowplow model of plasma focus evolution [30] has revealed opportunities for global parametric optimization of the Mather type plasma focus based on a numerical formula for its dynamic inductance, determined by fitting inductance data calculated from thousands of automated computations. 
Current profile data from contemporary DPF facilities can be fitted very well with the proposed modification [29] of the GV model to include circuit resistance, when gas fill pressure, static inductance and circuit resistance are treated as fitting parameters. This formulation enables calculation of certain quantities related to the current profile in a very short time, enabling automated tabulation of optimal properties of DPF configurations in the 6-dimensional parameter space. The present work seeks to initiate exploration of these opportunities. The next section recapitulates relevant results from the revised Resistive GV model [29]. Section 3 introduces the concept of similarity classes of the GV model and looks at their properties. Some issues involved with global parametric optimization are described in section 4. Examples of automated parameter space survey using an optimization algorithm are described in section 5. Section 6 presents a summary of the main results and conclusions. Salient features of the revised GV model: This section recapitulates nomenclature and salient features of the revised resistive GV model [29], in order to provide a condensed, self-contained background for the present discussion. The GV model is based on the snowplow hypothesis, which equates the magnetic pressure acting behind the plasma current sheath (PCS) with the 'wind pressure' experienced by the PCS driven into stationary neutral gas. This results in a partial differential equation (called GV equation [29]) for the propagation of the (azimuthally symmetric) PCS in two-dimensional (r,z) space as a function of time. The GV equation admits scaling to a dimensionless form, where coordinates (r,z) and linear dimensions are expressed in units of the anode radius 'a': r r a ≡ ɶ and z z a ≡ ɶ and time is replaced as an independent variable by the dimensionless variable ( ) , , can be determined [29] as a function of the independent variable τ for a Mather-type DPF, assumed to have a straight solid cylinder of radius 1 and length A z ɶ as anode, a straight cylinder of outer radius I r ɶ and length I z ɶ as insulator and a straight cylinder of inner radius C r ɶ and height A z ɶ as cathode, in terms of two characteristic values of τ : The current ( ) I t obeys the following circuit equation for a capacitor bank of capacitance C 0 , internal inductance L 0 and internal resistance R 0 charged to voltage V 0 : Introducing dimensionless quantities: For the case of zero capacitor bank resistance ( The case of non-zero circuit resistance is dealt with using the method of successive approximations using the smallness of the parameter εγ . The flux function ( ) Φ τ is treated as the limit of a sequence of The real time t corresponding to the independent variable τ is determined in terms of the short-circuit The partitioning of stored energy A numerical formula for the dynamic plasma inductance profile ( ) τ L , calculated from the PCS shape using 2, has been fitted [29] The three parameters k 1 , k 2 , k 3 are found to be independent of the scaled anode length A z ɶ , scaled insulator length I z ɶ and scaled insulator radius I r ɶ . 
They depend on the scaled cathode radius C r ɶ as The formula for the volume swept by the PCS up to R τ [30] in the notation of this paper is given It should be noted that the region over which formula 12 has been fitted defines the region for parametric search in this paper; however, it is in principle possible to obtain similar formulas applicable in other regions or indeed for other geometries such as the Filippov geometry [31], hypocycloidal pinch [32] and many still-to-be-discovered concepts. The GV model has been experimentally verified [33] and used by M. Milanese and co-workers [34][35][36][37] and H. Bruzzone and co-workers [38] to interpret their experimental results and generate insight into the dynamics of energy transfer during the rundown phase in the Dense Plasma Focus. A good (manual) fit of the resistive GV (RGV) model with an experimental current waveform has been demonstrated [29] earlier. A refinement of the fit can be obtained by minimizing the expression for the mean deviation The good quality of fit supports the idea that the resistive GV model proposed earlier [29] is an adequate representation of the gross energy transfer from capacitor bank to the plasma using nondimensional parameters in an analytical format. It can thus be used as a tool for examination of questions related to the optimization of energy transfer in device performance. Similarity classes in the GV model: A significant feature of the GV model is that the ten parameters describing a DPF facilitycapacitance C 0 , inductance L 0 , resistance R 0 , voltage V 0 , gas pressure p (or density 0 ρ ), anode radius a, anode length, insulator radius, insulator length and cathode radius -are mapped on to 7 independent dimensionless parameters of the RGV model: Each point in the 7-dimensional RGV parameter space then has three degrees of freedom in the space of all possible DPF facilities, which may be chosen to be capacitance C 0 , internal inductance L 0 and charging voltage V 0 . A similarity class in the GV model is defined as a set of all DPF facilities which have identical values for all dimensionless parameters of the GV model. All DPF facilities in the GV similarity class (GVSC) are characterized by identical dynamic plasma inductance profile as well as scaled current profile parametrically defined as . . However, they differ in terms of their quantitative performance as well as cost. One basic distinction between different members of the GVSC consists of the capacitor bank has been assumed to be identical for all members of the GVSC, later discussion of Section 4 reveals another characteristic impedance related to physics external to the GV model, whose ratio to 0 Z imparts a different behavior to each member of GVSC. Linear dimensions of the plasma, which scale with anode radius a, and its density, which scales with the gas density 0 ρ , are related to parameters of the capacitor bank and dimensionless parameters of the GV model as: The mass density of gas of molecular weight A at normal temperature is related to its pressure as The quantity CH 19 is a characteristic pressure scale for a gas of molecular weight A for a given capacitor bank. 
Empirical observations [17] that the pinch radius ~0.12a and pinch height ~0.8a can be used to estimate the scaling of mass content of the pinch zone (not taking into account plasma compression) as The scale of axial and radial velocities is given by This equation reveals a characteristic velocity scale, numerically related to the drive parameter by a multiplicative constant, which, for every capacitor bank, is proportional to its impedance. The scales of linear dimension and of plasma velocity define the scale of time for gross plasma motion: X-ray and neutron yields would then scale as the yield parameter Y , The intensity and spectral properties of the x-ray and neutron emission would depend on the details of the plasma processes in the pinch phase -an important aspect, which is beyond the scope of the GV model. Typically, the cost of a capacitor bank for the DPF facility should scale as its energy. One could define a cost-effectiveness parameter (CEP) in terms of 23: This indicates that within a GV similarity class (GVSC), it is far more cost effective to choose a lower inductance than to increase the voltage and capacitance as far as the yields of emissions from the DPF are concerned. Sometimes, yield may not be the only desired goal; one may want a single, short, intense pulse of neutrons as a diagnostic probe. From 22, the optimization goal may include a smaller value of ( ) R I ε τ ɶ and members of the GVSC with lower 0 0 L C may be preferred. An important caveat needs to be mentioned here. The GV model is a theory of the consequences of the snowplow hypothesis in the context of the DPF geometry: it is not a theory of any of the plasma phenomena. It neglects the details of the plasma formation process at the insulator [39,40] as well as of the processes [41] which lead to the approximate validity of the snowplow hypothesis. It also neglects plasma processes which govern the temperature and energetic-ion velocity distribution in the pinch phase. It is seen from 17 that every member of the similarity class of the GV model is associated with a characteristic scale of the fill density. For some values of the fill density, the plasma processes associated with the formation phase and /or leading to the snowplow phenomenon may not proceed in an optimal manner in the conventional design of a DPF. These are considerations which lie beyond the scope of the present formulation of the GV model and represent important subjects of research and innovation in their own right. 4. Quantitative performance criteria and optimal properties of GVSC: From the point of view of device optimization, the snowplow phenomenon represented in the GV model using a set of dimensionless parameters may be looked upon as just a mechanism for delivering energy stored in the capacitor bank to a dense plasma formation process at the end of the rundown phase of a Mather type DPF; this plasma formation process may proceed in somewhat different manner for devices having different shapes for the anode end-caps. The objective of a theoretical optimization procedure is to provide the initial values of parameters for an iterative empirical optimization campaign to assist its rapid convergence; theoretical optimization has no meaning other than as a prelude to such indispensable empirical optimization. 
Because of this, theoretical optimization needs to consider only the zero-resistance case; for sufficiently low values of the circuit resistance (γ<1), the zero-resistance optimum configuration should provide a good starting point for the iterative empirical optimization. In practice, the scaled radius of the insulator I r ɶ does not change much, and may be taken as nearly equal Within such bounded 5-D parameter space, a "region of practical interest" can be identified from accumulated worldwide research experience. The optimization problem involves choosing the snowplow device parameters to yield the 'best results' for a given capacitor bank. The very idea of optimization implies existence of opposing trends in desired quantitative performance criteria; the first step in global optimization of DPF must therefore be identification of desirable quantitative performance goals and of opposing tendencies. Study of optimal properties of GV model is facilitated by defining the following numbers [30], also bounded between 0 and 1: This yields the expressions One of the possible quantitative performance criteria is the fraction of energy converted into magnetic energy coupled with plasma inductance at the end of rundown phase [29]: In applications involving material modification, the total energy (magnetic energy + work done) coupled with plasma inductance may be more important than only the magnetic energy: It is easily seen that these performance criteria have a maximum with respect to R ετ , which involves the gas pressure, the anode radius, the charge on the capacitor bank and the scaled lengths of anode and insulator. But the maximum value asymptotically reaches unity as a function of κ, which represents the physical size of the DPF for a given capacitor bank. This is understandable, since as the size increases, the DPF inductance becomes the dominant inductance in the circuit and hence will acquire most of the capacitor energy at the peak of current in a resistance-less circuit. The 'penalty' for indiscriminate increase in physical size would be increase in the discharge time, which is not reflected in any of the expressions 25-28. This suggests that although the definition of 'best results' may differ from application to application, a common desirable optimal feature would be maximum conversion of stored capacitor bank energy into magnetic energy associated with plasma inductance in minimum time so that parasitic energy losses from radiation, heat conduction to electrodes and dissipation in circuit resistance are minimized. This criterion is applied to the end of the rundown phase because that is the dominant phase in the evolution of the Mather type plasma focus, because the anode top is many times configured to have a cavity or a hemispherical or conical profile, which is not taken into account in the formula 12 for the dynamic inductance and also because the GV model becomes less accurate near the pinch phase in view of its neglect of gas dynamics which plays a significant role at that stage. It was reported [29] earlier that for given values of A z ɶ , C r ɶ , I r ɶ and I z ɶ , the average power parameter (APP) P defined as It is reasonable to assume that a globally optimized DPF will be a member of the set of all DPF facilities which maximize the average power transferred to the plasma during the rundown phase. 
The problem of maximizing the average power parameter P is analogous to the maximum power transfer theorem of electrical circuit theory, which leads to an impedance matching condition between the power source and the electrical load. The existence of a maximum value of P as a function of κ similarly translates into a relation between the static inductance of the power source (capacitor bank) and the plasma inductance at the end of rundown. It leads to the conclusion that the physical size of the device which maximizes the average power transfer within a GVSC should be related to the static inductance of the circuit and should be independent of the voltage. This point is revisited in section V. It appears from 21 that the plasma velocities should unconditionally scale with the capacitor bank impedance [1]. However, there is an important consideration, first pointed out by Bruzzone, Kelly, Milanese and Pouzo (BKMP) [42] and subsequently elaborated in other publications [43,44,45,46], that the electromagnetic work done in accelerating the PCS must be adequate to heat, dissociate and ionize the entire mass of gas swept up. The energy available for this purpose is [43] onehalf of the electromagnetic work done; the remaining half is the kinetic energy of the plasma. This criterion is easily formulated in terms of the GV model using 9, 14, and 17 The quantity S E in 30 is the specific energy ( ) 1 24MJ gm . / necessary to heat (upto ~10 eV), dissociate [47] and ionize deuterium gas initially at room temperature; S , the 'specific energy parameter', is defined by It is interesting to note that 30 introduces a characteristic impedance associated with the deuterium gas Each capacitor bank is then characterized by an additional dimensionless number The criterion 23 can be stated as an equality in terms of a "BKMP factor" B f as , which is a physical property of the working gas, and dimensionless quantities related to the GV model and which does not additionally depend on any property of the capacitor bank including its energy. In the neighborhood of an optimal point (using any criterion) of the GV model, the quantities Q and S are seen to vary much more slowly than 0 Y , the reduced yield parameter. This probably explains the empirically observed [17] The BKMP criterion is a global formulation of the condition that the energy required to maintain the plasma state must come out of the work done by the electromagnetic force. It should in principle be possible to formulate this condition in terms of local values of parameters. However, such local formulation is beyond the scope of this paper, which revolves around the GV model; it forms the subject matter of a forthcoming paper (under review). Cardenas [7] has proposed maximization of magnetic energy per particle in the pinch phase as a design criterion. This can be formulated in the following manner. The fraction of energy converted into magnetic energy at the end of radial implosion phase can be estimated as The energy per particle is also seen to be related with the capacitor-bank-dependent characteristic velocity Relation 37 does not take into account plasma compression; the actual value of energy per particle would be smaller by the compression ratio. The value of this parameter works out to 66 eV for PF-1000 [29]. 
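For reference, the maximum power transfer theorem invoked as an analogy at the beginning of this section can be stated in its simplest, purely resistive form as below; this is standard circuit-theory material and serves only as an illustration of the matching idea, not as a result of the GV model itself.

```latex
% Maximum power transfer (resistive case): source V_s with internal resistance R_s,
% load R_L. Power delivered to the load:
\[
  P_L \;=\; I^2 R_L \;=\; \frac{V_s^2\,R_L}{\left(R_s + R_L\right)^2}.
\]
% Setting the derivative with respect to the load to zero gives the matching condition:
\[
  \frac{dP_L}{dR_L}
  \;=\; V_s^2\,\frac{(R_s+R_L)^2 - 2R_L\,(R_s+R_L)}{(R_s+R_L)^4}
  \;=\; V_s^2\,\frac{R_s - R_L}{(R_s+R_L)^3} \;=\; 0
  \quad\Longrightarrow\quad R_L = R_s .
\]
% By analogy, maximizing the average power parameter with respect to device size relates
% the plasma inductance at the end of rundown to the static inductance of the capacitor bank.
```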
Because of the progressive inaccuracy of the GV model near the pinch phase pointed out earlier, this cannot be used as a primary optimality criterion in the present version of the GV model; however, it may be used as a secondary optimality criterion to rank configurations shortlisted from a primary optimization scheme. The same consideration applies to the use of the pinch current ( ) I * τ ɶ at the limit of validity of the GV model as an optimality criterion. The above discussion illustrates the value of the idea of GV model similarity class. Using a comparative study of devices which have the same representation in GV model but differing physical parameters, one can look at deviations from predictions of the resistive GV model, which should contain signatures of phenomena which violate the conditions of proper plasma formation assumed in the resistive GV model. Understanding conditions of proper plasma formation is one of the major goals of plasma focus research and the concept of the GVSC proposed in this paper provides a tool to secure a major advance in that direction. Parameter space survey of GVSC: Optimization search algorithms are a research subject by themselves; however, application of such algorithms to the determination of a globally optimized DPF requires a prior determination of an adequate definition of optimality. This is difficult for the case of DPF because of a profusion of diverse end-applications and practical considerations of an industrial-scale investment already mentioned earlier. The next best option is to perform unbiased tabulation of optimality parameters in a uniformly discretized parameter sub-space as a permanent database, on which to perform optimization search tailored to specific end-applications. Different optimization queries on this database are expected to come up with different answers; the design of the query would sensitively depend on the type of application. This section therefore attempts to provide illustrative examples of two variations of optimization search and to highlight some counter-intuitive aspects of the optimization process. The formulation of GV model in terms of dimensionless parameters allows a universal determination of its optimal properties, valid for all members of a GVSC, justifying more extensive efforts than would be practical for optimization of DPF for a particular capacitor bank (such as PF-1000). This has taken the form of once-for-all generation of a database of properties associated with very large number of points in the 5-dimensional parameter sub-space of the zero-resistance GV model, chosen in the following manner. Many working devices (notably PF-1000, used as an example in this paper) use the scaled insulator length I z ɶ close to 1; this was chosen as a fixed value in this series of investigations although there are indications that the optimum value may be different. The scaled cathode radius C r ɶ was varied from 1.2 to 2.0 in steps of 0.1. For each value of C r ɶ , the scaled anode length A z ɶ was varied from For DPF as a source of radiation, the "performance" has many aspects: it must have the best power transfer efficiency, highest current, highest conversion into magnetic energy of the plasma as well as high cost-effectiveness parameter CEP; for a GVSC, this translates to best 0 Y . On the other hand, some applications of DPF (such as material modification) may prefer to maximize the average total power parameter ( ) ( ) along with high plasma velocity parameter V and a high fill pressure of gas. 
These two performance definitions were studied as separate cases using the same database, to provide comparison and contrast. The first optimization search successively shortlisted configurations on the criteria of average power transfer, scaled current, magnetic-energy transfer efficiency and yield parameter, each shortlist retaining the cases closest to the maximum of the corresponding criterion. The fourth shortlist was then arranged in order of decreasing values of the parameter V in Eq. (21), and cases lying between 90% and 100% of its maximum value were taken into the fifth shortlist containing 4 semifinalists. These 4 cases (see Table I) then represent the best combination of highest average power transfer to the plasma, highest scaled current, most efficient magnetic energy transfer to the plasma, best yield parameter and maximum plasma velocity for a given capacitor bank. They are quite similar in their properties, so only the first case, with the highest value of the pressure parameter ε²κ⁴, was chosen as Global Optimum I within the context of this study, subject to various caveats and conditions already mentioned. The second optimization search arranged the same database in decreasing values of the product of the transfer efficiency and the average total power parameter, and cases lying between 95% and 100% of its maximum value were shortlisted into SL-Ib containing 2009 configurations. This shortlist was ranked in decreasing values of V, and cases lying between 95% and 100% of its maximum value were taken into SL-IIb containing 18 cases. This was ranked according to decreasing values of the pressure parameter ε²κ⁴, and cases lying between 80% and 100% of its maximum value were taken into the final shortlist containing 4 almost similar configurations shown in Table II. The configuration with the highest velocity parameter V was designated as Global Optimum II. Note that these values, including the specific optimal values of κ, are independent of the nature of the gas and of the capacitor bank voltage, capacitance and inductance. This shows that the optimal size of the DPF, represented by the anode radius a, is dependent on the capacitor bank inductance and independent of its voltage. The scaled current profile for Global Optimum I is shown in Fig. 1 for γ = 0; the GV model fit to PF-1000 reported earlier [29] is included for comparison. The black dot represents the end of the rundown phase. Beyond the rundown phase, the GV model becomes progressively less accurate as neglected gas-dynamic phenomena start becoming important during radial implosion. An upper limit on the velocity scale follows from Eq. (21); the corresponding value of Q is listed in Table I. In the scaled current profile for Global Optimum II, the black dot again represents the end of the rundown phase. The two configurations have differences as well as similarities: the first configuration has more peak current, but both configurations have similar values of η_MP. The 45% higher current of the first configuration comes from a better utilization of stored energy: only 4% of the energy remains in the capacitor bank at the end of rundown, as compared with 16.2% for the second configuration. The energy per particle parameter E_GV is 40 times higher and the velocity parameter V is 5.4 times higher in the second configuration; the pressure parameter is however 127 times lower and the yield parameter is 2.6×10⁶ times smaller. In terms of device geometry, the first configuration has a 1.44 times smaller anode radius and a 3 times smaller anode length. This exercise shows the profound effect of the nature of the query on the outcome of the optimization search; it also shows that empirical and theoretical optimization procedures have a mutually supportive, complementary role to play, where theory provides a wide coverage of the parameter space while experiments provide validation of specific conclusions of the theoretical optimum search.
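Both searches above are successive "keep the top fraction of the best" filters applied to the same database. A minimal sketch of such a query is shown below; the record keys are assumptions about how the database of the earlier sketch is labelled, not names taken from the paper.

def shortlist(records, key, keep_fraction):
    """Keep the records whose value of `key` lies between
    keep_fraction times the maximum and the maximum itself."""
    best = max(r[key] for r in records)
    return [r for r in records if r[key] >= keep_fraction * best]

# Second search (Global Optimum II), assuming keys 'power_transfer',
# 'V' and 'pressure_param' exist in each record of `database`:
# sl_Ib  = shortlist(database, "power_transfer", 0.95)   # 2009 configurations
# sl_IIb = shortlist(sl_Ib,   "V",              0.95)    # 18 cases
# finals = shortlist(sl_IIb,  "pressure_param", 0.80)    # 4 similar cases
# global_optimum_II = max(finals, key=lambda r: r["V"])

The first search follows the same pattern with its own sequence of criteria and a 90% cut on V at the final step.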
It needs to be remembered that the GV model cannot be used to make any predictive statements concerning the neutron yield; therefore the above configurations do not necessarily represent neutron-optimized regimes. However, it should not be surprising to experimentally discover that they do indeed have enhanced neutron emission properties in view of the high current and high magnetic energy. The above global optimization scheme could in principle be extended in future by incorporating additional physics external to the GV model, which would further restrict the searchable parameter subspace, leading to more efficient optimization. Two such examples can be mentioned. It may be possible to incorporate some elements of ionization dynamics in the form of a generalization of Paschen's Law or an ionization stability condition. Another could be the incorporation of a model of nuclear reactions from fast ions interacting with a target plasma within which they remain confined. Both these aspects require considerable preparatory work before they can be taken into account in the global optimization. The following example illustrates how a globally-optimized DPF facility with stored energy of 1 MJ and design short-circuit current I0 = 10 MA (static inductance 20 nH) might be realized. Using Eq. (34) with Table I, the deuterium fill pressure comes to 187.75 mbar. Note that the BKMP criterion is well satisfied even though the value of the pressure is much higher than that of any operating DPF. This provides a counter-example to the suggestion that DPF devices have an inherent limitation on their operating pressure related to the BKMP criterion [43]. At a circuit resistance of 1.28 mΩ (γ = 0.2), the magnetic energy in the dynamic inductance at the end of the rundown should be 300 kJ. The current at the end of rundown should be ~5.3 MA, and the pinch current should be more than 3.5 MA. This may be compared with an extreme example: a hypothetical globally optimized facility with static inductance of 20 nH and I0 = 100 MA, needing 100 MJ of stored energy. The impedance of the capacitor bank is still Z0 = 6.42 mΩ according to the logic described in the previous paragraph and the capacitance remains 485 µF; the operating voltage however becomes 10 times higher at 642 kV. The anode radius, insulator length, anode length and cathode inner radius remain the same as for the 10 MA case. But the pressure of deuterium works out to 345 bar! At a circuit resistance of 1.28 mΩ (γ = 0.2), the magnetic energy in the dynamic inductance at the end of the rundown should be 30 MJ; about 7.5 MJ should be dissipated in the circuit resistance. The current at the end of rundown should be ~53 MA and the pinch current should be more than 35 MA. These are impressive technological challenges, but they do not represent limitations imposed by the physics of primary energy transport via the snowplow effect in the DPF as represented by the GV model [48]. The conclusion [48] that large capacitor banks cannot drive Mather type DPF devices to multi-mega-amperes pertains to a design procedure that keeps the voltage rather than the impedance constant, as pointed out above. This example highlights the following counterintuitive aspect of optimization within the ambit of the RGV model.
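Before turning to that aspect, the circuit numbers in this example can be checked with nothing more than the standard lossless-LC relations Z0 = sqrt(L0/C), V0 = Z0·I0 and E0 = ½·C·V0² (equivalently E0 = ½·L0·I0²). The short sketch below reproduces the quoted 485 µF, 642 kV and 100 MJ figures; the value of Z0 is taken from the text, and nothing here involves the GV model itself.

# Consistency check on the worked example above, using only lossless-LC relations.
L0 = 20e-9          # static inductance [H]
Z0 = 6.42e-3        # bank impedance [ohm], as quoted in the text

C = L0 / Z0**2      # capacitance [F]
for I0 in (10e6, 100e6):            # short-circuit current [A]
    V0 = I0 * Z0                    # charging voltage [V]
    E0 = 0.5 * C * V0**2            # stored energy [J]
    print(f"I0 = {I0 / 1e6:.0f} MA: C = {C * 1e6:.0f} uF, "
          f"V0 = {V0 / 1e3:.0f} kV, E0 = {E0 / 1e6:.1f} MJ")
# -> 485 uF in both cases; 642 kV and 100 MJ for the 100 MA case, matching the text.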
The optimization happens in the dimensionless model parameter space, in terms of quantitative performance criteria defined from first principles, with no reference to empirical thumbrules, without regard for practicalities of realization of the DPF configuration, which reflect the accessible sophistication of device technology. On the one hand, this presents the best expected performance under well-defined conditions, which cannot be exceeded even in principle, thus affording protection from the risk of premature technical obsolescence. On the other hand, it allows customization of the search algorithm to avoid bias rooted in existing state-of-art facilitating identification of areas where investment in innovation can reap rich dividends in terms of clear technical advantages over existing state-of-art. This feature recognizes the distinction between a widely practiced consensus and limitations imposed by laws of nature. This needs to be seen in the context of applicability of the RGV model for arbitrary devices based on the snowplow effect, not limited to the Mather type DPF as in this study. Summary and conclusions: This paper provides a first look at the possibility of theoretical global optimization, as the first step of an empirical optimization campaign, of a Mather type DPF using a formula for dynamic inductance reported earlier [29] representing a numerical fit to thousands of automated calculations of 2-D plasma profile provided by the GV model. Feasibility of such global optimization is an essential prerequisite to the emergence of DPF as a technology platform for diverse commercial applications, from plasma nanotechnology and material modification [24,25] to use of intense neutron bursts [26,27] and production of short-lived radioisotopes [28] for medical applications to fusion energy [23] using advanced fuels. The desirability of such optimization can be inferred from the fit of PF-1000 current profile to the resistive GV model (Fig. 1) which shows about 47.5% energy remaining in the capacitor bank and 10% dissipated in circuit resistance by the end of rundown -a clearly unsuitable situation in a commercial context. The utility of the GV model for this purpose lies in its dimensionless form, which maps 10 physical parameters representing a DPF installation on to 7 independent dimensionless model parameters. Each point in the GV model parameter space then has three degrees of freedom in the space of all possible DPF configurations, which are chosen as the voltage, capacitance and inductance of the capacitor bank. The present discussion of optimization in a 4-dimensional parametric subspace of the GV model includes the BKMP criterion which requires the electromechanical work done in plasma propagation to exceed an energy threshold related to the change of the thermodynamic state of the working gas from an initially neutral gas at room temperature to fully-dissociated, ionized and sufficiently heated plasma. This reveals a characteristic impedance associated with the working gas, whose ratio to the characteristic impedance of the capacitor bank is a dimensionless number characteristic of a capacitor bank. 
Together with the ratio of the static resistance of the capacitor bank to its impedance, this ratio should form part of the definition of a generic technological limit on the maximum performance of a Mather type DPF [48]; the importance of the present approach is that it allows a precise demarcation of this limit at a given level of capacitor bank energy, potentially leading to innovative workarounds. The scope of this study can clearly be expanded to the discovery of optimum configurations for a specific capacitor bank, such as PF-1000. The GV model fit to current waveform from PF-1000 reveals a high value of circuit resistance compared to circuit impedance (γ=1.26). The strategy of using optimum configurations of the zero-resistance case as initial step of optimization for low γ situations then becomes of doubtful validity. This suggests generation of a database specific to PF-1000. The optimum value of κ (0.9) for the Global Optimum I configuration is seen to be quite close to that of PF-1000 (0.92) as revealed through the fitted value of L 0 =25 nH; the value of I z ɶ for PF-1000 is 0.98, quite close to that used for the database calculation. Therefore, the existing anode of radius 115 mm, and existing insulator (of radius 115 mm and length 113 mm) of PF-1000 device can be retained and the procedure outlined above can be used to determine the anode length, cathode diameter and pressure that would maximize average power transfer, current and energy per particle. This however requires an iterative solution to 6 for each point in the database, which is computationally much more resource intensive and has to be a separate undertaking forming part of a project for upgradation of existing large facilities. This study also suggests that several physical phenomena, which are responsible for the approximate validity of the snowplow effect and which probably play an important role in limiting the pressure range for neutron producing devices, need to be incorporated in a future extension of the GV model. The GV model should therefore prove to be a rewarding subject of both theoretical and experimental research.
Effect of Electronic Activity Monitors and Pedometers on Health: Results from the TAME Health Pilot Randomized Pragmatic Trial Background: Brief counseling and self-monitoring with a pedometer are common practice within primary care for physical activity promotion. It is unknown how high-tech electronic activity monitors compare to pedometers within this setting. This study aimed to investigate the outcomes, through effect size estimation, of an electronic activity monitor-based intervention to increase physical activity and decrease cardiovascular disease risk. Method: The pilot randomized controlled trial was pre-registered online at clinicaltrials.gov (NCT02554435). Forty overweight, sedentary participants 55–74 years of age were randomized to wear a pedometer or an electronic activity monitor for 12 weeks. Physical activity was measured objectively for 7 days at baseline and follow-up by a SenseWear monitor and cardiovascular disease risk was estimated by the Framingham risk calculator. Results: Effect sizes for behavioral and health outcomes ranged from small to medium. While these effect sizes were favorable to the intervention group for physical activity (PA) (d = 0.78) and general health (d = 0.39), they were not favorable for measures. Conclusion: The results of this pilot trial show promise for this low-intensity intervention strategy, but large-scale trials are needed to test its efficacy. Introduction Cardiovascular disease is the leading cause of death world-wide [1]; however, approximately 12% of cardiovascular disease related deaths are attributed to physical inactivity [2]. For primary and secondary prevention of cardiovascular disease it is recommended that individuals take part in at least 150 min of moderate intensity physical activity (PA) a week. For older adults, who have an increased risk for cardiovascular disease [3], this recommendation equates to 7000 to 10,000 steps per day [4]. Unfortunately, older adults fall far below this recommendation [5]. Self-monitoring of behavior is an effective behavioral strategy to increase PA among inactive individuals [6]. Electronic activity monitors (EAMs) are commercially-available technologies that are recommended to self-monitor behavior [7]. EAMs are operationally defined as "a wearable device that objectively measures lifestyle PA and can provide feedback, beyond the display of basic activity count information, via the monitor display or through a partnering application to elicit continual self-monitoring of activity behavior [8]". EAMs are growing in popularity, with approximately 3.3 million units sold in 2014 [9]. They provide an adequate estimation of PA [10] and they are proliferating in community-based PA interventions [8]. In addition to self-monitoring of behavior, some EAM devices offer other behavior change techniques such as: providing feedback, goal-setting, planning, social support, social comparisons, commitment, instructions on how to perform a behavior, and information on consequences [11]. The implementation of these behavior change techniques is meaningful because these strategies are known to significantly improve PA [12]. There is evidence that EAMs can increase PA and improve cardiovascular disease related outcomes [8] but evidence is lacking on their of their effectiveness in a primary care setting [7]. PA interventions through primary care are common in cardiovascular disease prevention for they rely on the strong clinician-patient relationships and the longitudinal nature of primary care [13]. 
It is recommended that these interventions take a two-tiered approach to promote PA incorporating both brief behavioral counseling and technology-based resources [14]. Pedometers can be used as a technology resource to facilitate self-monitoring of PA [15]. Pedometers are low tech activity monitors that can significantly increase and maintain an individual's level of PA [15]. Despite their frequent utilization in primary care-based interventions, pedometers have several limitations. Pedometers have been scrutinized for their inaccuracy [16] and their limited methods for motivating exercise [17]. Furthermore, they do not provide features that are central to preventing cardiovascular disease such as providing education and customizability [18]. For these reasons, EAMs may be more successful for primary care interventions. EAMs are attractive in primary care because they offer the convenience of pedometers while having a potentially higher effectiveness by addressing their reported limitations. EAMs yield a high validity for measuring steps [19,20] and the embedded behavior change techniques can theoretically augment traditional behavioral counseling, and support autonomous motivation for PA [21]. Even with a modest effect size, the potential reach of EAMs coupled with brief counseling in primary care could produce a large public health impact. Therefore, we conducted the TAME health (Testing Activity Monitors' Effect on health) pilot randomized controlled trial which aimed to investigate a low-intensity intervention to increase PA and decrease cardiovascular disease risk within the primary care setting. This study was designed to test initial feasibility. The effect sizes collected can be used to assist in development of larger trials to evaluate the intervention's efficacy in improving PA behavior through a combination of EAM and brief counseling. We hypothesized that individuals in the EAM group would demonstrate greater improvements in PA and cardiovascular disease risk than the pedometer group. We also hypothesized that the EAM group would have greater improvements in secondary outcomes than the pedometer group. While initial feasibility results and PA change scores have been reported [22], here we focus on effect size information for the quantitative PA, function, and quality of life outcomes investigated in the study. Materials and Methods The methodology of TAME health is described succinctly below. Further details on methods have been previously published [21]. This study was approved by the University's Institutional Review Board and is registered on clinicaltrials.gov (NCT02554435). This study was conducted in accordance with the Declaration of Helsinki and follows CONSORT reporting guidelines (Supplementary File). Sample Older primary care patients (N = 40) were recruited to participate in the 12-week TAME health pilot randomized controlled trial. Patients aged 55-74 years with a body mass index of 25-35 kg/m 2 , fewer than 60 min of planned exercise a week, access to a smart device and in good health were eligible. Participants were recruited in person or through posted flyers at two clinics affiliated with a large university-based health care system. Recruitment was conducted from October 2015 to June 2016. Screening for eligibility was conducted in person and over the phone. Once an individual was deemed eligible, informed consent was promptly obtained and assessment visits were scheduled. 
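As a compact restatement of the eligibility screen just described, the sketch below encodes the stated inclusion criteria as a single check. The function and argument names are ours, and "good health" is left as a caller-supplied flag because the text does not operationalize it.

def eligible(age_years, bmi, planned_exercise_min_per_week,
             has_smart_device, in_good_health):
    """Screening sketch for the stated TAME Health inclusion criteria:
    age 55-74, BMI 25-35 kg/m^2, fewer than 60 min of planned exercise
    per week, access to a smart device, and good general health."""
    return (55 <= age_years <= 74
            and 25 <= bmi <= 35
            and planned_exercise_min_per_week < 60
            and has_smart_device
            and in_good_health)

print(eligible(63, 30.3, 30, True, True))   # -> True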
Intervention All participants received brief 5 A's counseling which is optimized for primary care [23]. The counseling components included: assess, advise, agree, assist, and arrange. During the counseling a researcher with a background in exercise physiology and training in motivational interviewing reviewed the participant's PA levels, agreed on step goals, and taught behavioral change strategies. After counseling, the researcher provided an exercise prescription and randomized the participant. Due to the nature of the intervention, the participants and the assessor were not blinded to group assignment after randomization. Participants were randomized to one of the two groups: pedometer or EAM group. Participants in the pedometer group were given a digital pedometer (Digi-walker CW-700/701, YAMAX, San Antonio, TX, USA) and a PA log to record their daily steps, activity time, and distance walked. Participants in the EAM group were given an UP24 monitor by Jawbone (San Francisco, CA, USA) and downloaded the corresponding UP app on their personal smartphones. The UP system offered an array of behavioral change techniques including: goal setting on behavior and health outcome, providing instructions and information on consequences, as well as facilitating social support [11]. More detailed information on the specific features of the UP system is available elsewhere [21]. Measures The study consisted of two assessments conducted at baseline and at 12 weeks. All assessments were conducted at the two clinic locations. The primary outcome variables of interest were cardiovascular disease risk and PA. Cardiovascular disease risk was measured using the Framingham non-laboratory risk equation [24] and from fitness measured by the six minute walk test [25]. Variables used in the Framingham equation included sex, age, treatment of hypertension (yes or no), smoking status (yes or no), diabetes diagnosis (yes or no), systolic blood pressure, and body mass index. PA was measured across a 7-day period prior to each assessment using a Sense Wear Armband (BodyMedia, Pittsburgh, PA, USA) [26]. Secondary outcome variables included: anthropometrics, body composition, blood pressure, resting pulse, health status and quality of life, and physical function. Anthropometric measurements included height (cm), weight (kg), and body mass index (kg/m 2 ) using a portable stadiometer (Seca Corp., Hamburg, Germany) and a portable, calibrated electronic scale (Tanita, Arlington Heights, IL, USA). Body composition was estimated by measuring waist and hip circumference (cm) and calculating the waist-to-hip ratio. Blood pressure and resting pulse was taken using a portable sphygmomanometer (Omron BP742N, Lake Forest, IL, USA). Physical function was measured objectively through a repeated chair stand and balance test, as defined by the Short Physical Performance Battery [27], and an 8-feet up-and-go test, as outlined in the Senior Fitness Test [28]. Due to the known ceiling effect in generally healthy adults [29], the gait speed assessment of the Short Physical Performance Battery was replaced with the validated 8 feet up-and-go [28]. The same equipment for objective measurement of outcomes was used at both clinical sites. The remaining outcomes were assessed through self-reported questionnaires. Health status and quality of life were estimated from the Medical Outcomes Study Questionnaire Short Form 36 (SF-36) [30]. 
Physical function was estimated from the Patient-Reported Outcomes Measurement Information System (PROMIS) Short Form v1.2-Physical function 8b [31]. Statistical Analysis TAME health was primarily designed to test the feasibility of the intervention and estimate effect sizes; therefore, the analyses described in this paper are exploratory and no pre-specified power calculation was performed. Effect sizes were calculated from the mean change in study variables and were categorized as small (≤0.2), medium (0.5), and large (≥0.8) using Cohen's classification [32]. The Statistical Package for the Social Sciences (IBM-SPSS, version 20) was used and the α-level was set at 0.05. Analyses were conducted using the intent-to-treat principle by carrying baseline information to follow-up. Findings related to feasibility, acceptability, and change in PA have been previously reported [22]. Descriptive analyses were conducted using means, medians, and frequencies of all study variables. Group differences at baseline were examined using independent samples t-tests for continuous variables and through Chi-Square tests for frequency variables. Little's Missing Completely at Random test was performed to determine whether outcome data were missing at random [33]. The distribution of study variables was tested with the Kolmogorov-Smirnov and Shapiro-Wilk tests of normality. Post-intervention differences between groups were assessed using analysis of covariance (ANCOVA) for normally distributed variables and with Mann-Whitney U for non-normal data. Covariates in the analysis were baseline values of the dependent variable and any variables significantly different between groups at baseline. Analyses on the primary outcome variables (cardiovascular disease risk, fitness, PA) were conducted by a blinded statistician. Following standard protocol, only days with a minimum of 10 h of wear time from the SenseWear armband were included in the analysis [5]. Although PA goals were set in terms of steps per day, only PA minutes were included in the analysis because the SenseWear armband is not a validated measure for steps [26]. Results The CONSORT flow diagram is available in Figure 1. At baseline, the mean age and body mass index of the participants was 63.6 ± 5.3 years and 30.3 ± 3.1 kg/m 2 , respectively. A total of 75% of the participants were female, 65% were non-Hispanic White, and 55% graduated college. The demographic information by study group is available in Table 1. The mean heart/vascular age of the participants were approximately 74 ± 11.2 years with a Framingham risk score of 18.9%. Participants averaged 31.3 ± 29.4 min of moderate-vigorous PA per day and 4204.8 ± 2199.8 steps per day. Groups only differed at baseline in systolic blood pressure. Participants were comparable on all other study variables. Characteristics were not different by clinical location. Participants that did not complete the study were not significantly different on the tested variables. However, the relationship between group and missingness was significant (p = 0.04). The odds ratio for missingness was 0.103 (0.002, 0.956) which signifies that missingness was more likely in the pedometer group. There were no adverse events related to the intervention. Primary Outcomes Baseline, follow-up values and the estimated effect size using the intent-to-treat principle for all study variables are outlined in Table 2. 
Primary outcomes were considered normally distributed (p > 0.05) and group differences were assessed with ANCOVA, controlling for baseline values and systolic blood pressure. The EAM intervention produced a large effect on minutes of PA and a negligible effect on cardiovascular risk and fitness. Secondary Outcomes Functional measures, health status, and heart age required the use of nonparametric tests. All other secondary outcomes were analyzed using ANCOVA, controlling for baseline values and systolic blood pressure. As anticipated, there were no significant group differences at 12-weeks on any secondary outcomes (Table 2). The EAM intervention produced a medium effect on waist-to-hip ratio and tandem balance time. The intervention produced a negative medium effect for chair stand and 8 feet up and go time. The EAM intervention also produced a small-to-medium effect on weight and SF-36 sub-scales, with the exception of social functioning and pain. Discussion This analysis of a pilot randomized controlled trial aimed to investigate effect sizes for behavioral and health-related outcomes of an EAM-based intervention for decreasing cardiovascular disease risk, increasing PA, and improving secondary outcomes. Findings suggested that both the EAM and pedometer groups increased their fitness and increased their minutes of PA. Due to the pilot nature of this study, statistical significance for these outcomes should be viewed with caution. However, the magnitude of change, which indicates potential clinical significance, was greater in the EAM group. A previous pilot trial evaluation by Cadmus-Bertram et al. of an EAM also found no group differences among post-menopausal women [34]. Participants were randomized to receive a Fitbit One EAM or a pedometer and were encouraged to become more physically active. After 16-weeks the EAM group increased their PA by 62 minutes while the pedometer group increased by 13 min. Although the magnitude of change was greater in those that wore a Fitbit, the differences were not significant between groups in the small study [34], likely due to low power inherent to pilot studies. The analyses from our TAME health study using a Jawbone EAM were similar. These preliminary results may suggest that, regardless of the type of monitor used, self-monitoring behaviors in combination with brief counseling can increase PA among older adults. This concept aligns with the literature of clinic-based and technology-based PA interventions [35][36][37]. The increased minutes of moderate and vigorous PA within our EAM group is consistent with the literature. Aittasalo et al. found that providing 5As counseling in a clinic can result in a 28 min increase in moderate to vigorous PA a week after 2 months [35]. Similarly, among chronically ill patients the combination of 5As counseling administered over 4-6 months and an EAM system resulted in an 8.9 min increase in exercise per day [36]. Activity monitoring with an EAM, Fitbit, for 3 months along with PA education increased PA by 11 min per day [37].
Participants using the UP24 monitor in our study had an increase of approximately 11 min of PA a day, whereas our pedometer group had an increase of less than 1 min of PA a day. Effect sizes suggested that the EAM group saw greater increase in PA, however the pedometer group was greater at baseline. The difference was non-significant, but it brings up some considerations. Two participants in the EAM group started the intervention after completing physical therapy post-knee replacement. They were included in the intervention because they met the inclusion criteria at the start of their participation and their primary care physician did not advise against PA. However, both participants had extremely low PA levels at baseline which could have contributed to the lower EAM group average. One of these participants dropped out of the study while the other participant increased their PA at a similar rate to other EAM group participants. Considering the pedometer group averaged approximately 40 min of PA a day at baseline, there may have been a ceiling effect with regard to PA activity. In pilot studies with small sample sizes, the impacts of even a few individuals can be substantial. Future research should evaluate the PA capacity within this population. In addition, future research should consider blocking randomization based on baseline PA levels. In terms of reducing cardiovascular disease risk, we found modest increases in fitness among both groups which were comparable to providing physician advice and educational material in clinic [38]. Digital health interventions have been shown to reduce the 10-year Framingham risk score by 1.24%, systolic blood pressure by 2.12 mmHg, and weight by 1.52 kg [39]. We found more conservative changes in these outcomes in both study groups. Our results indicated that the EAM intervention produced a positive medium effect on waist-to-hip ratio and tandem balance time but a negative medium effect for chair stand time and 8 feet up and go time. The increase in chair stand time and 8 feet up and go time among the EAM group suggest that their physical function did not improve. However, the EAM group had greater increases in physical functioning based on self-reported measures, PROMIS and SF-36. The EAM intervention produced a favorable small-to-medium effect in both of these measures. It is possible that the decline in objective physical function is the result of measurement error. Participants completed the assessments in the same location using the same equipment and instructions, but EAM participants may have been more cautious during the follow-up. However, more research is needed to investigate the effects of an EAM on physical function. Limitations and Strengths This study has limitations. TAME health was primarily designed to test the feasibility of the intervention. To that end, there were no blinded assessors or follow up assessments to assess maintenance. In addition, the study was not powered to detect small group differences. Based on our reported effect sizes, a group difference may be observed with a larger sample. There is also possible volunteer and selection bias in the study. Participants were volunteers and may be more intrinsically motivated to exercise than other individuals of the same age. Eligible participants were required to have regular access to a smart device; therefore the results cannot be generalized to all older adults that do not own or have access to a smart device. 
There was a significant difference in systolic blood pressure between study groups which suggest that the randomization procedures may not have be sufficient for the sample. Lastly, this study was not designed to determine which aspects of the EAMs contribute to the resulting effect size; although, we theorized EAM use can increase intrinsic motivation for PA [21]. Future research should identify the mechanism by which EAMs produce an effect. The major strength of this study is that it was a comparative effectiveness evaluation. This study adds to a small body of the literature that directly compares low-and high-tech activity monitors [34]. Another strength of this study its ability to test the current recommendation for PA promotion in primary care [14,21,40]. Implications The American Heart Association encourages healthcare providers to provide 5A's interventions and provide technology-based resources for individuals at moderate risk for cardiovascular disease [14,40]. Our results provide preliminary evidence that adhering to this recommendation results in clinically meaningful improvements in health among older adults. Although not statistically significant, our results also suggest that EAMs produce a small-to-medium effect over a low-tech pedometer. Healthcare systems have the potential to deliver disseminable interventions that can impact the health of their priority population if they routinely administer the low-intensity, low-impact 5A's counseling for all patients at risk and these patients regularly self-monitor their behavior [14,40]. Large-scale, multi-site trials are needed to address the limitations of the current study and to determine the intervention's effectiveness on a population level. Conclusions PA promotion in primary care through 5A's counseling and self-monitoring is recommended for individuals at moderate risk of cardiovascular disease [14,40]. Our evaluation of self-reported sedentary, overweight adults aged 55-74 years of age suggested that a pedometer or an EAM (Jawbone UP24) can be used with 5A's counseling to improve health, and there is a small-to-medium effect of the EAM intervention on PA and health when compared to use of a pedometer. Future research should determine if a low-intensity study like TAME health can be broadly disseminated by healthcare systems to positively impact the health of their priority population.
Short and Efficient Synthesis of Alkyl- and Aryl-Ortho-Hydroxy-Anilides and their Antibiotic Activity Abstract Ortho-hydroxy-anilides are part of natural products like the new antibiotics platencin (A) and platensimycin (B). An important step in the total synthesis of these antibiotics or their derivatives is the preparation of the o-hydroxy-anilide partial structure. The presented method allows the preparation of o-hydroxy-anilides and o-dihydroxy-anilides from 2-nitrophenol esters in a one-step synthesis without protecting the hydroxy group. Aryl- and alkyl-anilides were prepared following this method as simple analogues of platensimycin (A). The resulting compounds were tested in an agar diffusion assay for their antibiotic potency. Introduction The 3-amino-2,4-dihydroxybenzoic acid core is an essential part of the new antibiotic drugs platencin (A) and platensimycin (B) [ Figure 1], which show a high activity against The total synthesis of both natural products is long and expensive. In the last few years, a lot of total syntheses of platencin (A) and platensimycin (B) were published. Even the synthesis of the 3-amino-2,4-dihydroxy benzoic acid partial structure, which is essential for binding to the enzyme FabF ( Figure 2), takes several steps in these total syntheses [5][6][7][8][9][10][11][12][13]. In continuation of our work on simple platensimycin analogues [14,15], we hereby present the short and effective preparation of the o-hydroxy-anilide partial structure without the requirement of a protecting group. Results and Discussion In the first series, commercially available 4-hydroxy-3-nitrobenzoic acid (1) was esterified with methanol / H 2 SO 4 following a standard protocol to give the methyl ester 2 [14]. The phenol group of the methyl ester 2 was esterified with several aromatic or aliphatic carboxylic acid chlorides to give the phenol esters 3a-h. The following hydrogenation of the nitro group with Pd/C (5%) in methanol led to the amino group which reacted in the same procedure under aminolysis of the phenol ester to the resulting anilides 4a-g [14][15][16][17][18]. The olefin partial structures of 3a and 3f were hydrogenated under these conditions but the halogen substituent of 3b was stable. Reaction of 3h led only to a mixture of the products that couldn´t be separated by flash column chromatography. Exemplarily, two of the methyl esters were hydrolyzed to give the benzoic acid derivatives 5a and 5g as found in the natural products platencin (A) and platensimycin (B). In the second series, 2-nitroresorcinol (6) was esterified with a half equivalent of acyl chloride to give the monoesters 7a-h. As a byproduct, a remarkable amount of the diesters was observed even when using an excess of 2-nitroresorcinol (6), but the diesters could be separated clearly by preparative flash column chromatography. The resulting esters 7a-h were hydrogenated in the way described above to give the anilides 8a-h in high yields (80-90% The mechanism of the aminolysis of the phenol esters is an intramolecular rearrangement as shown by the hydrogenation of an equimolar mixture of 3c and 7h and a subsequent GC-MS analysis. The gas chromatogram showed only two peaks of the products 4c and 8h and no peak of 8c or methyl 3-benzamido-4-hydroxybenzoate. Tab. 1. 
Agar diffusion assay, 100 µg/disc (te: tetracycline, cl: clotrimazol, 30 µg/disc); GI: growth inhibition; nt: not tested; zone of inhibition [mm].

The resulting compounds were tested in an agar diffusion assay [21] against several bacteria (Gram-positive and Gram-negative) and fungi, but showed only weak or no antibiotic activities in this assay, as shown exemplarily for some compounds in Table 1. Only the precursor 2 showed an interesting activity against bacteria and fungi.

Conclusion
The presented synthesis describes a simple and efficient method to prepare o-hydroxy-anilides directly from o-nitrophenol esters under mild conditions and without requiring any protecting group. Thus it is a helpful tool in preparing platensimycin analogues or other natural products containing this partial structure. The tested compounds themselves showed no or only weak antibiotic activity, as shown exemplarily for some compounds (Table 1). This indicates that the o-hydroxy-anilide partial structure alone is not the determining factor for the interaction of platencin or platensimycin with the FabF enzyme. The complex cyclic part is also essential for the high antibiotic activity of these natural products.

General Methods
All solvents used were of HPLC grade or p.a. grade and/or purified according to standard procedures. Chemical reagents were purchased from Sigma Aldrich (Schnelldorf, Germany) and Acros (Geel, Belgium).

General Procedure 1 (Preparation of Phenol Esters)
One mmol to 2.0 mmol of the acid were dissolved in 20 mL dry toluene or dry 1,2-dimethoxyethane and 1.0 mL to 2.0 mL (11.5 mmol to 27.5 mmol) SOCl2 were added. The solution was refluxed for 1 h, the solvent was evaporated, and the residue was dissolved in 25 mL toluene or 1,2-dimethoxyethane. Alternatively, 0.5 to 1.5 mmol of the commercially available acid chlorides were used. One mmol of the phenol or 1 mmol of the 2-nitroresorcinol and 5 mL N-ethyl-N-methyl-ethanamine or 5 mL pyridine were added and the solution was stirred for 12 h at room temperature. The solvent was evaporated and the residue was taken up in 30 mL water (for the methyl esters in 30 mL 10% aqueous NaOH) and 30 mL ethyl acetate or diethyl ether. The organic layer was separated and the aqueous layer was again extracted with 30 mL ethyl acetate or diethyl ether, the combined organic layers were dried over Na2SO4, the solvent was evaporated and the residue was purified by flash column chromatography (isohexane/ethyl acetate 8-10:1). Alternatively, the reaction could be carried out in a microwave reactor at 80 °C for 15 minutes and 235 W, but in most cases with lower yields.

General Procedure 2 (Hydrogenation)
One mmol of the phenol ester was dissolved in 30 mL methanol and 50 mg 5% Pd on charcoal were added. The suspension was stirred for 14 h under H2 atmosphere at room temperature, the catalyst was filtered off (over silica gel 60), and the solvent was evaporated. If necessary, the residue was purified by flash column chromatography.

General Procedure 3 (Ester Hydrolysis)
An amount of 0.5 mmol of the ester was dissolved in 30 mL methanolic K2CO3 solution (5%) and refluxed for 24 h. The solvent was evaporated, the residue dissolved in aqueous HCl (10%), and extracted with ethyl acetate (3 × 30 mL). The combined organic layers were dried over Na2SO4 and the solvent was evaporated. If necessary, the residue was purified by flash column chromatography.
Perception of an Ambiguous Figure: Subconscious Social Group Bias or Anchor Effect? Older participants who are briefly presented with the ‘my wife and my mother-in-law’ ambiguous figure estimate its age to be higher than young participants do. This finding is thought to be the result of a subconscious social group bias that influences participants’ perception of the figure. Because people are better able to recognize similarly aged individuals, young participants are expected to perceive the ambiguous figure as a young woman, while older participants are more likely to recognize an older lady. We replicate the difference in age estimates, but find no relationship between participants’ age and their perception of the ambiguous figure. This leads us to conclude that the positive relationship between participants’ age and their age estimates of the ambiguous ‘my wife and my mother-in-law’ figure is better explained by the own-age anchor effect, which holds that people use their own age as a yard stick to determine the age of the figure, regardless of whether the young woman or the older lady is perceived. Assimilation of others’ characteristics to one’s own is particularly likely to occur in uncertain circumstances that provide little information to base judgments on, such as estimating the age of a briefly presented ambiguous figure. Introduction Ambiguous gures are gures that contain two or more different images and therefore can be perceived in distinct ways. The classic duck-rabbit ambiguous gure [1], for instance, contains an image of a rabbit facing right and an image of a duck facing left. People see only one of these images at a time. Ambiguous gures create amusing visual effects and are therefore often found in popular media to attract and entertain readers. However, they are also used in psychological experiments to study the role of both conscious and subconscious processes in their perception [2,3,4,5]. Nicholls, Churches, and Loetscher [6] used an ambiguous gure to investigate whether high-level social processes subconsciously affect face perception. Following Bar [7], they hypothesized that early visual brain regions pass on a partially analyzed version of a face to the prefrontal cortex where it is subject to social expectations that may in uence the percept that is formed in the temporal cortex and any subsequent behavior that is directed at it. To test this hypothesis, Nicholls et al. presented their participants with the ambiguous 'my wife/mother-in-law' gure ( Fig. 1). This gure, which was originally introduced as a psychological tool by Boring [8], can be perceived as either a young woman facing away from the observer, or as an older lady facing more toward the observer. After a presentation time of 500 ms, which was deemed too short for conscious processes to be of in uence, participants were asked to estimate the age of the woman they saw. Nicholls et al. found that older participants estimated the woman in the gure to be signi cantly older than young people did. On average, participants who were older than 30 estimated the woman to be 6.32 years older than did participants who were 30 years or younger. Nicholls et al. also established a reliable positive relationship between the estimated age and the participants' own age. Across 393 participants, the correlation between the two variables measured .24. The ndings by Nicholls et al. suggest that the way a face is processed is subconsciously affected by the perceiver's age. 
But how does one's age in uence face perception exactly? Nicholls and colleagues interpret their nding in terms of a social group bias. The in-group vs. out-group distinction is a common one in the social sciences. One's in-group consists of people who one shares similar characteristics with. These characteristics are not shared with members of the out-group. Studies have found that people exhibit a recognition bias toward in-group members. That is, people are better able to recognize individuals from the same ethnicity [9], gender [10], or age [11,12]. For example, Wright and Stroud [13] showed that an eyewitness is more accurate at identifying the suspect of a crime if they belong to the same age group. The increased recognition accuracy for similarly aged faces, also known as the own-age bias in face perception, is believed to be due to increased contact and familiarity with people of one's own age group [14,15,16], as well as more in depth processing of the faces of the members of one's age ingroup [17; 18, 19, 20]. According to a meta-analysis on the own-age bias in face recognition [21], the way participants study and encode faces does not in uence the size of the group difference, suggesting that the bias presents automatically, which is in line with the unconscious age effects on face perception observed by Nicholls and colleagues. If superior recognition of faces from one's own age group is at the basis of the nding by Nicholls et al., this would imply that an unconscious social group bias lead participants to recognize the woman in the ambiguous 'my wife/mother-in-law' gure who most closely matches their age group, resulting in a higher age estimate by older than by young participants. That is, participants would perceive the ambiguous gure differently, depending on their age. Young participants would be likely to see the young woman facing away from the observer, while older participants would be more inclined to perceive the older lady facing the observer. A second interpretation does not require the gure that is seen by the participants to differ systematically between age groups. What Nicholls et al. observed might be explained alternatively by the own-age anchor effect. According to Tversky and Kahneman [22], anchor effects occur when people's estimates assimilate toward a salient value (the anchor) causing their estimates to be inaccurate. In the case of the own-age anchor effect, people use their own age as an anchor, causing age estimates to be biased toward their own age. An early study on the own-age anchor effect conducted by Mintz [23], for example, demonstrated that children estimate the age of the cartoon character Peter Pan to be similar to their own age. The observation that older participants estimate the ambiguous 'my wife/mother-in-law' gure to be older than younger participants do, could thus be the result of assimilation of the age estimates to participants' own age, regardless of the woman that is perceived. That is, both when viewing the young woman and the old lady, participants would make an age estimate that is anchored to their own age, resulting in a mean estimate difference between age groups, as well as a positive correlation between participant age and estimated age. People's tendency to assimilate others' characteristics to their own has received less attention than the own-age social group bias, but empirical evidence supporting the own-age anchor effect is increasing [24,25,26,27,28]. The basis for the effect is not yet well understood. 
One reason why one's own age is selected as a yard stick might be because people usually choose what is highly accessible in memory as their standard [29]. In that case, the own-age anchor effect might have a similar origin as the own-age bias in face recognition: both biases could result from increased familiarity with people of one's own age [30]. Alternatively, the effect may be the result of a social cost reduction strategy, whereby participants overestimate the age of younger faces and underestimate the age of older faces in order to err on the safe side [28]. The own-age anchor effect is also reminiscent of egocentricity biases, whereby people consider themselves rather than others as a reference point [31], making them judge others to be more similar to themselves than the other way around [32] or project their own properties onto others [33]. Anchoring estimates to salient or normative values might be particularly adaptive in highly uncertain situations, where there is insu cient other information to base one's judgment on [30]. The brief presentation of an ambiguous gure might therefore constitute a situation in which the own-age anchor effect is likely to present. The own-age social group bias is different from the own-age anchor effect in that the former is usually considered to be a perceptual bias, while the latter tends to be characterized as a cognitive bias. Extensive experience with faces of our own age group is believed to facilitate their perceptual processing according to the own-age social group bias, while in the case of the own-age anchor effect it is believed to skew our judgments. The own-age social group bias thus predicts that young and older participants are respectively more inclined to see the young woman and the older lady in the ambiguous 'my wife and my mother-in-law' gure, and that this difference in perception is what is driving the higher age estimates by older than by young participants. Were the relationship between own age and age estimates due to the own-age anchor effect, the ambiguous gure would not need to be perceived systematically different by young and older participants. Participants would assimilate the age of the perceived women to their own, regardless of whether the young woman or older lady is perceived. We present a study that aims to examine whether the ndings by Nicholls et al. can be best explained by the own-age social group bias or the own-age anchor effect. In order to do this, we will replicate the original study as closely as possible, but in addition ask participants to indicate whether they saw the young woman or the older lady. If participants' age were to affect their perception of the ambiguous 'my wife and my mother-in-law' gure, this would constitute evidence that older participants' higher age estimates of the ambiguous gure result from an own-age social group bias. If not, the own-age anchor effect better explains the difference in the estimates of the ambiguous gure's age. Participants An a priori power analysis was conducted using the R package pwr [34] to determine the required sample size to test the mean difference in age estimation between two independent groups using a one-tailed test with an α of .05, and assuming the original effect size (Cohen's d = . 39). Results showed that a sample of 232 participants, comprised of two equally sized groups of n = 116, was required to achieve a power of .90. In order to compensate for the potential exclusion of participants, a total of 260 participants was recruited. 
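The sample-size calculation quoted below was done in R with the pwr package, but it can be cross-checked with any standard power routine. The Python sketch here uses statsmodels and should agree with pwr up to rounding and the exact handling of the noncentral t distribution.

import math
from statsmodels.stats.power import TTestIndPower

# Two-sample t-test, one-tailed, alpha = .05, power = .90, Cohen's d = 0.39
n_per_group = TTestIndPower().solve_power(
    effect_size=0.39,
    alpha=0.05,
    power=0.90,
    alternative="larger",
)
print(math.ceil(n_per_group))   # roughly 113-114 per group, close to the reported n = 116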
An analysis of participants' eligibility for inclusion (see below for criteria) resulted in the exclusion of 14 participants, reducing the nal sample to N = 246. All participants were adult US citizens (f = 135, m = 111) recruited using Proli c Academic and compensated with $0.30. In order to make sure that the young and older group had a similar number of participants, the survey was run twice; once for people of 30 years or younger, and once for people older than 30. The mean age of the young group was 23.73 years (SD = 3.28, n = 124), while the mean age of the old group was 44.98 years (SD = 12.35, n = 122). All participants answered demographic questions regarding their age, sex and nationality The experiment took on average 83.55 seconds to complete. Materials and Procedure The experiment was set up to approximate the original study by Nicholls et al. as closely as possible. There are three noteworthy deviations from the original (see below for details). After estimating the age of the woman in the 'my wife/mother-in-law' gure, participants were presented with two additional gures highlighting the young woman and older lady to have them indicate which of the two images they perceived. Participants were also asked whether they had seen the ambiguous gure prior to the experiment and invited to estimate the age of a computer generated face. The latter two changes were respectively included to explore the effect of familiarity with the ambiguous gure on age estimation and to establish the generalizability of the original ndings. The experiment was implemented in the software Qualtrics version May 2020 (Qualtrics, Provo, UT). After providing informed consent and indicating their age, sex, and nationality, participants were shown a screen which asked them to pay close attention to the gure on the next screen, as it would only be shown for a short period of time. After clicking to the next page, a copy of the original 'my wife and my mother-in-law' gure was shown for 500 ms (Fig. 1). Subsequently, participants' eligibility was tested using two questions. First, participants were asked to indicate if they saw a person, an animal, or neither. Participants who responded 'person', were then asked if that person was male or female. When a participant answered one of these questions incorrectly, the experiment was terminated immediately. Otherwise, participants indicated whether they had seen the gure prior to the experiment, after which they were asked to estimate the person's age in whole numbers. A text was then shown that stated that the gure consisted of two different women. The participants were told that on the next screen, two gures would be shown, each highlighting one of these two women (Fig. 2). The order in which the two percepts were highlighted in the gure was counterbalanced across all participants. That is, for half of the participants, the young woman was depicted in the left panel (panel A), while for the other half, the young woman was depicted in the right panel (panel B). Participants were asked to indicate which of the two women they had seen. Based on a study by Georgiades and Harris [35], critical features were removed or highlighted in order to enable participants to clearly discriminate between the two percepts. From panels A and B in Fig. 2, for instance, the eye of the older lady, and the nose of the young woman were removed, respectively. These modi ed gures were originally presented by Shakhnazarova in the newspaper The Sun [36]. 
After completing this part of the experiment, participants were told once again to pay attention to the figure on the next page, as it would only be presented for a short period of time. This time, a computer-generated face was shown for 500 ms. The face was synthesized using the FaceGen Modeller software (Singular Inversions, 1998) to represent an average thirty-year-old Caucasian female. The participants were once more asked to estimate the age of the face in whole numbers, after which they were thanked for participating. Results All analyses were performed with the statistical software R version 3.6.1 [37] using the packages dplyr [38], ggthemes [39], ggplot2 [40], lsr [41], and sjstats [42]. An α = .05 was employed in all analyses. The data and the R script are available on osf.io/y3bqa/. Here, we only report the results of the pre-registered analyses pertaining to the own-age social group bias and the own-age anchor effect. The results of the pre-registered exploratory analyses looking into the effect of familiarity with the ambiguous figure and the generalizability of the findings to a computer-generated face are supplied as Supplementary Information. Replication of Original Findings In order to replicate the original findings of Nicholls et al. [6], a one-tailed independent samples t-test was performed to examine the difference in mean age estimates of the ambiguous figure between the age groups. Older participants were found to estimate the ambiguous figure to be significantly older (M = 41.04, SD = 17.51) than young participants did (M = 33.50, SD = 14.56); t(234.76) = 3.67; p < .001, Cohen's d = 0.47. On average, older participants thus estimated the person in the 'my wife/mother-in-law' figure to be 7.54 years older than young participants did. A one-tailed Pearson correlation was computed between participants' own age and their age estimates of the ambiguous figure. Results indicated that there was a significant positive association between participants' age and their age estimates; r(244) = .22, p < .001. Similar results were obtained in the original experiment of Nicholls et al., who found a mean age difference of 6.32 years and a correlation of .24. In the original paper, the initial analysis was repeated with the 10% youngest and 10% oldest participants. In the case of the current experiment, this would have caused some 21-year-old and some 57-year-old participants to be excluded from the analysis on an arbitrary basis. Therefore, 8.94% of the youngest and oldest participants were selected for the analysis. This resulted in a sample of n = 44 with 22 participants in each age group. The youngest participants had a mean age of 19.14 years (SD = .83) and the oldest participants had a mean age of 65.77 (SD = 5.45). A one-tailed independent samples t-test again showed a significant difference in age estimates, demonstrating that the results are not a consequence of the arbitrary age split; the difference in the mean age estimates was 6.24 years greater when comparing the oldest with the youngest participants instead of the original age groups. Social Group Bias or Anchor Effect? Participants were assigned to two different percept groups based on whether they reported seeing the young woman or the older lady. The older lady was perceived by the majority of the participants: only 114 out of 246 participants (46.34%) reported perceiving the young woman.
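For concreteness, the statistical tests reported in this Results section come down to a handful of standard R calls. The sketch below is an illustration only, written against a hypothetical data frame `dat` with one row per participant and columns `own_age`, `age_group`, `estimate`, and `percept`; the authors' actual pre-registered script is available at osf.io/y3bqa/.

```r
# Illustrative sketch (hypothetical data frame and column names; not the
# authors' script).
library(dplyr)

# Age-group difference in the estimates of the ambiguous figure (one-tailed
# Welch t-test; the direction depends on the ordering of the factor levels).
t.test(estimate ~ age_group, data = dat, alternative = "less")

# Association between participants' own age and their age estimates.
cor.test(~ own_age + estimate, data = dat, alternative = "greater")

# Does the reported percept depend on the age group?
chisq.test(table(dat$age_group, dat$percept))

# Own-age/estimate correlation computed separately within each percept group.
dat %>%
  group_by(percept) %>%
  summarise(r = cor(own_age, estimate))
```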
Another one-tailed independent samples t-test was performed to compare the mean difference in the estimated age of the ambiguous figure between the young percept group (M = 32.89, SD = 13.24) and the old percept group (M = 41.00, SD = 18.09). This difference was found to be significant: t(237.81) = 4.05, p < .001, Cohen's d = .51, indicating that the older lady was perceived to be significantly older than the young woman. This difference is to be expected if participants can reliably indicate which of the two clearly differently aged women they perceived in the ambiguous picture. A chi-square test was performed to determine whether the frequency of the reported percepts differed between the age groups. The relationship was not significant (χ2(1) = .01, p = .91), suggesting that participants' age does not systematically influence how they perceived the figure. Therefore, the relationship between participants' own age and their age estimates of the ambiguous figure does not appear to be a consequence of the own-age social group bias. To determine whether the relationship between own age and estimated age was independent of the woman that was perceived in the ambiguous figure, a one-tailed independent samples t-test was conducted for each percept group independently. A significant difference in age estimates was found within each percept group. A one-tailed Pearson correlation between participants' own age and the age they estimated the ambiguous figure to be was calculated for each of the percept groups. Figure 3 summarizes the results. It shows a significant positive association between participants' own age and their age estimates within both percept groups; r(112) = .34, p < .001 in the young percept group (black), and r(130) = .17, p = .03 in the old percept group (gray). That is, participants provided higher age estimates for the ambiguous figure the older they were, regardless of whether they perceived the young woman or the older lady. Taken together, these results support the hypothesis that the positive relationship between participants' own age and their age estimates of the 'my wife/mother-in-law' ambiguous figure is best explained by the own-age anchor effect. Discussion One's age has been shown to affect how old one estimates the 'my wife and my mother-in-law' ambiguous figure to be [6]. In this study, we replicated the finding that older participants estimate the woman in the figure to be significantly older than young participants do. Contrary to what was originally suggested, we take this age difference to reflect the own-age anchor effect, not a social group bias toward processing similarly aged faces. If the difference in age estimation were due to the social group bias, we would expect a propensity among older participants to report seeing the older lady and a propensity among young participants to report seeing the young woman. We did not find this to be the case. The proportion of times the wife and the mother-in-law were reported was comparable in the two age groups. This makes it unlikely that a recognition bias toward age in-group members is driving the relationship between participants' own age and their age estimates of the 'my wife and my mother-in-law' ambiguous figure. After conducting the study, we learned about a similar result already obtained by Botwinick, Robin, and Brinsley in 1959 [43]. They presented male participants with the 'my wife and my mother-in-law' ambiguous figure and asked them to report what they saw within 90 seconds. Botwinick et al.
found that of all participants who reported either of the percepts, 76% of the young participants and 94% of the older participants reported seeing the young woman, which contradicts the pattern predicted if a social group bias were at play. Previous research has shown that the interpretation of an ambiguous figure depends on the fixation point and the critical features of the image that are attended to [35,44]. Fixating on a particular point of an ambiguous figure might make one process features that are critical for eliciting one interpretation of the figure, while fixating on other critical features will result in the other interpretation. The absence of an age difference in the reported percept suggests either that there were no systematic age differences in the manner in which the ambiguous figure was processed (as opposed to the systematic age differences in gaze that have been established for emotional face perception [45]), or that such differences were overridden by differences between the various applications on which the online survey was taken and the corresponding differences in the size of the retinal image. Our findings are more in line with the own-age anchor effect, according to which people tend to assimilate toward their own age when estimating someone else's age [24,25,26,27,28]. The own-age anchor effect does not require participants' perception of the ambiguous figure to be influenced by their own age. It predicts a positive association between participants' own age and their age estimates, irrespective of the woman they see. In other words, there should be a difference in age estimation between younger and older participants even if they perceive the same woman. This is what we found. A final result that appears to be more in line with the own-age anchor effect than with the social group bias is the observation that the age estimates for the ambiguous figure are rather low, even among older participants reporting seeing the older lady. The older lady in the ambiguous figure is presumably intended to be older than the average age of 44.71 years reported by this group of participants. This strengthens our view that participants' age estimates are driven by a cognitive bias to use one's own age as a yardstick, rather than by high-level social processes subconsciously affecting face perception. The brief presentation of an ambiguous figure might just constitute the uncertain circumstances in which the own-age anchor effect is likely to occur. When people have an age judgment to make, but little information is available to base that judgment on, it is not illogical for them to anchor it on a normative value such as their own age, which is salient and likely to be representative of the majority of people they interact with. While we interpret the observation that older participants estimate the 'my wife and my mother-in-law' ambiguous figure to be older than young participants do in terms of the own-age anchor effect, we nevertheless believe that social influences might play a role as well. Specifically, people might only assimilate others' characteristics to their own if these others are considered part of the in-group. Sörqvist and colleagues [26] showed that women demonstrate assimilation toward their own age when estimating the ages of male targets, but not vice versa. The authors speculated that this difference may result from men's tendency to categorize others as in-group or out-group members based on gender, while women do not. The findings by Sörqvist et al.
would then be the result of women having larger age in-groups than men. That is, women's age in-groups may include men, whereas men's age in-groups do not include women. In the context of ambiguous figures, this could be further investigated by comparing the extent to which the own-age anchor effect presents in men's and women's age judgments of the 'my wife and my mother-in-law' ambiguous figure and of a comparable ambiguous figure featuring differently aged men, such as the 'my husband/father-in-law' ambiguous figure [46]. The current study pertained to age estimates by US citizens. Future research could examine the own-age anchor effect in populations with different cultural backgrounds to investigate the generalizability of the effect. Individuals from individualistic and collectivist cultures might perform differently, for instance. Specifically, in individualistic cultures emphasis is placed on the individual, so it makes sense to expect individuals to regard themselves as the yardstick when judging others. Individuals from collectivist cultures, however, are taught to prioritize the group and their relationships with group members. This might make them less likely to consider themselves as a reference. In other words, individuals from collectivist cultures might be less susceptible to the own-age anchor effect than individuals from individualistic cultures. In general, future work should further investigate both the mechanisms underlying the own-age anchor effect and its ramifications. The effect has several practical implications, for instance in the context of eyewitness testimonies and the provision of age-restricted services. Data Availability The data that support the findings of this study are openly available on the Open Science Framework at osf.io/y3bqa/. The study was pre-registered (osf.io/xqc35). The data used in this article are licensed under a Creative Commons Attribution 4.0 International License (CC-BY), which permits use, sharing, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original authors and the source, provide a link to the Creative Commons license, and indicate if changes were made. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/. Figure 1 My Wife and My Mother-In-Law, by the cartoonist W. E. Hill, 1915. This media file is in the public domain in the United States. This applies to U.S. works where the copyright has expired, often because their first publication occurred prior to January 1, 1923.
A Study on Design and Control of the Multi-Station Multi-Container Transportation System: In considering the problem of saving space during the transportation of items from one station to another, for example in warehouses, factories, hospitals, etc., an automatic transportation system (ATS) that could take advantage of the above-ceiling spaces for the transportation of products is considered. Such a system guarantees that the activities occurring in the floor area will be maintained as usual. To achieve this requirement, the ceiling spaces of a building are used to construct an automatic multi-station multi-container (MSMC) transportation system, which can transport items from one place to another anywhere in the system. The system is designed to utilize the spaces above the ceiling and therefore has the advantage of saving floor space for transportation operations. This will increase the operational capability and improve the productivity of the industry in which the system is implemented. The entire transportation system includes (1) the essential conveying system (a functional conveyor module with a specified number of containers); (2) the control block that can monitor and operate the system; and (3) the sensor block for detecting and identifying the containers. The content of this article focuses on the introduction of the mechanical system (1); the control system (2); and the operating principle of the whole system (3). Introduction When transporting and distributing products in warehouses, factories, buildings, or hospitals, the problem of allocating space in a congested area at a reasonable cost has led to the need for an automatic transportation system (ATS). This ATS needs to operate efficiently in limited spaces and also match the current precision machining and manufacturing capabilities in Vietnam. To ensure automation and flexibility in product distribution, the authors propose a multi-station ATS with the feature of redirecting products so as to transport them to the exact stations. There are some automated transportation systems in the world, but these systems are unsuitable for application in Vietnam; they are nevertheless referenced in this section. Vietnam needs a localized product, which saves costs and is convenient for future maintenance. With the proposed concept of an MSMC ATS, several options were explored, namely the pneumatic tube system (PTS) [1,2], the Telelift rail-based system [3], and the SwarmRail overhead robot system [4]. However, these options are still not suitable and feasible for wide, large-scale application in many areas of the country because of the following objective limitations: Bollinger et al., in [1], indicated that the PTS has an impressive average speed of 5 m/s, but it is typically used for specimen delivery and its load capacity is not significant. Furthermore, Shibani et al., in [2], mentioned several disadvantages of the pneumatic capsule pipeline system, such as the inability to carry special delivery parcels due to the size limitation of the carriers, as each container can carry only about five pounds. Moreover, the inability to dispatch between intermediate stations is significant; continuous transmission between any two points, and to railroad companies without additional handling, is therefore unachievable. Another concept is the Telelift multi-station rail-type transport system, which is described in [3] by Halbig et al.
This advanced system consists of container trucks that can travel independently along electrical railway lines between stations. Such a rail-type system ensures a significant degree of flexibility and automation, but the current precision manufacturing capability in many local regions, as well as the investment budget of the investors, is not enough to meet its requirements. Regarding rail-type multi-station transportation systems, Görner et al., in [4], proposed an overhead robot system called SwarmRail. This system involves multiple robots moving along an overhead rail grid for indoor transportation. The significant advantage of the SwarmRail system is that the mobile robot units can change direction at an intersection of the rail grid without the need for track switches or other active parts. Therefore, many mobile units can operate independently in parallel along a variety of different paths, which adds flexibility to the overall operation. However, the complexity of this system is beyond the available manufacturing capability while still ensuring a reasonable cost. In addition, automated guided vehicle (AGV) systems were also considered, but they are unsuitable for confined spaces with crowded situations, such as hospitals or office buildings, and they may also require changing the layout of the existing building. In this paper, the design of a multi-station ATS is described. This ATS is more feasible, flexible, and easy to fabricate at a reasonable cost, and at the same time it integrates a line switching feature inspired by the rail-based system in [3]. The purpose of the proposed system is to perform transportation tasks that improve the productivity and mobility of factories, hospitals, and warehouses. This multi-station ATS would also ensure that countries whose manufacturing industry still has many limitations can build such a system independently. According to [5], the Vietnamese mechanical industry has been exposed to several limitations, such as backward technologies, lack of domestic raw materials, insufficient quality of human resources and weak management effectiveness, difficulties in mobilizing capital, and developing markets and competition. After considering the above issues, the design of an ATS based on serial and parallel hung conveyors was selected, as this is a fundamental system in the mechanical industry and is completely achievable. The ATS introduced in this article consists of three different mechanical modules, named the forward motion module, the line switching module, and the lifting module. These modules operate as independent and controllable mechanical units. With the proposed multi-station ATS, the control system needs to address the complexity of distributing multiple containers to various stations. In addition, the solution for product distribution at the switching points, where multiple containers may appear at nearly the same time, also needs to be taken into account. For individual container identification, Pane et al., in [6], applied radio frequency identification (RFID) technology associated with the conveying system. Vlad et al., in [7], proposed an arrangement of RFID readers along the conveying lines to determine the crucial locations in a complex system, while Osman et al. also proposed a system using RFID technology to identify products with RFID tags attached to their surface [8]. According to Luu et al., in [9], RFID systems are presently developing rapidly.
The authors in [9] also stated that the 13.56 MHz radio frequency is used in access control and payment systems, as well as for identifying goods in warehouse systems and books in library systems. Based on these previous ideas, an RFID system is selected for the identification of moving containers in the system. In general, the ATS control system proposed in this paper is designed using RFID technology, which can identify and locate the containers. For data processing and data management, a server with a database handles the logical transportation problem. The actuators, which are electrical motors, are controlled by a PLC and other expansion modules. In addition, the system needs to be designed to operate stably, even after a power failure, so a database is maintained to store and update information continuously. With the database, the control system is always able to access the current operating status, as well as which stations the containers are heading to. The authors' contribution is to integrate an active line switching feature into the traditional conveyor system with an appropriate control system and logic algorithm. Therefore, the products can be automatically redirected to other stations, without depending on a passive guidance mechanism or a manual method. This ATS, which employs overhead conveyors, addresses the mentioned problem with a more feasible and achievable approach, considering the domestic technology and logistics situation. The system adopts RFID technology to identify containers; RFID has been widely applied in many fields for identification and management. Therefore, the manufacturing process can be localized without depending on sophisticated foreign systems with excessive cost. The novelty of this study is to present a creative solution to the problem of automated freight. This solution includes a mechanical system of overhead conveyors integrated with active line switching mechanisms to change lines leading to various stations. Along with the design of this ATS, the controller, the signal processing block, the information management block, and the logic algorithm for route planning and distributing the containers are presented in detail. At the same time, this is also one of the pioneering systems in the country to solve the problem of multi-station indoor freight transport management, for which no previous system has been well explored for large-scale application. Moreover, the system also helps improve productivity, reduce labor costs, and simplify maintenance. This article consists of five core parts in the following order: 1. Introduction; 2. Materials and Methods; 3. Results; 4. Discussions; and 5. Conclusions and Future Developments. Overall Design In this section, a general concept of the ATS using hung conveyors is described, which provides a better look into the structure and the working principle of the entire system, as well as the method chosen for the control system. As mentioned above, the multi-station transport system includes three separate mechanical modules: the forward motion module, the lifting module, and the line switching module. Each mechanical module carries out its function and operates independently as a self-contained unit. A forward motion module consists of suspended conveyors of different lengths paired with each other to create the long travel line between stations.
This design also conserves electrical energy, as the numerous conveyors are not all powered at the same time but are activated sequentially. The slat top chain made of POM (acetal) is chosen for the conveyor's design; the frame of the conveyor is made from stainless steel, as it meets both firmness and safety requirements. Threaded rods are used to hang the conveyors on the beam frame at the ceiling. The support frame under the conveyor is made from a V-shaped steel bar. The size of the container is 250 × 160 × 300 mm, with a maximum capacity of 5 kg, and the weight of the steel container itself is 2 kg. To ensure smooth and efficient operation of the plastic chain-conveyor, the recommended operating speed is less than 0.6 m/s; as a result, the plastic chain-conveyor is designed to operate at a speed of 0.4 m/s. The lifting module serves to move the container in the upward and downward directions, as shown in Figure 1. The module is driven by a vertical chain transmission, carrying a sub-conveyor fastened to its actuator to guide the containers into the suspended conveyor system. The travel distance of the lifting module is 3 m, which ensures that its sub-conveyor aligns with the overhead conveyor system as well as maintains enough space for the operating area on the floor. The line switching module redirects the conveyed containers at the switching point, as shown in Figure 2. A ball screw drive is used in this module to ensure a smooth horizontal movement. The area of the Bach Khoa Research Center for Manufacturing Engineering (BK-RECME, Ho Chi Minh City University of Technology (HCMUT), Vietnam) is depicted in Figure 3 with three simulated stations. The element conveyors are designed with standard lengths of 1 m, 2 m, and 4 m to form the complete structure of the forward motion module. The forward motion modules are then combined with the other mechanical modules to build the entire system. For the control system, a user interface program is built to act as a server that manages and controls the container routes. The program is developed on a C++ platform, Qt Creator [10]. MySQL is chosen as the database for the system. The data from the database are not transmitted directly to the PLC but via the server: the server processes the data and sends messages to the PLC to enable the corresponding output ports, as indicated in Figure 4. The RFID reader sends the containers' name tags to the PC via TCP/IP. The type of RFID reader chosen is the RDM8540-Q-A, the frequency is 13.56 MHz, and the available reading range is less than 8 cm. The selected reader supports both TCP/IP and Wi-Fi protocols. The PLC CompactLogix 5370 1769-L24ER-QB1B of Allen Bradley was also chosen for the system. When the RFID tag mounted on the container lid is scanned, the data from the RFID reader are transmitted to the PC via TCP/IP for processing and for deciding whether the container is allowed to move or not. To limit the travel length of the mechanical modules, proximity sensors are used. To solve the problem at the switching point where multiple containers meet, proper control algorithms are necessary. The process continues until the container reaches the station. Figure 5 describes the mechanical principles of the plastic chain-conveyor, in which (1), (2), and (3) are the electric motor, the chain drive, and the chain-conveyor, respectively. The design of a slat top chain-conveyor is selected based on [11].
The preliminary design parameters are the average speed, v = 0.4 m/s, and the maximum workload, m = 7 kg, including the weight of the product. Figure 5. The structure and principle diagram of the chain-conveyor [11], with A and B being the designed distances between the driven shaft and the roller C; items (1), (2), and (3) are the driven motor, the chain transmission, and the chain-conveyor, respectively. Mechanical Design In the lifting module, there are two separate mechanical components: the chain drive for the upward and downward motions, and a sub-conveyor mounted on the actuator. The chain drive uses a counterweight on the opposing side of the actuator to balance the load and reduce the required motor torque, as illustrated in Figures 6 and 7. The preliminary parameters are as follows: the distance between the two shafts is 3 m, and the upward and downward speeds are both 0.2 m/s. The total weight of the sliding actuator and the sub-conveyor is 20 kg. The maximum weight of the loaded medicine container is 7 kg. The counterweight is calculated as the weight of the actuator plus 50% of the maximum load, because the system does not always lift the maximum load and at times runs freely. Hence, m_C = 23.5 kg ≈ 24 kg. Figure 8 shows the mechanical diagram of the line switching module. The transport system utilizes two types of line switching modules, the 2-line and 4-line types, which operate similarly but have different screw lengths, 0.6 m and 1.2 m, respectively. The 2-line switching module acts as a transfer unit between the sending and receiving processes, while the 4-line switching module acts as a line-directing unit from one station to another; it therefore decides the exact route along which the medical containers are transported. As mentioned, the overhead conveyor system is designed to utilize the ceiling as its workspace. In Figure 9, the arrangement of the overhead system is represented. Threaded rods allow the conveyor support frame to be adjusted to calibrate the height between the conveyor surfaces. Figure 9 gives a brief introduction of the hung conveyor system: (a) the structure of a support frame using threaded rods; (b) the initial structure of the hung conveyor; (c) the actual structure of the hung conveyor. Method to Evaluate the System For the assessment of the complete system, technical tests for a comprehensive evaluation of the mechanical modules, along with specific experiments on the whole system, will be carried out. These experiments evaluate the following factors: (1) The stability and rigidity of the overhead system: during the operating tests, vibrations, shaking, or displacements are noted to assess how they affect the whole system. (2) The control algorithm: the transport operation depends significantly on this factor, especially the control algorithm for the line switching module that transports the container to the desired stations. Therefore, simulations as well as experiments are performed to check whether the control algorithm is appropriate or whether any logic error occurs. (3) The control model: as shown in Figure 4, the interaction between the server, the PLC, the access to the database, and the signal receiving unit (sensors and RFID readers) is also evaluated through the experiments. (4) The performance of the system: for a comprehensive assessment that covers all situations of the system operation, the experimental transport process is performed according to specific criteria, stations, and situations.
The purpose of these tests is to evaluate how the mechanical components, the control block, the management block, and the sensor block interact with each other, and whether any issues require improvement. The detailed experiments are presented in Section 3.3, along with the results. The Characteristics of the Forward Motion Module As mentioned above, the forward motion conveyor module consists of several smaller forward motion conveyors arranged to form a proper line. With multiple component conveyors, the module can certainly transport many containers at the same time. Two RFID readers are placed at both ends of the module to help identify a container entering or leaving the module, as shown in Figure 10. Moreover, these RFID readers can determine the number of containers on the module at any given time. To avoid activating the entire forward motion module, which includes multiple conveyors, at the same time regardless of whether there are many containers or only one, electrical counters and proximity sensors are used. With the detection of the proximity sensors and counters, the number of containers on each conveyor is determined. Each conveyor is then only activated when there is at least one container on it (counter value of 1 or more); otherwise, the conveyor is not activated. This process is described in Figure 11 below. The Characteristics of the Lifting Module and the Line Switching Module The two modules (the lifting module, Figure 12, and the line switching module, Figure 13) are controlled in a sequential manner. The transportation sequence is as follows: (1) the sub-conveyor of the module reaches the right position; (2) the goods move into the lifting module or line switching module; (3) the lifting or line switching operation is initiated; (4) the goods are transported into the next forward motion conveyor module. The module can only handle one container at a time. Therefore, if the module is handling one container, the previous modules must wait until the module completes the transport sequence. Conversely, if there is no container left to be transported (lifted or switched), the module stops. Two RFID readers are mounted at both ends of the module to identify whether a container is entering or leaving the module. In addition, the proximity sensors help to limit the movement of the conveyor module. (a) The RFID signal is lost during the system's operation: the RFID signal affects the control operation of the system, i.e., the processing of the server. As for the ATS, when the signal from an RFID reader is lost, the system stops (because the scan loop confirming the connection between the RFID readers is performed continuously). The data of the goods are the name, location, path, etc. The data of the conveyor module comprise the states, the number of consignments, etc. The data of the RFID sets are the connection state and reading state. All three types of data mentioned above are stored in a database and are accessible through the GUI. If the system loses an RFID signal, the user can locate the disconnected RFID reader. The details of data storage in the database are given in Section 3.2.1. (b) The proximity sensor's signal is lost during the system's operation: the proximity sensor helps limit the conveyor's travel distance. Therefore, whenever a sensor loses its signal, it affects the operation of the conveyor module (the conveyor exceeds its travel distance), i.e., the operation managed by the PLC.
To overcome this problem, additional sensors acting as limit switches are needed. To be specific, when the proximity sensor loses its signal, the limit switch cuts off the power of the entire module to limit further damage to the system, as described in Figure 14. The transportation system is built with functional blocks as described in Figure 15: • The role of the central block: - The server receives the data of the sensor block and the operating status of the conveyor modules from the control block for further processing. - It sends the processed data to the control block to operate the conveyor modules. - It sends the processed data to be stored in the database. - The database stores the data of the containers, conveyor modules, and RFID sets, so no data are lost in the event of a power failure or system shutdown. • The role of the control block: - The PLC receives data from the server and then transmits them to the remote I/O in order to operate the conveyor modules corresponding to the stations. - It acts as an intermediary to receive the proximity sensor signals and send them back to the server. • The role of the sensor block: the RFID sets and proximity sensors limit the travel distance of the conveyor and determine the number of containers on a module. • The role of the motor block: the transmission system for the entire transport system. • The role of the source block: the power supply for the entire transport system. Together, the server, PLC, sensor system, etc. help to control the operation of the system automatically. Simulation of the Routing Algorithm The simulation of the transportation system includes the following processes: (a) Make a database connection and send the data back to the GUI. In a real system, there may be a risk of power failure; the requirement is that when the power is turned back on, the system will continue to work effectively. Therefore, during runtime, the system constantly updates the data, helping the system keep operating properly in case of a reboot. The parameters are updated in the database according to three tables: the containers table (also called the "boxes" table), the counters table, and the moving modules table (Figure 16). (b) Handling of the logic at the position of the lifting module. The lifting module represents a link between a ground-level conveyor system and a ceiling-mounted conveyor system. The problem to be overcome is when a container arrives and needs to be lifted but the sub-conveyor of the lifting module is already occupied. According to Figure 17, when container 2 reaches the waiting position for the lifting module, container 1 is already present on the lifting module (module 18). At this time, the waiting variable of container 2 is turned ON and the PLC turns off the previous forward conveyor module. At that point, both containers 2 and 3 stop. After container 1 is transferred to the next forward conveyor module (module 19) and the sub-conveyor of the lifting module has returned to HOME, container 2 is transferred to the lifting module and container 3 enters a waiting state. Therefore, the variable for the waiting status of container 3 is turned ON. (c) Handling of the logic at the position of the 2-line switching module. A 2-line switching conveyor module is a module that moves the containers in and out of the stations. Situation 1: Containers only travel from the stations out. The system lets the containers go out sequentially according to the priority order, similarly to the lifting module.
After one container moves to the next module, the sub-conveyor of the line switching module returns to HOME to pick up the next container. The process is described in Figure 18 below. Situation 2: A container needs to go in while another container is going out, or vice versa. As shown in Figure 19, at the stations there is a bidirectional forward motion conveyor module, and only one direction is activated at a time. A 4-line switching module is the transfer unit at the meeting point of the lines, which helps move and separate containers. The key issue is to prioritize the order of the containers when several containers are present at the 4-line switching module (Figure 20). The solution is that the container that gets there first goes first. However, a distinctive feature is that returning empty containers have higher priority than those with an item inside (a short illustrative sketch of this dispatch rule is given below). Experimental Results After researching and designing the mechanical system and the control algorithm, some assumptions and constraints were set to verify the ability to transport containers among the stations: + Some assumptions of the system: (1) The containers can move through the gap at the transition between two conveyors. (2) The difference in height between the surfaces of the conveyors is negligible and can be ignored. + Some constraints of the system: (1) Safety factor (the ceilings of buildings must be sufficiently rigid, which relates to construction site inspection; the system must be examined for durability to select the appropriate material). (2) The overhead conveyor system, after being installed on the truss, must operate effectively, without shaking or jamming the container during transport. (3) The system needs to be expandable and easy to install. The number of inputs/outputs for the control module needs to be reduced to a minimum. (4) Containers must stop at the exact position (entrance and end positions of modules, middle of the sub-conveyor). (5) The containers must always be in the middle of the conveyor width along the length of the conveying stroke (to ensure that there are no jams or entanglements during the transition where the conveyors are perpendicular to each other). From the assumptions and constraints of the system, we conducted several experiments to test the efficiency, precision, and accuracy of the whole transportation system. The experimental process is carried out in the following steps: (1) Building a control function for the lifting module and the line switching module, and performing experiments to check the operation of each mechanical module separately before installation. The experimental test of each module helps us observe any mechanical features that do not meet the previous assumptions and constraints. This issue is explained in detail in the Discussion section. After completing the installation: (2) Performing experimental transport processes according to the designed algorithm to check the correctness of the proposed control model and the proposed control algorithm. This experiment confirms the feasibility of the research in designing an MSMC ATS to help transport containers from one station to others and vice versa. The Experimental Test to Evaluate the Operation of Each Mechanical Module As mentioned in Section 2.3, before installation, it is necessary to perform technical tests concerning the algorithm that we have built for each module. As a result, the lifting and line switching modules function effectively according to the designed algorithm.
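Returning to the dispatch rule at the 4-line switching module described above (first come, first served, with returning empty containers taking priority), the sketch below expresses it as a small R function purely for illustration; in the real system this logic is implemented on the PLC and the server, and the container records and field names used here are hypothetical.

```r
# Hypothetical illustration of the dispatch rule at the 4-line switching
# module: containers are served in order of arrival, except that returning
# empty containers take priority over loaded ones.
dispatch_order <- function(waiting) {
  # `waiting`: one row per waiting container, with (assumed) columns
  # `id`, `arrival` (arrival time at the switching point) and
  # `empty` (TRUE for a returning empty container).
  waiting[order(!waiting$empty, waiting$arrival), ]
}

# Example: container B arrives last but is empty, so it is dispatched first,
# followed by A and C in order of arrival.
waiting <- data.frame(
  id      = c("A", "B", "C"),
  arrival = c(1, 3, 2),
  empty   = c(FALSE, TRUE, FALSE)
)
dispatch_order(waiting)
```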
Figures 21 and 22 illustrate the mechanical modules during operation. The Experimental Tests to Evaluate the Operation of the Entire System The whole system after installation at BK-RECME is shown in Figure 23. For the experiments, a prototype system with three stations is tested. In addition, the control algorithm is also designed to handle a transportation system of m containers (m > 3). In this practical setup, the whole system includes three containers. With three stations, there are six corresponding routes. The containers are transported on separate routes to ensure the consistency of the algorithm in operation. The consistency manifests itself in two ways. First, the sub-conveyors of the lifting and line switching modules are checked to confirm that they return to the HOME position, as mentioned above. Second, the line switching modules need to take the entry order into account, as explained in the simulation (this occurs on all routes). As a result, the containers completed all six transport routes from station to station. The process of transporting a container along the route from station 3 to station 1 (one out of the six routes) is shown in Figure 24. According to Figure 24a, the transportation process is as follows: The user places the container at station 3, as shown in photo 1 in Figure 24, then the container is transported into the lifting module as in photo 2. After that, the container is lifted up as shown in photo 3. Next, the container exits the lifting module to enter the 2-line switching module, as shown in photos 4 and 5. After the container exits the lifting module, the lifting module returns to the HOME position. The container then exits the 2-line switching module as shown in photo 6; when the container is completely out, the 2-line switching conveyor returns to HOME. According to Figure 24b, the container subsequently continues to the 4-line switching module and enters the module, as shown in photos 7 and 8. The container goes to the corresponding line and then exits, as shown in photo 9. Next, the container waits for the lifting module, as shown in photo 10. After that, the container enters the lifting module to be lowered to the lower limit, and the container goes to station 1 as shown in photos 11 and 12. After that, the experimental transportation was performed many times with three containers along optionally chosen travel routes to check for logical errors in the algorithm. During transport, the communication between the PLC devices and the server and the process of updating data into the database were monitored. After multiple operations, tests, and modifications, Figure 25 below introduces a typical testing process of our system. According to Figure 25a, the process of transporting the containers along optionally chosen routes is as follows: The user places container 1 at station 3, as shown in photo 1 in Figure 25, then container 1 moves into the lifting module as shown in photo 2 and is then lifted to the upper limit, as shown in photo 3. Meanwhile, the user continues by placing container 2 at station 3, and container 2 goes to the waiting position for entering the lifting module as shown in photo 4. Then, the lifting module travels to the upper limit and container 1 goes out of the lifting module to enter the 2-line switching module, as shown in photos 5 and 6. After container 1 is out of the lifting module, the lifting module returns to HOME to transport container 2.
Next, container 1 is switched to another line, and when it reaches the destination, container 1 goes out of the 2-line switching module as shown in photos 7 and 8. When the lifting module at station 3 returns to HOME, container 2 is moved into the lifting module and lifted to the upper limit, as shown in photos 9 and 10. According to Figure 25b, after being lifted to the upper limit, container 2 goes out of the lifting module and waits for the 2-line switching module to return to HOME, as shown in photos 11 and 12. At the same time, the user places container 3 at station 2, and it moves into the lifting module and is then lifted to the upper limit as shown in photos 13 and 14. After being lifted to the upper limit, container 3 goes out of the lifting module to move into the 2-line switching module as shown in photos 15 and 16. Next, container 3 goes out of the 2-line switching module as shown in photo 17. At container 2's position, container 2 enters the 2-line switching module as shown in photo 18. When the destination has been reached, container 2 exits the line switching module. Simultaneously, container 1 arrives at the 4-line switching module's position and waits for the sub-conveyor to arrive at the container pickup position; at the same time, container 2 also stops, because container 1 and container 2 are on the same conveyor line, as shown in photos 19 and 20. According to Figure 25c, container 1 moves into the 4-line switching module and then goes out, as shown in photo 21. It then goes to the waiting position for entering the lifting module, and container 1 moves into the lifting module as shown in photos 22 and 23. After that, container 1 is lowered to the lower limit and goes to station 1 as shown in photo 24. At the 4-line switching module, container 3 arrives before container 2. However, container 2 enters the 4-line switching module first, as it has a higher priority than container 3, as shown in photo 25. After reaching the destination, container 2 goes out of the 4-line switching module as shown in photos 26 and 27. Similarly to container 1, container 2 continues to the 2-line switching module and then goes out into the lifting module, as shown in photos 28, 29, and 30. According to Figure 25d, container 2 is lowered to the lower limit and then goes to station 2, as shown in photo 31. Back at the 4-line switching module, container 3 enters the module, goes to the corresponding line, and exits the module as shown in photos 32, 33, 34, and 35. Next, container 3 goes to the waiting position for the lifting module, then enters the module, is lowered to the lower limit, and goes out to return to station 1, as shown in photos 36, 37, and 38. The transportation process of container 1, container 2, and container 3 is then complete. Evaluation of the Experimental Results The experimental transportation of three containers among three stations along optionally chosen routes has proved that the proposed algorithm functions efficiently. At the same time, the proposed control model is able to manage the operation of the entire transportation system. The experiments demonstrate that vibration and noise occur when the system is operating. These problems are addressed in the Discussions section. The assumptions and constraints are satisfied after testing and modification.
The details of the modifications are also addressed in the Discussion section. Discussions In Figure 3, the proposed control algorithm is applied in the real system to test the ability to transport containers according to the system built at BK-RECME. During the research and design process, the authors considered and solved several problems that would have resulted in the containers being unable to complete the travel routes (see also Figure 29), such as (1) the gaps at the transitions between adjacent conveyors; (2) the risk of containers getting stuck at the 90-degree transitions between perpendicular conveyors; (3) the placement of the proximity sensors so that the containers are reliably detected; (4) the reliable reading of the RFID tags on moving containers; and (5) the expandability of the system, which makes it necessary to ensure easy installation and a minimum number of inputs/outputs of the control modules. In other words, solving these problems helps the system satisfy the assumptions and constraints set. • For problem (1), to address the gaps between the conveyors, segmented transfer plates could be used to help fill in the gaps so that the containers can pass through normally. • For problem (2), this is a featured design of the system; instead of using a curved conveyor, we arrange the conveyors perpendicular to each other to save space and avoid cumbersome construction in designing the overhead system, as parallel curves would require different curvature radii. To ensure the containers do not get stuck in the transition at the 90-degree turn, additional guiding rails are used to smooth and adjust the movement of the containers at those positions. • For problem (3), we design the frame holding the proximity sensor PR12-4DN [12] to ensure that the containers are detected when they pass through. • For problem (4), the selected RFID reader is the RDM8540 TCP/IP + WIFI RFID reader series, which can recognize RFID tags within 8 cm [13]. When the conveyor module is shut off, the container's inertia may cause the container itself to fall off the conveyor. Therefore, the RFID readers need to be placed at the appropriate position in each conveyor module. On the other hand, to ensure the containers are detected, we propose using more than one RFID reader at each position and more than one RFID tag on each container. • For problem (5), the system is designed in modules to ensure interchangeability, replaceability, and the ability to expand to a larger-scale system. Therefore, the control system also needs to follow the modular design, with a separate control function for each mechanical module. The algorithm built in combination with the model in Figure 4 makes it easier to control the system. A special feature is that instead of using one output for each conveyor, which would lead to an extremely large number of system outputs, we only use one output for the whole forward motion module, which consists of multiple component conveyors. Based on the model in Figure 3, we decided to combine all the conveyors between the line switching modules and the lifting modules into one control module. The RFID readers are used to mark the two ends of a module, as in Figure 10, and to minimize the outputs for the control module. The experimental results from Section 3.3 show that (a) the lifting and line switching modules operate properly according to the proposed algorithm, and (b) the system can transport containers along the selected routes, and the control model helps control and manage the data of the whole system. To further clarify these points, we consider the following comments: • As for experiment (a), during the operation test, we also check the sensor positions, as mentioned in Section 3.1, to ensure that the conveyor operates within the exact limited range.
As for the lifting module, it is also necessary to adjust the position of the limit switches to ensure that the vertical movement of the sub-conveyor does not exceed the boundaries, which would otherwise lead to a collision between the sliding actuator and the sprocket. • In addition to experiment (a), we also test the delay between switching the power off and reversing the conveyor direction, and how it affects the operation of the conveyor. To extend the operating life of the motor and guarantee electrical safety, the function for reversing the motor is only activated after the motor has been turned off. Since the operation of the lifting module and the line switching module is entirely sequential, we use the PLC's sequential function chart (SFC) [14] programming to build the control functions. • As for experiment (b) in Section 3.3, we provide a series of figures regarding the experimental process of transporting three containers along three optionally chosen routes. It shows that the algorithm solved the relevant problems, such as waiting when another container is already in position and the lifting or switching module is unavailable. The algorithm takes the priority order at the 2-line and 4-line switching modules into account, in which the containers that come first are moved in first. The result shows that the proposed control algorithm is effective. • As for the control model of Figure 4 in experiment (b), the experiment also shows that the data are updated in the database whenever a signal is detected. The signal from the proximity sensors is sent by the PLC to the server via the TCP/IP protocol [15] for processing. The server, after synthesizing the data from the proximity sensors and the PLC, processes them and sends the result back to the PLC to execute the system's operation. After that, the data continue to be updated in the database, and the cycle repeats. • As emphasized above, the database only requires updates when there are changes in the data. Because the data synchronization interval is set by MySQL to 200 ms by default, the server would be delayed during data processing if updates were continuously required, as the system is not multi-threaded; within the 200 ms window of transmitting data, the server cannot process the data. • In addition, experiment (b) also helps us check whether the system can function normally again after being turned off. Based on the data recorded in the database, whenever the system is powered off and then turned back on, the previously stored data can be retrieved. Therefore, the system can continue activating the corresponding mechanical modules and then update new information to the database, such as the IDs of the containers and their respective routes, etc. Conclusions The ATS using overhead conveyors has proved its feasibility when applied at BK-RECME. In summary, this paper describes the design and operating principle, as well as the achievements of the implementation of the proposed ATS. The results obtained from this study have shown that this system is feasible and can solve the transport and distribution problems mentioned in this paper. The experimental results satisfied the assumptions and related constraints, which were analyzed in the Discussions section.
After researching, assembling, and experimenting with the system operation, and examining the performance of the control block, the management block, and the database, we identified several shortcomings, as follows: (1) The system vibrates while operating: during system performance testing, vibration is generated because the conveyors are not linked together, but rather are independent mechanical components. As a result, the conveyors oscillate slightly in a narrow range, creating vibrations in the whole system. This problem is combated by creating an interlink frame between the conveyors, helping to keep the whole system in a unified and stable block. (2) Noise is generated by the gearing between the sprocket and the chain of the plastic chain-conveyor. This is a typical characteristic of chain-conveyors during operation, and some further innovation will be required to overcome this phenomenon. As a result, this noise may cause discomfort to people in the operating environment, or may not satisfy the criteria for application in office buildings, hospitals, etc. (3) The data transmission line between devices (server, PLC, and database) could be upgraded: an increase in the number of stations in a system will lead to many more data and system variables. Hypothetically, the more variables there are, the more complicated and difficult it becomes to communicate data through manual TCP/IP. Consequently, the system could become much trickier to control due to the higher level of complexity, although the cost of using the TCP/IP protocol is completely reasonable. Fortunately, the prototype system in this paper consists of only three stations and can be operated with the manual TCP/IP protocol with little problem. (4) The system has only been tested in a three-station prototype: as the system was implemented at the Bach Khoa Research Center for Manufacturing Engineering (BK-RECME, Ho Chi Minh City University of Technology (HCMUT), Vietnam), it was built on a scale of three stations (according to the model in Figure 3). The next target is to expand the system on a larger scale with n stations (n > 3) to help transport m containers. This system has been researched and designed to solve the problem of transporting goods by utilizing the space above the ceiling, with the feature of automated line switching to coordinate products flexibly between stations. Although this system is not advanced when compared to other modern systems in the world, it has effectively solved the problem with a more feasible approach. With this system, the load capacity and productivity are far beyond those of the pneumatic system, and the flexibility is also higher than that of the AGV system. This ATS also avoids space problems in areas where many people pass by, because many buildings are already designed without appropriate aisles for AGVs. Another important innovation is the control algorithm of the system. The authors have come up with a control algorithm for routing and for handling the situation in which many containers are moving in opposite directions at the line switching point. The proposed algorithm is one of the key points determining the performance of the system. This algorithm has opened up many development directions for transport using multi-station systems, as the success of the line switching algorithm sets the foundation for expanding the system to more stations, more containers, and more lines.
Finally, the solution of using counters in combination with the conveyor has solved the problem of energy savings, especially when expanding the system to a larger scale with even more conveyors. This solution ensures that a conveyor will only be activated when the number of containers it carries is more than one, regardless of how many conveyors there are in the entire system. In conclusion, this research has contributed a new solution for freight as well as logistics. A particularly important achievement of this research is the integration of the line switching feature along with the appropriate logic algorithm into the conveyor system, making the transportation process more flexible and space-saving, while still ensuring reasonable costs. The potential of this system is a commercial application in automatic transportation of a variety of products in many different environments, such as factories, office buildings, hospitals, etc. The potential solution is feasible. This proposed system is developed considering all the available conditions in Vietnam, with the key element being the control algorithm at the location of the line switching module to serve the process of moving containers between stations. Future Developments The mechanical structure of the absolute system has been fully designed and installed while ensuring efficient operation. In addition, an appropriate algorithm for addressing the distribution problem has been completely proposed; therefore, the system could op-erate with no significant logical error. The effective absolute system has been completely manufactured and installed; therefore, proper tests were carried out, ensuring the practical applicability of the system. With the shortcomings stated in Section 5.1, the following development of the system is proposed: • As mentioned in Section 5.1, to solve problem (1) with the vibration, we have proposed a design of an interlink frame for the component conveyors. These interlink frames also need to be designed as a standardized module for easy installing and removal when access to component conveyors is required. • For noise reduction (2), when considering the working environment that takes human factors into account, such as factories, warehouses, or hospitals, other alternatives for suitable types of conveyors could be implemented, such as PVC, roller conveyor, etc. On a side note, the team of authors also propose a future mechanical development, which is the development of aluminum casting technology. By casting an aluminum conveyor frame, the difficulties of tolerances and assembling could be reduced. It also helps the system run smoothly. This technique is already feasible and can also be applied in large-scale production to reduce costs. • To solve problem (3) regarding data transmission, currently, the Open Platform Communications (OPC) standard is an effective tool for easier data communication. In the next steps, the OPC standard will be applied so that all data updates will be synchronized to the OPC server, making it easier to manage and control the system. • Regarding issue (4), as mentioned, to expand the system, there are a few problems that needed to be considered: (a) The problem of management and quality control in mass production with generous quantities. (b) The appropriate plan for installation and maintenance of the overhead conveyor system, while still ensuring rigidity and conducting the deviation when the transport distance increases. 
Conflicts of Interest: The authors declare that there are no conflicts of interest.
2022-03-09T16:15:22.145Z
2022-03-04T00:00:00.000
{ "year": 2022, "sha1": "bda6f895b4064b49b35fec34e3b2571605ac068d", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3417/12/5/2686/pdf?version=1646631140", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "7600bcb5e7db2006bb76b0c2f7c4ce7baa7db51f", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [] }
73523150
pes2o/s2orc
v3-fos-license
Technical Components of Total Factor Productivity Growth in Malaysian Manufacturing Industry This article attempts to disaggregate and explore the components of TFP growth that contribute to changes in output, scale of production, and allocative efficiency and technical efficiency of the Malaysian manufacturing sector. The total factor productivity (TFP) concept defined as total output per unit of all inputs used in the production of an industry has gained a prominent place in academia. The investigation on TFP growth is obviously useful for identifying sources of output growth in the development of an industry. The TFP growth is often interchangeably understood as the technical progress or changes in technology as the sole contributor to economic development. Nonetheless there are other factors contributing to its substance. Knowledge on these technical changes would help decision makers to realize the strengths and weaknesses that contribute to the growth and development of an industry. Alternatively this research would be more beneficial in the case of cross-industry or cross-country comparative studies in order to plan for developmental goal. In such a case a model industry or country can be chosen that exhibits special growth features. Introduction Growth and development, and later sustainable development had always been in the forefront of attainment goals of world's economies.Historic attentions were given to multifaceted issues of growth of prominent economic constraints in achieving development objectives.In the oldest profession of agriculture, Robert Maltus [1] first alerted the limitation of food supply because of the growing population leading to surging man-land ratio, which then became the doctrine in food economy.David Ricardo [2] was the first to detect diminishing social well being as a result of rising land rent due to a continued decline of prolific land in cultivation and the dependence of agricultural sector on inferior productivity land.Growth in production of essential food commodities and other agricultural products had taken a turn when potential of land for expansion was constrained by its supply.The ultimate trust of economic growth in production was principally capital while labor was secondary in importance.The significance of human productivity and innovativeness was only recognized during the 1950s and early 1960s following the marvelous contribution of education on economic development [3].Human development particularly in education, health, nutrition and fertility has since become the focus of most developing nations on the agenda of human resource development and growth.Nowadays, most nations recognize that intellectual capital is essential for sustaining growth as the capacity of human mind, its creativity is limitless and its contribution to innovation is endless.In most cases labor fails to reflect its true influence on the total production for reason that data generally are not available on those skilled and non-skilled workers.Since human capital is an essential ingredient in boosting productivity the World Bank [4] had suggested the use of human-capital adjusted labor (H), which is defined as 0.1 e S H LP  where L is the working-age population, P is the participation rate and S is the number of schooling and e 0.1S is a weight used to reflect the impact of education and the size of labor force on the labor input, which is assumed to be exponential.Several findings particularly in Korea suggest that each additional year of schooling raises labor's 
productivity by 10 percent.Even though education is relevant to the productivity improvement it will not be so for all situations.With higher education the farmer's productivity could have declined due to the possibility of better off farm employment.Educated labor tends to be less committed to agriculture and thus its productivity. Experience and on job trainings can be another factor that contribute to higher productivity.Health condition of the workers deteriorates with limitations of medical facilities which in turn impede worker's productivity.Types of education and trainings that contribute directly to the development of human capital should be differentiated from the ordinary workers.Labor should at least be classified according to skilled and non-skilled labor.No attempt is made in the current study to classify labor according to skilled and non-skilled because of the lack of time series data.Total factor productivity (TFP) growth is the residual of output growth not accounted for by capital and labor.The derivation of TFP growth components is relevant in order to unveil the unexplained sources of economic growth beyond that reflected in the production function.The residual growth component from the microeconomic perspective can be attributed to technical progress, technical efficiency, scale of firm operation and other socioeconomic factors not captured by the variables used in the production function.In macroeconomics, sources of TFP growth can be categorized into education and training, changes in demand, economic restructuring, technical progress and capital structure [5].The objective of this paper is to disaggregate TFP growth mathematically from its original definition as suggested by Solow [6].This article begins with the theoretical development of TFP growth which was extended to the derivation of technical components.The final section deals with TFP growth application using empirical data. Framework for TFP Measurement The measurement of TFP growth is closely associated with the growth of economic activities.One could not understand TFP growth without first understanding the methodology for estimating growth.The growth concept involves time and its measurement can either be discrete or continuous depending on the extent of pronouncement of the state variables.According to Dowling [7] p. 
120, "exponential functions depict constant rates of discrete growth, i.e., growth that takes place at such discrete intervals as at the end of the year or at the end of the quarter."An example of a discrete growth is given by , where y t refers to the current year variable, y o is the base year variable, r is the rate of growth and n is the number of years of the target projection.This discrete growth equation is less pronounced than the continuous growth equation which is represented in natural exponential function such as that of   y y  .Through-out this article the discrete growth equation will be used for the estimation of the relevant policy variables in the TFP growth.Theoretical formulation of the TFP growth is based on a simplified production process with only capital (K) and labor (L) utilized as inputs in the production of a commodity, q.Defining q as the product of   A t and   , f K L where   A t refers to all influences that go into determining output Q besides capital (machine hours) and labor (labor hours).This mathematical relationship can be presented as The concept of the total factor productivity (TFP) is defined as total output divided by the total inputs.Based on the above simple production function TFP which is   A t should be equal to total output divided by the total input function Assuming that all variables are a function of time, the derivative of output with respect to time yields the following equalities, and expand the derivation of capital and labor with respect to time yields the equation below, Dividing through the above equation by Q and setting   , f K L f  , the output growth equation is obtained as As shown in (2) the formulated identities yield the required total output growth that will be used for the estimation of TFP growth.Terms of Equation ( 2) can be further simplified to arrive at the final result of the output growth equation as in Equation ( 3) where is the total factor productivity growth, is the output elasticity of labor, while K K  and L L  are capital and labor growth rates respectively. The equation for TFP growth is defined as in Equation ( 4) below technical inefficiency of the industry i and a small value is otherwise.Production function with capital (K) and labor (L) as inputs and the two error terms is presented as Disaggregation of TFP Growth This is one of the techniques whereby TFP and its growth are normally estimated.Obviously there are other techniques; one of which is the Malmquist index as discussed in Coelli [8].Malmquist index is used to estimate TFP change and to decompose such change into technical change and technical efficiency change using Data Envelopment Analysis (DEA) computer package.The DEA is a special form of linear programming, which comes under operation research and hence it is nonparametric and cannot be tested for validity.In practical applications it is more flexible because DEA is more versatile since it can be easily applied to cases of multiple outputs and inputs.The current study adopts the stochastic frontier production approach that would incorporate technical efficiency change, technical progress and the scale effect as shown in Equation ( 5).Jesus Felipe [9] asserted that the use of stochastic production frontier makes possible the decomposition of the TPF growth into technical progress and technical efficiency change. 
The former relates to the shift in the production frontier, that is, the best practices achieved by the industry's output, and the latter associated with changes attributable to the improvement in managerial practice, workers becoming more skillful through experience and a variety of other reasons that relate to a given technology. Technical inefficiency (TE it ) is further defined as the exponential of negative U it that is .The rate of technical inefficiency growth is obtained after taking the logarithm of Equation (6.1) and differentiating it with respect to time.The technical inefficiency result is shown in Equation (6.2) which will be used throughout the discussion henceforth.The estimation of technical inefficiency for the manufacturing industry is obtained from the stochastic frontier production function. Disregarding it subscripts for the time variables Equation (5.2) is usually presented in its simplified identity as TFP growth in actual estimation uses Equation (8) where s K and s L denote the share of capital and labor on production respectively.Substituting Equation ( 7) into (8) the TFP growth can be further disaggregating into additional components as derived in Equation ( 9) Decomposition of the TFP growth acknowledges the reference of the article written by Rukmani Gounder and Vilaphonh Xayavong [10].These authors, however, did not show the TFP growth mathematical proof like many articles on the subject.Coelli et al. [11] utilized the production function with the two types of error terms, "U it " which denotes the technical inefficiency of the industry i in year t, and "V it " the white noise systematic error term.They are independently and identically distributed with 0, N  .A higher value of U implies an increase in the . and subtract 1 from equation (10.2) above and with manipulation and rearranging of the terms equations (11.1) and (11.2) are finally arrived at as shown below:   where , and Equation ( 12) is utilized in the final estimation of TFP growth with the help of Excel spreadsheet.Under an optimal production and assuming that the industry operates in a perfectly competitive market the share of capital and share of labor in production are identical to the output elasticity of capital and output elasticity of labor respectively.Defining profit (π) as the difference between total revenue and the total cost, share of labor (s L ) and capital (s K ) under the assumption of perfect competition should be equivalent to K s rK pq  and L s wL pq  where p is price of output, r and w stand for unit price of capital and labor respectively. Technical progress (TP) is normally estimated from the effect of time on the total output, which is obtained from the first term on the RHS of Equation (12).The second term following technical progress represents the return to scale level of the industry with respect to changes in capital and labor over time. Since double logarithmic function is used in the analysis, RTS is equivalent to the sum of capital and labor coefficients which are their respective elasticities.For a sum of elasticity greater than one, the production operates under increasing return to scale (RTS).While a sum of one is a constant RTS and that less than one is de-creasing RTS. 
This third item represents the allocative efficiency component of the industry since changes in output price and input costs of capital and labor are considered here.The fourth item refers to changes in technical efficiency of the industry.Annual changes in technical efficiency are obtained from the stochastic production frontier using the Cobb-Douglas production function which had included the technical inefficiency variables in the model such as exchange rate, interest rate and the dummy a proxy for impact of outbreak of the financial crisis on the manufacturing sector.Changes in the random error term V refer to the external disturbance not captured in the model.The rest of the notations are as defined above.facturing sector will not be the focus.Instead, the general representation of the industry at two-digit level is presumed sufficient for the illustration needed.The data gathered for the analysis comprise the value of the total output, fixed assets excluding buildings and land, and the number of employed workers for the period 1980 to 2007. Annual growth in total output, fixed assets, labor and labor productivity, capital productivity and capital-labor ratio for the period 1981-2007 is shown in Table 1.Total output growth was negative for 1985 due to an abrupt decline, after which it showed tremendous improvement for the succeeding years from 1988 through 1995 with the average growth of 9 percent and reached a record high of 10 percent in 1996.Total output growth again experienced a sudden decline of 7 percent in 1997 the year of the monetary crisis followed by a further decline of 7.3 percent in 1998. Evidently both fixed assets and employed workers had experienced remarkable growth just as did the total output during the late 1980s and the early 1990s.Capital invested in the manufacturing industry grew around 15 percent to 30 percent annually during the period of 1988-1995, while the employed workers representing labor grew at much lower percentage for this period.By 1997 growth rates for these factors as expected were negative particularly after the outbreak of economic crisis of 1997 through 1999.Unpredictable fluctuations and volatility in their growth rates appear to follow after this period.The annual trends in labor productivity, capital productivity and capital-labor ratio for the period 1981 to 2007 are shown in Table 1.A pronounced upward trend in labor productivity was evident starting from 1996 onwards.Labor retrenchment during the economic crisis could have triggered the consequence.Capital productivity has not seemed to improve remarkably and it only took effect moderately after 1999.The growth pattern of labor and capital productivity seemed to have somewhat influenced the trend in capital-labor ratio.Beginning in 1995 the growth in capital-labor ratio had fluctuated upwards reaching its peak during 1996-1998 after which a more gentle fluctuation was observed towards 2007. Manufacturing Total Factor Productivity Growth With the information on elasticity and the growth rate of inputs, that is capital and labor, one should be able to calculate the TFP and its growth using the formula in Equation (4).Mahadevan [12] utilized production frontier approach to estimate the total factor productivity growth.Coelli et al. 
[13] utilized time as a variable to segregate the impact of technical progress on total output and attributed the residual, as unexplained growth, to TFP. Alternatively, labor can be broken up into several categories of skilled and unskilled labor. Estimates of the change in efficiency for TFP growth are obtained from the stochastic production frontier (SPF). Results of the estimated Cobb-Douglas and translog stochastic production frontiers (SPF) using 1980-2007 manufacturing industry data are presented in Table 2. The shortcoming of the translog function is mainly the loss of degrees of freedom, and it may not be appropriate for a small number of observations. The maximum likelihood estimation (MLE) technique was applied in both cases using the Frontier 4.1 software. The inefficiency variables used in the stochastic production frontiers were the exchange rate, the interest rate and a dummy representing the economic crisis period, whereby 1 = crisis year and 0 = otherwise. As is evident, statistical tests of the t-ratios show that the Cobb-Douglas SPF is the better estimator for the manufacturing sector. The SPF was adopted specifically in this investigation in order to estimate the annual technical efficiency and its annual changes for the TFP growth component. The Cobb-Douglas SPF was chosen for the analysis. The annual TFP growth components are shown in Table 3 and their trends are illustrated in Figures 1 and 2. The TFP growth components fluctuate around the zero growth line. One peculiar observation about all the components of TFP growth for manufacturing is that downturns in GDP growth occurred twice during 1981-2007. The first structural break occurred in 1985-1986 and the second, which was more serious, during 1997-1998. Figure 1 shows that the outbreaks of economic crisis which led to downsizing of industrial output resulted in an enlargement of industrial scale. This is illustrated in Figure 2, which shows sharp increases in the scale-of-operation component of TFP growth only during these two periods. During such adjustment periods, total output growth and, apparently, the management of capital and labor were reduced significantly. Although output growth towards the end of the year 2000 was positive, it remained relatively low, around 5 to 6 percent per annum. However, industrial technical efficiency had in turn improved, as shown in the individual TFP growth components, after which it declined. By the end of the year 2000, growth in technical efficiency continued to register a fall. This pattern had some impact on the TFP growth.
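As a concrete illustration of the growth-accounting calculation that Equation (8) performs with the factor shares s_K and s_L, the short sketch below computes discrete year-on-year growth rates and the resulting TFP growth. The figures and the share values are invented for illustration and are not the Malaysian manufacturing data.

# Illustration of the growth-accounting form behind Eq. (8):
# TFP growth = output growth - s_K * capital growth - s_L * labour growth,
# with discrete growth rates g_t = x_t / x_{t-1} - 1. All numbers below are
# hypothetical and are not the data analysed in this study.
def discrete_growth(series):
    """Year-on-year discrete growth rates, g_t = x_t / x_{t-1} - 1."""
    return [x1 / x0 - 1.0 for x0, x1 in zip(series, series[1:])]

def tfp_growth(output, capital, labour, s_k, s_l):
    """Solow-residual TFP growth for each year after the first."""
    g_q = discrete_growth(output)
    g_k = discrete_growth(capital)
    g_l = discrete_growth(labour)
    return [q - s_k * k - s_l * l for q, k, l in zip(g_q, g_k, g_l)]

if __name__ == "__main__":
    output  = [100.0, 109.0, 120.0, 111.0]   # hypothetical output index
    capital = [200.0, 230.0, 260.0, 255.0]   # hypothetical fixed assets
    labour  = [50.0, 52.0, 55.0, 53.0]       # hypothetical employment
    # Factor shares; under perfect competition s_K = rK/(pq) and s_L = wL/(pq).
    for year, g in zip((1, 2, 3), tfp_growth(output, capital, labour, 0.4, 0.6)):
        print(f"year {year}: TFP growth = {g:+.3%}")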
Conclusion This paper is specifically focused on the methodology of disaggregating the technical components of TFP growth for the manufacturing sector of the Malaysian industry for the period of 1980 to 2007.Since the investigation is centered on the measurement technique limited data was utilized, obtained from the Key Indicators of Developing Asia and Pacific Countries, Asian Development Bank.Methodology for measuring TFP growth is first presented in this investigation.This was followed by the discussion on theoretical development of the TFP growth with the objective of deriving and expanding this simple model to disaggregate TFP growth components.These components comprise the technical progress, industrial return to scale with respect to the use of capital and labor over time, the allocative efficiency given output price and input costs of capital and labor, and changes in technical efficiency of the industry.Finally the stochastic production frontiers (SPFs) were estimated to obtain the necessary parameters from empirical data.It is also emphasized that measuring TFP growth can only be done practically when researchers understand how growth rate is derived and calculated.The discrete growth function is utilized throughout the discussion, as its usage is handy and not easily exploded.The measurement of TFP growth is important from the economic standpoint in order to identify the unexplained contribution to the total output growth other than those explained by capital and labor.This unexplained growth is the technical progress.Nowadays as human resource development is becoming dominant in influencing output growth, intellectual capital besides that of physical capital has been recognized as an essential variable to sustain economic growth.The TFP growth is important for decision makers in identifying sources of technical growth in the private corporation, local government for national planning and for international comparison.Even so the ability to measure and interpret these sophisticated techniques will not be of much use to real world decision making when data sources are insufficient and not reflective of the reality.Thus, the first stage requires data collection and source that is reliable and up to date, while the second stage requires ability to measure and interpret the results obtained from the analyses.As we realize that TFP growth can be estimated by other techniques such as the index numbers, different methods of TFP measurement might produce different results.The estimation technique adopted in this study utilizing the stochastic frontier production with ineffiiency model provides a way of estimating TFP growth and component of technical efficiency is probably robust and most appealing currently.c (12)technical advancement over time, represented by the shift in the intercept of the production function.This shift can be represented by the time variable but the result of such output may not truly depict the technical progress experienced by an industry or state of a country analyzed.Some argued that technical progress is already imbedded in the utilization of physical capital.A technically experienced labor is capable of operating machine to produce a product with precision that saves time, minimizes cost and raises productivity.For computational purpose, further decomposition of the TFP growth Equation (11.2) can be performed by substituting the equivalent of technical progress element as in Equation (4).The final result of TFP growth equation is shown in Equation(12)
2018-12-29T15:42:58.565Z
2013-08-30T00:00:00.000
{ "year": 2013, "sha1": "861030bb772b03557528b15ad79a33b8d07a6652", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=36453", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "861030bb772b03557528b15ad79a33b8d07a6652", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Mathematics" ] }
258172715
pes2o/s2orc
v3-fos-license
Correct Use of the Transient Hot-Wire Technique for Thermal Conductivity Measurements on Fluids The paper summarizes the conditions that are necessary to secure accurate measurements of the thermal conductivity of fluids using the transient hot-wire technique. The paper draws upon the development of the method over five decades to produce a prescription for its use. The purpose is to provide guidance on the implementation of the method to those who wish to make use of it for the first time. It is shown that instruments of the transient hot-wire type can produce measurements of the thermal conductivity with the smallest uncertainty yet achieved (± 0.2%). This can be achieved either when a finite element method (FEM) is employed to solve the relevant heat transfer equations for the instrument or when an approximate analytic solution is used to describe it over a limited range of experimental times from 0.1 s to 1 s. As well as establishing the constraints for the proper operation of the instrument we consider the means that should be employed to demonstrate that the experiment operates in accordance with the theoretical model of it. If the constraints are all satisfied then an uncertainty in thermal conductivity measurements of as little as ± 0.2–0.5% can be obtained for gases and liquids over a wide range of thermodynamic state from 0.1 MPa to 700 MPa and temperatures from 70 K to 500 K with the exception of near critical conditions. It is observed that many applications of the transient hot-wire technique do not conform to the constraints set out here and therefore may be burdened with very much greater uncertainties, sometimes large enough to render the results meaningless. Introduction The transient hot-wire technique is a well-established, absolute technique, with a fully developed theoretical background [1], employed for the measurement of the thermal conductivity of fluids and solids with very low uncertainty. The evolution of the transient hot-wire technique has been described in detail elsewhere [2,3]. If applied correctly, with the exception of the critical region and the very low-pressure gas region [1], it can achieve absolute uncertainties well below 1% for gases, liquids, and solids, and below 2% for molten metals and the effective thermal conductivity of two phases systems such as nanofluids [3]. In recent years however, the 'simplification' of the technique for commercial exploitation and its application to inhomogeneous multi-phase systems has led to measurements which are often significantly less accurate than is claimed and certainly less accurate than the best that could be achieved [4][5][6]. Given the technical benefits that could accrue from enhanced heat transport in fluids as has been claimed for so-called nanofluids [4,7], it seems important to the present authors to establish for this new set of applications and a new generation of experimentalists the proper parameters and approach that should characterize all applications of the transient hot-wire technique for the measurement of the thermal conductivity in macroscopic systems. For that reason, in the following sections, a brief summary of the transient, hot-wire technique and its application will be given in a manner which emphasizes the constraints on its validity. The full theory of the practical experimental technique is governed by the second order partial differential equation of transient heat conduction in a 3D space in a homogeneous system. 
The available approximate analytic solution for the description of the experimental technique [8] is valid provided that a restrictive set of conditions is obeyed, which can be satisfied by exceedingly careful instrumental design and operation [1,8,9]. Many of the recent applications of the transient hot-wire technique [4][5][6] have ignored these fundamental points, and simplistic implementations inevitably lead to gross differences between the practical device and its theory. No amount of calibration or empirical adjustment can ever make up for this basic deficiency. The Ideal Model and its Analytic Solution The transient hot-wire technique is based on the observation of the temporal temperature rise of a thin, vertical, resistive wire immersed in the single-phase material whose thermal conductivity is to be determined. The wire is initially in thermal equilibrium with its surroundings and a step voltage is applied to it. In this way, electrical current flows through the wire and heats it up. Therefore, the wire acts as a heat source of almost constant heat flux per unit length, producing a time-dependent temperature field inside the wire and the test material. The evolution of the wire's temperature depends in part on the thermal conductivity of the material around it. The concept of the technique and the origin of the various constraints upon the design are most easily explained using an ideal model of the experiment. In the first step of the formulation of an ideal model, an infinitely long, vertical, thin source of heat is immersed in a stationary, infinite, isotropic material of temperature-independent thermal diffusivity a and thermal conductivity λ, which is initially (at t = 0) in thermodynamic equilibrium with the wire at temperature T_o. Furthermore, the heat transfer from the heat source, when a stepwise heat flow per unit length, q, is applied, is considered to be only conductive (radiation is considered negligible). Thus, the temperature field around the wire can be described by the partial differential heat conduction equation

ρ c_p ∂ΔT/∂t = λ (1/r) ∂/∂r ( r ∂ΔT/∂r )  (1)

or, in terms of the thermal diffusivity a = λ/(ρ c_p),

∂ΔT/∂t = (a/r) ∂/∂r ( r ∂ΔT/∂r )  (2)

where c_p and ρ are the specific heat and the density of the material around the wire, and r is the radial distance from the center of the wire. The thermal conductivity in Eq. 1 is only strictly defined for a homogeneous, single-phase fluid. For inhomogeneous fluids (such as so-called nanofluids) the extension of the definition is heuristic, and that apparent quantity may depend upon the distribution of the phases, the time and length scale of the measurement, as well as how the measurement is performed. The first constraint on the measurement technique, common to many methods, follows. CONSTRAINT #1 The experiment must ensure that within its duration only conduction contributes to the measured heat transfer in a homogeneous fluid. The following initial and boundary conditions are employed:

Initial condition: for t ≤ 0 and for any r, ΔT(r, t) = 0  (4)

Boundary conditions: a) for t ≥ 0 and for r → 0, lim_{r→0} ( r ∂ΔT/∂r ) = −q/(2πλ); b) for t ≥ 0 and for r → ∞, ΔT(r, t) → 0  (5)

This problem is standard and the analytical solution of the above fundamental Eq. 2 for the temperature rise ΔT(r, t) has been given by Carslaw and Jaeger [10] as

ΔT(r, t) = −(q/4πλ) Ei( −r²/(4at) )  (6)

where a is the constant thermal diffusivity of the surrounding material and Ei denotes the exponential integral. It has been shown [8] that the temperature rise at the point r_o is the same as that at the surface of a thin wire of radius r_o, provided that the wire has no heat capacity and an infinite thermal conductivity.
At the position r = r_o, for small values of (r_o²/4at) an expansion of the exponential integral is possible and we have

ΔT(r_o, t) = (q/4πλ) [ ln( 4at/(r_o² C) ) + r_o²/(4at) + … ]  (7)

where C = e^γ and γ is the Euler constant, equal to 0.5772157. The ideal model of the experiment is then finally formulated by requiring that the second and following terms on the right side of Eq. 7 can be considered negligible, and the ideal model working equation can be written as

ΔT_id(r_o, t) = (q/4πλ) ln( 4at/(r_o² C) )  (8)

Here the subscript id indicates that the working equation only refers to the ideal model of the experiment. It reveals the possibility of obtaining λ, the thermal conductivity of the test material, from the slope of the line ΔT_id versus ln t. It would also appear from this equation that the thermal diffusivity might be obtained from its intercept. However, the development of the model to account for all of the departures of a real experiment from the ideal model has been focused on the fact that the absolute value of the temperature rise is secondary to its dependence on time, so that little attention has been paid to the corrections that would affect the absolute value of the temperature rise [1,8]. For that reason it is not advised to use the method to derive the thermal diffusivity from measurements directly. In practice the transient hot-wire technique makes use of a thin metallic wire of finite length to provide the heat source when a current flows through it, and its resistance change with time is used to monitor the temperature rise within it. In order that Eq. 8 can be used for the evaluation of the thermal conductivity, a large number of constraints must be satisfied. It will be the purpose of the next section to detail them. First, though, we set out a vital part of the thinking that underlies these constraints. Evidently, the practical instrument for the measurement of the thermal conductivity of a material inevitably introduces a number of departures from the ideal model. Therefore, if Eq. 8 is to be used to interpret experimental measurements, a significant number of corrections have to be made to the acquired experimental data to adjust them to the ideal model described by Eq. 8. The design intention is that the experimental cell and operation are such that each individual departure from the ideal model is sufficiently small that each can be treated as a separate, additive correction to the measured temperature rise. In practice, the requisite smallness is obtained by a theoretical analysis of each departure and subsequent design of the instrument to ensure that no correction is ever more than 0.5% of the measured temperature rise. The reason for this is so that when we estimate a correction, say to within 5%, the residual effect upon the measured temperature rise is negligible. Here, the word negligible is intended to mean smaller than the combined random error in the measurement of the temperature rise and time. It has been verified in practice that such a constraint ensures that residual deviations between the theoretical and the observed temperature rises should be no more than 0.01%, and this in turn leads to thermal conductivity measurements with an expanded uncertainty of better than ± 0.5% [1,11,12]. With this background, the temperature rise of the ideal model in terms of the one observed experimentally is [8]

ΔT_id(r_o, t) = ΔT_exp(t) + Σ_i δT_i  (9)

where ΔT_id(r_o, t) is the ideal temperature rise at r_o at time t, introduced in Eq. 8, ΔT_exp(t) is the temperature rise of the wire measured experimentally, and Σ_i δT_i is the sum of the various corrections.
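To illustrate how Eq. 8 is used in practice, the following sketch generates a temperature rise from the full exponential-integral solution of Eq. 6 and recovers λ from the slope of ΔT versus ln t over the 0.1 s to 1 s window discussed later. The property values (a water-like liquid and a 25 μm wire) are purely illustrative and are not taken from any experiment.

# Sketch: synthesize DeltaT(r_o, t) from the exponential-integral solution (Eq. 6)
# and recover lambda from the slope of DeltaT versus ln(t), as in Eq. (8).
# All property values are illustrative assumptions.
import numpy as np
from scipy.special import exp1   # E1(x) = -Ei(-x)

q   = 1.0        # heat input per unit length, W/m (illustrative)
lam = 0.60       # thermal conductivity, W/(m K) (water-like, illustrative)
a   = 1.4e-7     # thermal diffusivity, m^2/s (illustrative)
r_o = 12.5e-6    # wire radius, m (25 micrometre diameter)

t = np.linspace(0.1, 1.0, 200)                                # 0.1 s to 1 s window
dT = q / (4.0 * np.pi * lam) * exp1(r_o**2 / (4.0 * a * t))   # Eq. (6)

# Eq. (8): dT_id = (q / 4 pi lambda) ln(4 a t / (r_o^2 C)), so the slope of
# dT versus ln(t) equals q / (4 pi lambda).
slope, intercept = np.polyfit(np.log(t), dT, 1)
lam_recovered = q / (4.0 * np.pi * slope)
print(f"recovered lambda = {lam_recovered:.4f} W/(m K)")  # agrees to within ~0.1 %

Restricting the fit to the 0.1 s to 1 s window keeps the neglected terms of Eq. 7 far below the 0.5 % threshold introduced above, which is why the recovered value is so close to the input.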
The origin of each deviation from the ideal model has been explained and an explicit theoretical evaluation of the each correction to be applied has been made elsewhere [1]. We consider here only those that can be significant. These deviations can be separated into four categories by virtue of the means necessary to deal with them [1]. We note that in terms of ISO GUM (Part 6) all of the deviations between the ideal model and reality are "well understood". Convection It is, of course, impossible to avoid convective motion of a heated fluid in the earth's gravitational field. However, its effect on the measurements by the transient hot-wire technique can be eliminated by the adoption of a vertical wire and sufficiently short measurement duration. It is both possible and necessary to confirm this has been achieved [1]. If convection is present, then the linear temperature rise predicted by Eqs. 8 and 9, will not be followed. It follows that checking the linearity of the line of ΔT id vs lnt is an absolutely essential condition for each and every experiment. The manner in which such linearity should be confirmed, is detailed later. In the case of fluids, it has been shown experimentally [13,14], that proper operation ensures no deviations from the linearity of Eq. 8 because of convection, in the time range 0.1 to 1 s, although longer times show clearly such an effect in some cases [1]. CONSTRAINT #2 to avoid convective effects times larger than 1 s should not be employed (the corrected temperature rise, ΔT id vs lnt should be exactly linear between 0.1s and 1 s). Knudsen Effect The correction for the Knudsen effect has been described explicitly by Healy et al. [8], and it affects only gases of very low density. In practice, measurements can always be conducted at higher pressures or densities so that the effect is demonstrably negligible. If required, values of the thermal conductivity of a gas at low density can readily be obtained by extrapolation of values obtained at higher densities [11]. Deviations Where the Corrections are Sufficiently Small that Their Application has an Effect that is Less than 0.01% by Design or Operation. Finite Wire Diameter The ideal model that led to Eq. 7 and then Eq. 8, assumes that the line heat source is of infinitesimal diameter whereas, in practice, the wire has a finite diameter. Hence, the use of a wire with radius r o changes the inner boundary condition of Eq. 4 to: However, the solution is again Eq. 7, for small values of (r 2 o ∕4at) and the ideal model of Eq. 8, is again recovered. Truncation Error In deriving the working equation for the ideal model of Eq. 8 from Eq. 7 we neglected the terms beyond the first in the expansion of Eq. 7. Thus, to be able to use Eq. 8 to analyze the experimental observations we must apply a correction to the experimental data to account for this 'truncation error'. The magnitude of the correction represented by the second and higher order terms can easily be examined and is evidently greater for the smaller thermal diffusivities characteristic of liquids at short times and larger wire radii. For example, for an experiment in toluene at 298 K (a ≈ 9×10 −8 m 2 /s) then, at a time of 0.1 s for a wire of 20 μm diameter, the truncated terms constitute only 0.01 % of the temperature rise, whereas for a wire of 100 μm diameter, at the same time the truncated terms amount to about 3 % of the temperature rise. 
In the first case, if it is possible to estimate, the truncation error is below 0.01% and the allowable threshold and so is acceptable, whereas in the second case it is not. Thus, the correction for the neglect of higher order term should always be made, but is typically only small enough to be acceptable when the diameter of the hot-wire is below 25 μm. Not to make such a correction, in the case of a wire diameter of 100 μm, will cause an (10) for t ≥ 0, and for r = r o , T t = − q 2 r o error of at least 3 % in the thermal conductivity reported. For this reason alone, wires of more than 25 μm diameter should never be employed. Both effects 2.2.1 and 2.2.2 lead to the conclusion that the smallest wires possible should be employed consistent with availability and mechanical endurance under practical circumstances. CONSTRAINT #3 For measurements in liquids, wires of more than 25 μm diameter, should not be employed. Radiation In the case of transparent fluids, the heat fluxes due to conduction and radiation are independent and additive. Thus, given the small temperature rises involved the corresponding correction is usually negligible [8] or easily evaluated. Conversely, in the case of non-transparent fluids, that absorb at least some of the radiant energy emitted from the wire, Menashe and Wakeham [15] proved that, in practice, if radiation is present, this becomes evident when plotting the corrected experimental ΔT id values against lnt, because the line is no longer straight and there appear characteristic curvatures. In most cases examined to date [15] no sign of the effect of radiation has been observed. This is physically because the conductive element of heat transfer is determined by the gradient of the temperature (perhaps as much as 10 5 K/m for a wire of less than 25 μm diameter), whereas the radiative component is driven by the temperature difference to first order (a few Kelvin). Therefore, by the choice of small diameters wires; it is possible to ensure and verify the effect is negligible. Constraint 3 therefore covers this effect. Viscous Heating The inevitable movement of the fluid cause by its heating in a gravitational field must always lead to the generation of heat through viscous dissipation. It has been shown that this is always rendered negligible by the use of sufficiently small temperature rises [8]. Compression Work Inevitably, when a fluid is heated in a closed vessel work is done compressing the fluid. The effect has been analytically estimated. It is largest for dilute gases but is then readily eliminated by the choice of a sufficiently large vessel containing the fluid [16]. Its elimination can be verified experimentally by varying the size of the vessel containing the fluid. CONSTRAINT #4 Choose a large enough vessel to contain the fluid. Finite And Non-zero Properties of the Wire This correction takes into account the non-zero radius of the wire r o , its finite thermal conductivity λ W , and its heat capacity c p [8,10,14] In the above equation, subscript w refers to the wire properties. Usually, the first term in the summation of Eq. 11 constitutes the major correction owing to finite properties of the wire. In practice, this correction has a larger effect at short times and for low density gases. As we have observed earlier, to preserve the integrity of the method it should never be allowed to be larger than 0.5 % of ∆ T id . Substituting some numbers into the first term of Eq. 
11, one can easily show that in a liquid a) for a wire of 10 μm radius, at 0.1 s, δT w = 0.01 K, while b) for a wire of, 50 μm radius at 0.1 s, δT w = 0.20 K. In the case of measurements in argon gas, for a wire of 10 μm radius, at 0.1 s, δT w = 0.22 K while, for a wire of 50 μm radius δT w = 5.35 K. For typical temperature rises (2 -3 K), all except the first of these values exceed the threshold of 0.5% of the temperature rise. It follows that the time range of observations of the temperature rise should be limited to a time above that for which the correction for the finite properties of the wire falls to < 0.5 % of the wire temperature rise. In practice, this can be readily satisfied for liquids with a wire diameter no greater than 25 μm. For gases, wires with diameters less than 10 μm must be used. CONSTRAINT #5 In order to allow correction for the finite properties of the wire, in liquids, wire with a diameter larger than 25 μm should not be employed, while for gases less than 10-μm-diameter wires ought to be used. Outer Boundary Correction The fluid sample must obviously be contained and there is often a desire to reduce the total volume of the sample for reasons of safety or expense and so to make the container as small as possible. The finite volume of the material, causes a deviation from the ideal model, in that there is an outer boundary at a distance r = b, where b is the radius of the vessel. This leads to the modification of the outer boundary condition as described by Eq. 5, where g ν are the roots of J 0 (g ν ) = 0, and J 0 and Y 0 are Bessel functions. Except for gases with the highest thermal diffusivity at low density, the effect can be rendered negligible by placing the cell wall more than 0.5 cm away from the wire [8]. CONSTRAINT #6 Cell wall must be more than 0.5 cm away from the wire (liquids or gases), and best practice suggest the correction should always be applied because it is simply done and it can never be wrong to do so. Sample Physical Properties that Vary with Temperature The variation of the thermal conductivity and the heat capacity of the sample with temperature, produce a correction that should be applied to the ideal solution. The analysis carried out by Healy et al. [8], considered a linear perturbation of these properties; they showed that the temperature, T r , to which the thermal conductivity deduced from the linear slope of the line ΔT id vs lnt refers is where T o is the initial equilibrium temperature of the material, and ΔT(t 1 ) and ΔT(t 2 ) are the temperature rise measured experimentally at the initial observed time t 1 and the final time t 2 , respectively. This correction to the reference temperature for the thermal conductivity is not negligible and must always be performed. CONSTRAINT #7 Thermal conductivity must always be referred to the corrected reference temperature. Wire Insulation In the case where the sample to be measured is polar or electrically conducting, it is necessary to apply an insulating coating to the bare metallic wire, to avoid a series of unwanted phenomena. The latter include current leaking through the test material, polarization on the wire's surface, and distortions to the voltage signals. Wakeham and Zalaf [17], based on the previous work of Nagasaka and Nagashima [18], and Alloush et al. 
[19] over a limited range of conditions, managed to overcome some of the aforementioned constraints to a significant extent, by establishing the use of a tantalum wire with an insulating layer made of its own pentoxide, via an in situ anodization process. Hence, provided that the wire and its coating are made of a suitable material and that the coating is sufficiently thin (in the case of the tantalum oxide a layer thickness < 200 nm) an analytic correction exists [18]. CONSTRAINT #8 In the case of polar or electrical conducting liquids, use insulated wire with a coating that renders the correction for its presence less than 0.5%. End Effects The wire employed in practice cannot be infinitely long, as assumed in the ideal model and thus there are always effects at the ends of a supported, finite wire owing to conduction in an axial direction not amenable to exact analysis. A rigorous analysis of this problem is not available, so these 'end effects' must, be eliminated experimentally. Two main approaches to achieve this elimination have been adopted. The first solution uses a single wire with potential leads of a smaller diameter attached about a central section of its length. The potential leads are used to measure the potential difference over the section of the wire between them and thereby deduce the resistance change following from the temperature change of the wire. If the wires comprising the potential leads are of sufficiently small diameter compared with the heating wire, then there could be an insignificant heat loss through them and the whole of the wire between the potential leads behaves as a finite section of an infinite wire. An evaluation of the effect of these potential taps should always be conducted to show that this is the case. In practice, this has not been performed and for hot wires with a diameter of 10 μm, the connection of wires of much less than 1 μm diameter is essentially impossible so that this method cannot be recommended. The other approach employs two wires, nominally identical except for their length. If both wires are subject to the same heating current then the end effects in each case will be the same provided that each is sufficiently long that a central section attains the temperature appropriate to an infinite wire [8,20]. Thus, if arrangements are made to measure the difference of resistance of the two wires as a function of time, it will correspond to the resistance change of a finite section of an infinite wire, from which the temperature rise can be determined. This approach is adopted in all the work that has led to thermal conductivity data with the lowest uncertainty. As far as the appropriate length of the wires is concerned, Antoniadis et al. [21] have presented a Finite Element Method simulation of the heat distribution around the end of a 25 μm-thick tantalum wire, welded onto a 1 mm-thick support, also made of tantalum, after heating it for 1 s in water. The isotherms reflecting the temperature changes inside and around the wire are shown in Fig. 1. Figure 1 shows only the first 5 mm of the wire. The FEM analysis showed that the end effects cover a distance of about 1 cm from the support. Therefore, keeping in mind that the two wires are welded at both ends and that the goal is to subtract the end effects and leave a finite section of wire long enough to be a finite part of an infinite wire (3 cm at least). 
Of course, the minimum difference of length between the two wires is determined by the sensitivity of the means of detecting its resistance change with time. It is worth noting that the use of a single 5 cm length wire without any endeffect correction, results in a 15% difference in the resistance of the wire from that of a wire with no end effects. CONSTRAINT #9 To avoid end effects, use 2 wires, a short one of length > 2 cm and a long wires of length > 5 cm. Table 1 shows a summary of the constraints discussed for proper operation of the Transient Hot-Wire technique with liquids and gases. These constraints ensure that the corrections to be applied to the temperature rise are always sufficiently small that they can be estimated with an accuracy comparable with the noise level in the measurements. This implies of course that the corrections must always be applied. Checking the Linearity of ΔΤ vs lnt At several points during the foregoing exposition, we have emphasized the need to derive the thermal conductivity of the fluid from the slope of the line between the ΔT id and lnt over a range of time determined by the operation of a variety of constraints. At the same time, we have been at pains to point out that the linearity of this plot is a means of demonstrating that the experiment operates in all respects in accord with the theory of it. For those reasons it is important to verify that the experimental plot is indeed linear for each and every measurement. It is common practice in the processes of regression fits using least squares processes to examine simply the regression coefficient R to determine whether one has a good fit to a linear function or not. It is a matter of experience that this is inadequate and inappropriate for the analysis of the THW experiments. This is because very small systematic curvature in the experimental data return a regression coefficient very close to unity. For that reason, best practice for use with such experiments is to construct a linear fit over say the first five data points at the shortest times in the interval studied and to record the slope and its standard deviation for such a fit. The fitting is then repeated after adding the next later time and the reported fitted coefficients recorded again. This is repeated until all the data points have been included. If the observations are truly linear and as more points are included the standard deviation estimated in the slope will fall and, within its estimated standard deviation, the slope of the line will remain constant. This is a very stringent test of the data but unless it is carried out small curvatures can be overlooked and the true value of the thermal conductivity depends on the particular time window chosen to evaluate the slope. It is regrettable that seldom is the detailed linearity of the plots reported and even less frequently is this method adopted. The FEM Approach A more recent, alternative approach to the approximate analytic solution described above, is to solve the full Fourier heat transfer equations in the wire and the test material, using numerical methods based upon the finite element method. In our case COMSOL Multiphysics Version 3.2, was employed but other readily available systems could be employed. It should be emphasized that this approach was In this approach Antoniadis et al. [21] employed a 2D heat conduction equation for wire, one for the fluid and a separate one for the insulation layer (in case of electrically conducting liquids). 
The numerical model consists of two subdomains: one which is a part of the circular cross-section of the hot wire and the other which is considered to be a fluid sample in which the wire is immersed. The wire is placed on the axis of the cylindrical sample so that one quadrant of the geometry need be considered which allows greater resolution in the solution for a given computational effort. In the finite element analysis, it is important that the model is able to represent accurately the local variation of all relevant quantities. The basic parameters that enable such accuracy are the spatial density of the mesh, and the size of the time steps used in the selected numerical solver. The optimized mesh employed includes 1,060 elements, with a higher density in regions where a higher temperature gradient is expected, (within and close to the metallic wire). Validation of the Transient Hot-Wire Technique As discussed above one of the necessary tests of any transient hot-wire measurement is that the observed temperature rise should conform with the theoretical description of it within the mutual error of theory and experiment. Figure 2 shows a typical temperature rise measured in water at 297.5 K and atmospheric pressure [21]. We devote some space here to a demonstration of the validity of measurements because it is exactly this that is omitted from most studies. The Analytic Solution We begin by examining the behavior of this experiment with respect to the approximate analytical solution for the temperature rise. For this experiment, a setup of two 25-μm-diameter Ta wires, differing only in length, 2 and 5 cm, anodized in situ, were employed. As was pointed out earlier we can consider only a range of time for such a comparison defined by the magnitude of the various corrections to be applied to the experimental temperature rise as a function of the time. In practice we have limited the magnitude of any correction to be no more than 0.5% of the temperature rise which means that times between 0.0025 s and 0.1 s have been disregarded (42 points) because for those very small times the heat capacity correction was larger than 0.5% of the temperature rise [14]. The outer boundary correction was always insignificant and the correction for the wire coating amounted to no more than 0.07% irrespective of time. Figure 3 shows the fractional deviations from a linear fit of it as a function of the logarithm of time. The agreement is within ± 0.05% (at the 95% confidence level), and the value of the thermal conductivity of water obtained is 605.6 mW·m −1 ·K. −1 at 297.5 K. The uncertainty of this value obtained by this technique, if a G.U.M. analysis is carried out [22], is 0.5% (at the 95% confidence level) and a full analysis is given by Charitidou et al. [9] It should be noted that the reference value for the thermal conductivity of water at 297.5 K proposed by the International Association for the Properties of Water and Steam (IAPWS) and the International Association for Transport Properties (IATP), is 605.5 ± 0.2% mW·m −1 ·K −1 [23] that is within 0.02% of the present measurement. Figure 4 shows the raw, uncorrected, experimental temperature rise in the same measurement as discussed above, without any corrections applied, and the temperature rise obtained by the FEM solution, over the entire range of measurements (0.0025 s to 0.1 s). 
Figure 4 shows the raw, uncorrected experimental temperature rise in the same measurement as discussed above, together with the temperature rise obtained by the FEM solution, over the entire range of measurements (0.0025 s to 1 s). In order to obtain the numerical solution we have employed as input values the density of the metallic wire, its heat capacity and its thermal conductivity, as well as the same quantities for the insulating coating. Finally, we need values for the radius of the wire and its coating and a density and heat capacity for the fluid. All of these quantities for the measurement in water are collected in Table 2. The thermal conductivity of the fluid is then determined as the value for which the least-squares deviation between the numerical solution and the experimental data is smallest. The residual percentage deviations between the optimum fit and the data are shown in Fig. 5 over the entire time range (0.0025 s to 1 s).
The FEM Validation
The following should be noted:
A) The agreement between the experimental temperature-rise points and those calculated by the FEM solution is excellent over the entire time range from 0.0025 s to 1 s. We note that deviations are slightly larger at very short times, because in that range the slope of the temperature rise versus time curve is considerably steeper.
B) The value of the thermal conductivity obtained by FEM is 604.0 mW·m⁻¹·K⁻¹ at 297.5 K. This value is obtained with an absolute uncertainty of 0.5%. The absolute uncertainty arises from uncertainties in the measurement of the wire diameter, the measurement of the heat flux and the FEM numerical error.
C) The aforementioned value is within 0.2% of the values obtained from the application of the linearized analytical theory solution (605.6 mW·m⁻¹·K⁻¹) and the IAPWS-IATP reference value (605.5 mW·m⁻¹·K⁻¹) [23].
Finally, it is worth observing that if some of the properties of the measuring system are not available directly, they may be derived simultaneously by fitting the FEM solution to the experimental temperature rise, owing to the large number of data points available.
Discussion
The previous analysis clearly demonstrates the following:
A) The full set of heat transfer equations governing the transient hot-wire experiment can be solved accurately by FEM, so that the temperature rise can be reproduced to within about 0.1% at worst and the thermal conductivity can be evaluated with an absolute uncertainty of about 0.5%.
B) The approximate analytic solution can be employed without introducing additional errors over the time range 0.1 s to 1 s, with an equivalent absolute uncertainty of about 0.5%. However, for this to be valid, the following conditions must be met.
a) In both cases, it is absolutely essential that some means is adopted to eliminate the effects at the ends of the finite hot wires; otherwise a three-dimensional heat transfer problem has to be solved.
b) In order to apply the analytical solution, and thus determine the thermal conductivity from the slope of a line relating the ideal temperature rise to the logarithm of time, each wire and any coating must be very thin (less than 25 μm in diameter), so that the corrections to the line-source model can be evaluated and applied without introducing significant error.
c) Low values of the temperature rise, less than 4 K, must be employed to perturb the system as little as possible without losing resolution.
d) Insulated wires should be employed for measurements in polar or electrically conducting liquids to avoid distortion of the electrical signals in the measurement system.
These necessary conditions must be complemented by a demonstration that the expected congruence between experiment and theory is delivered.
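Returning to the FEM route described above, the step that selects the thermal conductivity is a one-dimensional least-squares fit over candidate values of λ. The sketch below is schematic only: the forward model stands in for a call to the finite-element solver (COMSOL in this work) and is replaced here by a simple placeholder with the same interface; the heat input and data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fem_temperature_rise(t, lam):
    """Placeholder for the FEM forward model: in practice this would return the
    finite-element solution of the coupled wire/coating/fluid heat equations
    for a trial fluid conductivity lam; here a simple stand-in is used."""
    q = 15.0                                          # W/m, assumed
    return q / (4 * np.pi * lam) * np.log(t) + 12.0   # stand-in behaviour only

def fit_conductivity(t, dT_measured, bounds=(0.1, 2.0)):
    """Return the conductivity minimizing the sum of squared deviations between
    the forward model and the measured temperature rise."""
    cost = lambda lam: np.sum((fem_temperature_rise(t, lam) - dT_measured) ** 2)
    return minimize_scalar(cost, bounds=bounds, method="bounded").x

# Illustrative use with synthetic "measurements" over the full time range.
rng = np.random.default_rng(2)
t = np.linspace(0.0025, 1.0, 500)
dT_meas = fem_temperature_rise(t, 0.604) + rng.normal(0, 2e-3, t.size)
print(f"fitted lambda = {fit_conductivity(t, dT_meas):.4f} W/(m.K)")
```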
Designs of cells conforming to the design constraints can be found in a number of references [1,2]. Such cells have been applied to gases up to 750 K and pressures up to 70 MPa, and mostly organic liquids up to 400 K and 500 MPa. It has proved more difficult to sustain an uncertainty of a few parts in a thousand over a wide temperature range in gases. It is possible this is a result of the difficulty of maintaining a very thin platinum wire stationary and vertical when subjected to transient heating [24,25]. Here we note that some commercial implementations, as well as some research instruments, of what is called the transient hot-wire technique depart considerably from what we have set out here and conform to very few of the essential constraints we have outlined. Some indeed make use of a rather complicated composite probe of many materials in a tube of mm diameter and yet apply Eq. 8 for analysis. Others employ wires of 100 μm diameter, or more, or single wires with no end-effect correction. Although there may be reasons why these sensors could be applicable, they do not derive from the theory set out here or any published material. Equally, the standard methodology set out by ASTM D7896 departs significantly in both theory and practice from what is described here. The consequences of these departures from the rigor advocated here cannot be determined.
Conclusions
In this paper, it has been confirmed that instruments operating according to the transient hot-wire technique can indeed produce excellent measurements when FEM is employed for the evaluation of the thermal conductivity of the fluid using the exact geometry of the hot wire. It has furthermore been shown that an approximate analytic solution can be employed with equal success, over the time range 0.1 s to 1 s, with an equivalent absolute uncertainty of about 0.5%, provided that:
a) two wires are employed, so that end effects are canceled;
b) each wire is very thin (less than 25 μm in diameter), so that the line-source model and the corrections mentioned before are valid, and its resistance change is accurately recorded;
c) low values of the temperature rise, less than 4 K, are employed in order to perturb the system as little as possible without losing resolution; and
d) insulated wires are employed for measurements in electrically conducting or polar liquids to avoid spurious electrical effects.
Author Contributions: All authors contributed equally.
Funding: Open access funding provided by HEAL-Link Greece. No funding was received for this work.
Competing interests: The authors declare no competing interests of a financial or personal nature.
Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Design and modeling of improved controller using DC source fed permanent magnet synchronous motor drive with enhanced DC-DC converter by reducing vibrations for industrial applications The proposed research involves Design and Modeling of Improved Controller (PID), Using DC Source Fed Permanent Magnet Synchronous Motor Drive with Enhanced DC-DC Converter by reducing vibration for Industrial Applications. It consists of a DC-DC converter, voltage-link module, fractional second order system PID controller. In this proposed improved controller, enhanced converter, DC-link switching is achieved by a bridged ripple voltage which results in the improved quality of output power, and it also reduces vibration, and noise in the drive without affecting mechanical properties. Reduction of switches makes the system more cost effective. A simulation of fractional second-order function, Buck-Boost converter is designed, and its concert is analyzed for various functioning factor conditions. The Three-Port Converter and B4-Inverter fed Permanent Magnet Synchronous Motor Drive focused on the industrial applications. An Ampere-hour unit powered converter is used at inductor terminal, which has been realized by minimizing the distortion and decreasing the Ampere-hour unit rating a new topology is proposed. As Comparing with previous system, the proposed system results in the reduced voltage fluctuation at the switches. Hence the proposed system proves power switches are compact, with reduced inrush current and improved output transfer function. Based on this a fulfilling closed loop validation has been performed for both simulation and experimental arrangement. Introduction The intention of the proposed research is to evaluate the performance to test the robustness of the Programmable Integral (PI) system.The different order integrator plus time delay system is taken for the performance analysis from which the system performance can be studied for the effect of the time delay and time constant over the system.Modeling of this system will be useful in order to perform the modeling of the power semiconductor devices for an industrial process using its transfer function.So, the proposed research output for a controller is applied to a low power semiconducting device to get the improved controller output.While coming to power converter, the hostility due to fossil fuel depletion [1] affect outputs.Clean green energy voltage is required.Generally, several single-input/single-output due to complexity their respective cost is high [2][3][4].The proposed research focuses on single-input/multi-output (SIMO) converter for increasing efficiency, gain.SIMO proposed in the research is a three-port converter (TPC) which is interfaced with renewable sources, storage elements and loads and simultaneously operates in two stage conversion which efficiency [4][5][6][7][8][9] is reduced, so the single-stage conversion is implemented.To reduce the power losses further, the TPC architecture is introduced by [10,11].The TPC advantage has single-stage conversion connecting any two of the three ports, greater system efficiency and smaller number of components, quick response and included voltage-current management among the ports with a middle power [12][13][14][15][16][17][18][19][20][21].The main objectives of the present research consist in analyzing the controller performance to check the robustness of used PID system.PID loop is applied for a power electronic device with integration of improved converter with less number of 
power-controlled devices which is regulated with a high quality, in all power flow situations, using various conditions of load for industrial applications.The different order integrator plus time delay system is taken for the performance analysis from which the system performance can be studied for the effect of time delay and time constant over the system.A control system is used in all processes in order to obtain the necessary output by monitoring the system output.A system should possess the following qualities: 1. High accuracy; 2. Sensitive to the input; 3. Bandwidth high and noise free; 4. Transient speed is low with less no. of oscillations; 5.The system remains stable.To get the accurate result, the control system should be closed, and its feedback is compared to that of the open loop system.Since, in a closed loop, the output is sent as a feedback to produce the desired result with high accuracy.Controllers are used for this purpose by tuning the system to produce the desired output from the input as well as to monitor the output.In general, there are three types of controllers: The Proportional controller (P), Integrator controller (I) and Derivative Controller (D) to tune the system output.In the P controller, the output is proportional to the difference between the error and set point signal.It makes the system stable.The P controller suffers from the offset problem and also increases the overshoot of the system.In the integral controller, the output will be the integral value of error with respect to time.This controller has the main advantage of reset back the system to its set point even when the system is disturbed.Hence, it is also called as reset controller.The main disadvantage of the reset controller consists in that it makes the system unstable.The derivative controller cannot be used alone because it produces saturation effects and also amplifies the noise signal produced in the system.Henceforth, the D controller is used as a combination of P, I or both.Tuning of the controller is an important step in designing the system, because it helps the system to reach its desired result.Zeigler and Nichols proposed a design for auto-tuning based on the critical gain and critical frequency of the plant.The problem with this method is that it makes the closed loop system to produce poor robustness.The robustness of the Ziegler's technique is improved by applying the tradeoff between the robustness and performance of the system by Astrom and Haggland technique. Materials and methods In this research, the tuning process of fractional order PI controller using the SBO algorithm is analyzed.To design a fractional order PI controller, it three conditions should be satisfied: 1. there should be a larger gain crossover frequency in order to have lower settling time in the closed loop; 2. The phase margin should be between 45 degrees and 60 degrees; 3. The system should possess iso-damping property. 
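The first two conditions can be checked directly from the frequency response of the compensated loop. The sketch below evaluates the gain crossover frequency and phase margin for an assumed first-order-plus-dead-time plant under PI control; the plant parameters and controller gains are illustrative assumptions, not values from this work.

```python
import numpy as np

def open_loop(w, K=1.0, tau=5.0, theta=1.0, kp=0.8, ki=0.2):
    """Open-loop response L(jw) of an assumed FOPDT plant
    G(s) = K*exp(-theta*s)/(tau*s + 1) with a PI controller C(s) = kp + ki/s."""
    s = 1j * w
    return (kp + ki / s) * K * np.exp(-theta * s) / (tau * s + 1)

w = np.logspace(-3, 2, 20000)
L = open_loop(w)

# Gain crossover: frequency where |L(jw)| passes through 1 (a single crossing
# is assumed here); the phase margin is 180 deg plus the phase at that point.
idx = np.argmin(np.abs(np.abs(L) - 1.0))
w_gc = w[idx]
pm = 180.0 + np.degrees(np.angle(L[idx]))
print(f"gain crossover ~ {w_gc:.3f} rad/s, phase margin ~ {pm:.1f} deg")
```

The computed margin can then be compared with the 45 to 60 degree target and the controller gains adjusted until the design conditions are satisfied.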
Based on these three conditions, the equations for designing the controller coefficients ( , and ), given in [22], are listed below. The transfer functions for analysis are taken from [23]. These functions are unstable in nature; hence they are converted using Skogestad's half-rule method and the partial expansion method. The design condition values are obtained from the converted stable transfer functions, and those values are applied to the equations and solved by the proposed Satin Bowerbird Algorithm to obtain the controller coefficients. Then, the obtained coefficients are converted to transfer functions using the FOMCON toolbox, the overall system is simulated, and the performance is evaluated using ISE, IAE and ITAE. In the further sections, the proposed research is explained in detail.
Processing of transfer functions
The transfer functions for which the fractional order PI controller is designed are given in Table 1. In this table, five transfer functions are used; they contain a combined integrating process and second-order delayed functions (slightly and higher), a pure integrating process, and a first-order integrating process with delay.
The above transfer functions are converted using Skogestad's half-rule method and the partial expansion method. The steps of the partial expansion method are:
1) Expand the transfer function.
2) Apply the mathematical partial expansion technique to determine , and based on its order.
3) The constant part is taken 'as is' because it will not decay with respect to time.
4) Then the dominant pole is selected because it will not decay soon with respect to time.
5) From this, the new stable first-order approximated equation is obtained.
6) The transformed functions are given in Table 2.
Design conditions for fractional order PI
The cut-off frequency is obtained from the magnitude plot of the transfer functions where the magnitude of the response is less than or equal to 1 and the phase margin takes the specified value. A sample calculation of the cut-off frequency and the phase margin is shown in Fig. 1. The iso-damping property of the system is obtained by differentiating the plant with respect to the gain crossover frequency. By obtaining all these three values and substituting them in Eqs. 1-3, a set of unsolved non-linear equations is obtained.
Satin bowerbird optimization
The unsolved nonlinear equations are solved by using the SBO algorithm to obtain the fractional order controller coefficients. These coefficients are used in the further process to get the tuned response of the system. Eq. (1) is used as an objective function to determine the value of the coefficients. The steps of the Satin Bowerbird algorithm of [24] are taken as the basic idea for this paper:
Random bower generation. Here, the population of bowerbirds is initialized based on the population size required by the process. The values of the bowerbirds are based on the lower bound and upper bound. In this paper, the population size is 50, the lower bound of the controller coefficients is [0 0 0], and the upper bound is [100 100 2]. The value should lie between 0 and 2; hence, 2 is selected as the upper bound. The position of the bowerbirds is initialized by the Roulette wheel selection method.
Probability of each bower: The chances of getting the best bower are estimated in this step through the cost function value of each bower based on the following Eq. (4): where is the cost function of the problem, here is the Eq.
( 4), is the total number of population. Elitism Elitism is the process of finding the best solution at each stage of the optimization process.At each iteration, the position of bowerbird will vary hence the fitness value for the bower will also vary.Hence, in this process, a variable called elite which stores the best bower values in it is set.This value produces the highest fitness function compared to the other bower.The elite will vary for each iteration, and, at the final iteration, the best bower position is obtained from this elite function. Position updating The bower position is updated for each iteration based on the three parameters: 1. Attraction power of the bower; 2. Elite Positions; 3. Roulette wheel procedure for calculating .The attraction power of the bower denotes how likely that the particular bower can be elected as elite function.It is represented by : where the greatest step size has the value of 0.94 and denotes the chance of best solution of bower obtained by Eq. ( 4).The position of the bower is calculated by: SBO mutation The mutation of SBO refers that the stronger bower may suppress the weaker bower by stealing its values or even the weaker bower may not be considered.Due to this also, the bower position will vary.This is calculated by using the normal distribution (N) of the average value of old position and variance of the space width between the upper and lower limit. Motivated output result At each iteration, the population is calculated before and after the changes.The best value is obtained by sorting the population based on a cost function.The whole process will be terminated only when it satisfies the cost function condition till that the steps will be repeated. Calculate cost function value for all bowers; Sort and obtain the best compared value with elite; Output= elite; End while; Return Output.From the above algorithm, the FOPI coefficients are calculated for each process which is saved for the simulation purpose.Table 3 shows FO controller coefficients. Performance of controllers is carried out with a new algorithm where the comparisons of methods are presented, and its damping oscillations are reduced and applied to proposed low power converters.The TPC, which used a PMSM motor drive with a single stage conversion system, is illustrated in Fig. 6.The proposed system consists of ampere-hour unit powered MOBB converter, B4-Inverter fed PMSM motor and ampere-hour unit powered bidirectional converter which function as a backup supply for the load operating under demand condition.The proposed converter consists two diodes ( , ), swithes ( , ), inductor , Capacitor .The proposed converter topology of operation is shown in Fig. 1. Proposed converter: modes of operation Mode 0: When the switch ( ) is in "ON" condition, and the switch ( ) is in "OFF" condition, current in inductance ( ) drops (discharging).As a result, the intermediary capacitor ( ) gets (charging) energy from an input inductor ( ).Consequently, voltage across the intermediary capacitor boosts as given in Fig. 2(a). Mode 1: When the switch ( ) is in "ON" condition, current in supply voltage drops (discharging).Consequently, the energy is transmitted to the inductance ( ) (charging).Therefore, voltage across the intermediary capacitor ( ) boosts as given in Fig. 
2(a).Simultaneously, current in the capacitor ( ) and supply current ( ) drop (discharging).As a result, the energy is transmitted to the capacitor ( ) (charging) with the help of diode .Consequently, voltage across the intermediary capacitor boosts as given in Fig. 2(b). Mode 2: When switches and are in "OFF" condition, current in inductance drops (discharging).As a result, the intermediary capacitor ( ) gets (charging) energy from input inductor ( ) with the help the diode Fig. 2(c). Closed loop system of proposed converter depends on PID controller.The PID controller examines to the desired set point through calculation for regulated voltage and output obtained in the desired level.Finally, it controls the process.PID controller algorithm can be run by the obtained Eq. ( 7): where means: () is set voltage -actual voltage. The desired output voltage is obtained by the control strategy of MOBB converter as shown in the Fig. 3, The value is 0.2.The switching frequency value is = 5 kHz; = 10 kHz, where , = 0.2 , = 4 represents the controller gain and generates saw-tooth signal ( ), that generates the gate signals ( , ), to MOBB converter switches ( , ).The output voltage equation is given by: Conduction losses of switch: , : Switch , losses during conduction , is expressed in Eq. ( 10): At instance of switching current ( ) is derived from Eq. ( 12): Oscillation of power devices current (∆ , ) is given as per Eq. ( 13): The power controlled devices losses are obtained from the Eq. ( 14) depending on the values of Fig. 4: Power controlled devices on and off losses are derived from the Eq. ( 16): Voltage-current losses of converter The total power losses of the converter using PID controller are given by Eq. ( 21): where is output power of MOBB converter (39 W) and refers power losses.A proposed TPC converter for the implementation of PMSM motor drive MOBB converter using four switch inverters in a single standalone are shown in Fig. 6.Power device present in converter at the proposed work is to extract the radiation from solar panel using the P&O MPPT algorithm.On the contrary, the regulation of the total output voltage to the required value is made by the power sharing switch ( ) present in the MOBB converter.In addition, when the generated energy is sufficient to drive the PMSM motor, the excess energy is utilized for charging the battery.When the power switch ( ) is turned on, the inductor L1 stores the energy, and thereafter there is an energy flow from the PV module to the battery.If the energy that is generated at the PV module is not adequate to drive the PMSM motor, then the power system acts as a boost converter, making the energy transfer from the battery to the PMSM motor.When the switch (s 1 ) is turned on, the inductor ( ) stores the energy from the battery, and, if the switch s 2 is turned off, the energy that is stored in the inductor is transferred to the PMSM motor.The command signals of the switches ( , ) are complementary.Proposed MOBB converter fed pmsm drive system using four switch inverter in a single standalone system operates in different power flow modes such as Battery Energizing unit (BEU), Solar Module (SM) and Battery Deenergizing unit (BDU). Battery energizing unit (BEU) Operation mode A. Meanwhile at Operation mode A controlled device gets triggered controlled devices ( , and ) are not triggered.Input inductor ( ) destores electrons, the current ( ) reduces, in between capacitor ( ) gets boosted. Operation mode B. 
Meanwhile at Operation mode B controlled device is triggered "ON", then controlled devices ( , and ) not triggered.Meanwhile ( ) stores energy from the input source, and electrons across inductor ( ) increases, then in between capacitor ( ) and solar unit act as series connected sources and start destores electrons to output ( ) through the diode ( ). Operation mode C. Meanwhile at Operation mode C controlled device is triggered, controlled devices ( , and ) not triggered.Ampere-hour module gets energy from source ( ) current through the diode ( ), and ampere hour module highly boosted ( ). Operation mode D. Meanwhile at Operation mode D controlled devices and are triggered, the controlled devices ( and ) are not triggered.The in between capacitor stores energy at inductor ( ) current, then ampere hour module gets stored the value from inductor ( ) current via the diode ( ).Operation mode G. Meanwhile at Operation mode G controlled device and and are not triggered, the electrons in ( ) destores the energy to the output capacitor through the diode. Battery deenergizing unit (BDU) Operation mode H. Meanwhile at Operation mode H Controlled device is triggered, so , and not triggered.Input ( ) destores the energy, meanwhile electrons ( ) decreases, in between capacitor ( ) energizes. Operation mode I. Meanwhile at Operation mode I Controlled device is triggered, controlled devices , and are not triggered.stores energy from the input source, and electrons across inductor ( ) increases, then in between capacitor ( ) and solar unit act as series connected sources and start deliver current electrons to output ( ) through the diode ( ). Operation mode J. Meanwhile at Operation mode J controlled devices , are triggered then controlled devices and are not triggered.Input ( ) stores from source, and electrons from ( ) energizes, ampere hour module destores to input ( ) through diode ( ).Then inbetween capacitor and input source destores the energy to the output capacitor .Fig. 8. Proposed MOBB converter fed PMSM drive system using four switch inverter in a single standalone system at BDU Results and discussions The following graph shows the outputs for the different-order integrator process with dead time transfer functions for load disturbance which is the regulatory response.The regulatory system is the system in which the response will get settled by itself in the set point without any external influence when the disturbance is given.The Error performance values for actual coefficients are shown in Table 4.The robustness of controller is analyzed by calculating the IAE, ISE and ITAE.These values are calculated for both positive and negative mismatch of the controller for both the existing and proposed method.The positive mismatch denotes 10 % increase of controller coefficient values, and the negative mismatch denotes decrease of 10 % from its values.It is shown in Tables 5, 6. Disturbance rejection property The system is disturbed by a disturbance = 0.1 at = 200.Then the system is analyzed by the rise time, settling time and overshoot values.For a good controller, the settling time and overshoot value should be less.The performance of the proposed method is compared with the Nelder-Mead and shown in Table 7. 
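The robustness measures mentioned above (IAE, ISE and ITAE) are integral error indices computed from the closed-loop error signal after the disturbance is applied. A minimal sketch of their evaluation is given below; the error trace is a synthetic, illustrative assumption rather than a simulation result from this work.

```python
import numpy as np

def error_indices(t, e):
    """Integral error measures used to compare controller tunings:
    IAE = integral |e| dt, ISE = integral e^2 dt, ITAE = integral t*|e| dt."""
    return (np.trapz(np.abs(e), t),
            np.trapz(e ** 2, t),
            np.trapz(t * np.abs(e), t))

# Illustrative error trace: decaying oscillation after a step disturbance at t = 200.
t = np.linspace(0.0, 400.0, 4001)
e = np.where(t >= 200.0,
             0.1 * np.exp(-(t - 200.0) / 20.0) * np.cos(0.5 * (t - 200.0)),
             0.0)
iae, ise, itae = error_indices(t, e)
print(f"IAE = {iae:.4f}, ISE = {ise:.5f}, ITAE = {itae:.2f}")
```

Lower values of all three indices, for both positive and negative coefficient mismatch, indicate the more robust tuning.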
The Table 8 shows the simulation specifications.The characteristics of a 80 W solar module are simulated using the MATLAB tool based on the equivalent circuit model.The simulation of the controller is listed separately in order to denote the looping and the unit time constant of the controller. Simulation output voltage control Asymmetrical for proposed converter To verify the asymmetrical validation of proposed converter, source input fed is ( = 18 V) as determined in Fig. 13.Similarly, resistance (1 = 2 = 28.8Ω) are used load, asymmetrical condition.The output voltages of the proposed converter are desired to be regulated on ( = 30 V and = 18 V) from 0 to 0.4 Sec.The total powers and are drawn under asymmetrical are stated in the Fig. 13 The following Table 9 shows the values for the rise time, peak time, settling time and peak overshoot of the transfer function 1 for the load disturbance.These methods do not show the good response for few methods since it has little high time delay.10 shows the hardware specification.Step down transformer 230V/12V 3 Step down transformer 230V/5V 4 MOSFET SFH6325 5 PMSM motor 220 V, 100 mA, 60 rpm Experimental results of output voltage The below Fig. 16 is output voltage waveform.These PWM pulses are generated by the B4 inverter which these signals are fed through the PMSM motor drive.11. Prototype experimental setup Experimental setup of enhanced three-port converter fed PMSM motor are shown in Fig. 21.When the PMSM is driven by different switching frequency inverters, the acoustic noise spectrum has highly concentrated harmonic energy near the switching frequency and its multiples.The proposed research involves triangular, saw-tooth periodic switching frequency modulation patterns to transfer discrete power spectrum to much wider frequency ranges, so as to limit the acoustic noise at a low cost.To reduce the vibration under different switching frequencies on the PMSM motor, the motor shall be optimized.Then, by using controller techniques, the optimization is made on the motor.It is shown in Figs.22 and 23.To reduce the vibration under different switching pulses on the PMSM motor, the motor shall be optimized.Then, by using controller techniques, the optimization is made on the motor.It is shown in Figs.24 and 25.The various comparison parameters on vibrations are shown in Table 12. Conclusions An analysis is made for four different integrator time delay transfer functions for different time delays and shows the disturbance rejection and system performance for the different PID controllers for load disturbance, it shows good performance for the transfer functions with the delay less than 2. Compared to other methods, the proposed method results are good for the pure integrator process transfer function with the time delay 5. 
Thus, the pure integrator process with a small time delay shows the better response. Dead-time compensators can be used for processes with a large time delay so that a better response can be obtained. A simulation of the fractional second-order function and a Buck-Boost converter is designed, and its performance is analyzed for various operating conditions. The proposed converter works in the asymmetrical condition while preserving the battery life. Similarly, the two-output capability is appropriate for the B4-inverter, which is likely to reduce the cost of the proposed system. The design, along with a reduced number of components, minimizes the current conduction losses and assists in increasing the service life of the switches. The controller plays a significant part in controlling the system output based on the output requirement. In such a situation the controller has to be well tuned, and its operation should be simple. Hence, this paper proposed an SBO-tuned FOPI controller for different processes. The FOPI is chosen over the traditional PI because the fractional-order form requires less computation as the number of controllers increases. For the tuning purpose, the SBO algorithm is employed because the position initialization of the bowers is based on the probability of the best bower and the mutation step, whereas in Nelder-Mead tuning the positions are randomly initialized, which affects the tuning process of the controller. Based on the performance evaluation, the proposed method also outperforms Nelder-Mead and produced the best tuning of the FOPI controller for the different transfer functions. The SBO tuning produces less overshoot as compared to the existing method. The proposed technique produces fewer variations under positive and negative mismatch. The values indicated by "-" denote results that produce larger values not suitable for the tuning. The tuning of parameters based on the SBO technique is suitable for input functions without delay. In future, the SBO and Nelder-Mead tuning have to be improved to produce the best results for delayed units, not only based on the stable region of the output as in the other methods.
The main steps of the SBO algorithm are:
• Random bower generation,
• The probability of each bower,
• Elitism,
• Position updating,
• Mutation,
• Best result obtained from all bower positions.
2.4.5. Solar module (SM)
Operation mode E. Meanwhile, at Operation mode E the controlled device gets triggered and the controlled devices ( , and ) are not triggered. The input inductor ( ) destores electrons, the current ( ) reduces, and the in-between capacitor ( ) gets boosted.
Operation mode F. Meanwhile, at Operation mode F the controlled device is triggered, and the input inductance (L1) gets charged from the input DC supply. Thus, the current in the input inductance increases. Simultaneously, the current in the intermediate capacitor ( ) and the input supply act as series sources, and the energy of the intermediate capacitor is transferred to the output capacitor ( ).
Operation mode G. Meanwhile, at Operation mode G the controlled devices ( , and ) are not triggered, and the electrons in ( ) destore the energy to the output capacitor through the diode.
Figure captions:
Fig. 1. Cut-off frequency and phase margin of transfer function.
Fig. 6. Proposed MOBB converter fed PMSM drive system using a four-switch inverter in a single standalone system.
Fig. 7. Proposed MOBB converter fed PMSM drive system using a four-switch inverter in a single standalone system in BEU mode.
Fig. 13, Fig. 14. Output voltage variations.
3.3. Experimental analysis of transfer function using TPC and B4-inverter fed PMSM motor driver
The experimental analysis of the transfer function using the TPC and B4-Inverter fed PMSM motor driver is conducted to examine the enhanced TPC and B4-Inverter fed PMSM motor drive. The hardware of the transfer function output of the TPC and B4-Inverter fed PMSM motor drive is presented in Fig. 15. The hardware consists of a 16-bit digital signal Peripheral Interface Controller (DSPIC30F4011) for TPC control, a 32-bit Mixed-Signal Microcontroller (MSP432P401R) for B4-Inverter control, and a gate driver circuit for the TPC and B4-Inverter MOSFETs. Table 10 shows the hardware specification.
3.5. Experimental results of PWM pulses
Fig. 17 shows the output waveform of the PWM pulses across the switches. These PWM pulses are generated by the B4 inverter, and the signals are fed to the PMSM motor drive.
3.6. Experimental results of DC-DC converter output voltage
Fig. 18 shows that the proposed MOBB converter generates a voltage of about 24 V during solar insolation of 800 W/m2. Fig. 19 shows that the proposed MOBB converter generates a voltage of about 20 V during solar insolation of 600 W/m2. Fig. 20 shows that the proposed MOBB converter generates a voltage of about 12 V during solar insolation of 200 W/m2. Based on the new technique proposed for the PI controller on the power devices, the performance comparison of the conventional and proposed MOBB converters with various objects is shown in Table 11.
Figure and table captions:
Fig. 15. Experimental illustration for the TPC and B4-Inverter fed PMSM motor drive; a), b) performance of the MOBB converter under symmetrical voltage and battery insolation condition; TPC parameters under different insolation conditions: a) 200 W/m2, b) 1000 W/m2.
Fig. 16. Output voltage C01 at 1000 W/m2: a) experimental, b) simulation.
Fig. 17. Output voltage waveform of PWM pulses.
Fig. 18, Fig. 19. Output voltage: a) experimental, b) simulation, c) chart at 800 W/m2; output voltage waveform of the DC-DC converter: a), b) experimental, c) simulation, d) chart at 600 W/m2.
Fig. 20. a) Experimental output voltage waveform of the DC-DC converter, b) chart at 200 W/m2.
Table 4. Error performance values for actual coefficients.
Table 5. Positive mismatch of controller values.
Table 6. Negative mismatch of controller values.
Table 7. Comparison of disturbance rejection properties.
Table 8. Simulation specifications for the proposed converter using the PI controller system.
Table 11. Performance comparison of the conventional and proposed MOBB converter with various objects.
Table 12. Comparison parameters on vibrations on the PMSM.
Students’ Attitude towards Virtual Learning during Covid-19 in Chitwan The COVID-19 pandemic is rapidly accelerating the learning process. As a result, there has been a shift from face- to-face to virtual learning. The major purpose of this research was to identify the interest and attitude of students towards virtual learning during covid-19 pandemic. The study was to determine whether students are interested in virtual learning or not. This survey is based on primary data collected from students who are currently pursuing their degrees. Google form structured questionnaire was distributed via Messenger app to the students of Chitwan district colleges where virtual learning is going on using random sampling method. The result revealed that students are interested in virtual learning. The majority of students found it as an important means of making knowledge widerand bringing social changes despite hindrances (p= 0.001). Virtual learning is a powerful tool for teaching replacing face to face learning in any level as a quick solution to the crisis. However, successful implementation of virtual learning into curriculum requires a well thought-out strategy and equal access of all. Introduction A virtual learning environment is a web-based platform in educational technology that focus on the digital courses of study within educational institutions presenting resources, activities, and interactions within a course structure for the stages of assessment(Wikipedia). Personal computers and the Internet have revolutionized entire sectors of society. Facebook, twitter, YouTube, Skype, Whatsapp, WebEx, zoom, and other online communications media help billions of people around the world to share ideas in a matter of seconds in a cost effective way. Even then there are some problems because some people are unaware of how computers internet technology are transforming the way the students learn. However this emerging trend of virtual learning has the potential to improve students' achievement, educational access and so on. In the context of present ongoing Covid-19 pandemic, there is a forced immersion of learners into virtual learning replacing face -to-face learning method as a quick solution to the crisis (Abbasi & et.al., 2020). "The COVID-19 pandemic has created the largest disruption of education systems in history, affecting nearly 1.6 billion learners in more than 190 countries and all continents. Closures of schools and other learning spaces have impacted 94 per cent of the world's student population, up to 99 per cent in low and lower-middle income countries" (Natoins, 2020). The pandemic adds a further complexity in the field of higher education globally especially in the developing countries where there are unresolved challenges like growth without quality, inequality in access and achievement and the progressive loss of public financing. Particularly, those students on the verge of finishing high school and aspiring to begin tertiary education and undergraduates will have the immediate effect in accordance with their profiles, irrespective of their socioeconomic background and geographic situation (UNESCO, 2020). Amid the COVID-19 pandemic and the need for social distancing, this virtual learning platform has significantly reshaped and innovated how we teach and engage with our students. In addition, it has allowed us to continue to foster a sense of community that we hope can attenuate students' burnout and promote wellness in a time when isolation has become a part of everyday life. 
Program specific virtual learning platforms have the potential to play an importantand useful role in the teaching learning process (Almarzooq, Lopes, & Kochar, 2020). Though online learning is effective in digitally advanced societies it cannot produce desired results in under developed countries because a vast majority of students are unable to access the internet due to technical as well as financial problem (Adnan & Anwar, 2020). Statement of the Problem Even though a great number of studies and research projects in virtual learning have been conducted, the research on students' interest and attitude towards virtual learning during covid-19 pandemic especially in Chitwan has not been done yet. In this sense, there is a need for further research about perception of students of Chitwan including various colleges in order to identify the solution of the research question like: Are students interested in virtual learning? Does virtual learning improve their skills? Does virtual learning make their lockdown time a useful one? Significance of the Study This study will help to find out the students' attitude towards virtual learning during covid-19 pandemic. This study was done for the students who are studying in various colleges and universities and even schools of Chitwan with a view to gain additional information regarding the contribution of virtual learning during pandemic. Digital technology can be a good learning paradigm in educational institutions to enhance the students' knowledge and skills through digital technologies. Objectives of the Study This research aims to find out the global trend of virtual learning resources among Chitwan students. The following are the specific objectives: • To identify the interest and attitude of students towards using virtual learning resources. • To suggest prospects in using virtual learning resources by students. Review of the Literature Virtual learning has been the area of interest for many researchers and educators in order to enhance and improve student learning outcome while combating the reduction in resources particularly in higher education. The physical "brick and mortar" classroom is starting to lose its monopoly at the place of learning (Nguyan, 2015). Traditionally, learning environments are defined in terms of time, place and space. Conversely, virtual learning environment provides high level of student control, support of participant contact and interaction throughout the learning process. Moreover, it can potentially eliminate geographical barriers. It has got significant impact on the learning industry on a whole (Piccoli, Ahmad, & Ivs, 2001). An important extension of the system to add a module for knowledge level estimation of the students by using software agents that manage to provide a certain transparency of the physical allocation of the hosts in system (Kimovski, Trajkovic, & Davcev, 2001). Muhammad (2020) concluded 71.4% students reported that learning in the conventional classroom was more motivating than virtual learning. Even then the majority of the students can manage their time effectively online and can easily complete assignments in time. Abbasi (2020) found that mobile has become popular device among students for virtual learning as compared to laptops and tablets. Students have found it less appealing due to its limitation with respect to practical aspects of learning. 
Despite gaining immense popularity today digital technology has still not been embraced by the Medical and Dental students in teaching learning process. As per the World Economic Forum the Covid-19 pandemic also has changed the way how several people receive and impart education. Teachers have become habitual to traditional methods of teaching in the form of face-to-face lectures and they therefore try to avoid the change. But amidst this crisis, as there is no other alternative left over than adapting to the dynamic situation. It has become beneficial for the education sector and brought a lot of surprising innovations (Dhawan, 2020). "Virtual learning is an excellent option in education, particularly when there are hindrances to traditional learning situations" (Dhull & Sakshi, 2017). "The survey indicated that between 60 to 80 per cent of the syllabus has been covered by using online teaching methods according to a majority of students. Only one-fifth of the students stated that they could cover between 40 to 60 per cent of their syllabus during lockdown with the assistance of online teaching modes. Next, it came to light that the learners have encountered several problems in learning with online modes. The biggest among them is 'Poor internet connectivity problem' followed closely by 'the problem in choosing best source amongst many'. Non availability and non-affordability for e-learning resources, lack of technical skills and electricity issues are the other problems(Amita, 2020). Thus, above paragraphs indicate virtual learning is a temporary aid during difficult situation that we are facing. The Covid-19 pandemic has posed significant concern among students. The pandemic related challenges add additional layer of complexity because many students are from remote area with minimal access to electronic devices and reliable internet connectivity or stable electricity supply. The Covid-19 is rapidly accelerating the remote workplace especially there is the shift from classroom to virtual learning. The review of literature has shown various studies have been conducted to identify and assess perceptions and attitude of the e-learners towards e-learning. The research gap is found when it comes to study attitude of students towards virtual learning during Covid-19 pandemic situation in Chitwan. Research Methods This study is based on cross sectional survey method. The method of sampling technique was random covering the students of Chitwan district +2, bachelor and master degree colleges where students are currently attending online classes. The sample size was 224. Google form questionnaire incorporating 3 likert scale questions and demographic items was used to gather data about attitude of students regarding virtual learning during Covid-19 pandemic taking one month time period. The data were analyzed in frequency table, cross table and graphic representation. Figure no.1: Tools Used for Virtual Learning The above Figure no.1 reflects that 48%, 41.3%, 5.2%, 3.2%, 0.9%, 0.8% and 0.6% respondents used zoom, MS Teams, Google classroom, What Sapp, YouTube, mail, and others for learningpurpose during pandemic. Majority of the students used zoom app because it is popular, convenient and easy especially during this Covid-19 pandemic (Gallagher, 2020). The Table 3 reflects out of 224 respondents 93(41.5%), 70(31.3%) and 61(27.2%) are the responses of somehow agree, agree, and disagree respectively. The response of somehow is found to be highest. 
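The association results reported in the next table compare response categories across demographic groups; they appear to come from tests of association on cross-tabulated counts. A minimal sketch is given below, assuming a chi-square test of independence on a made-up contingency table; the counts are illustrative only, not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: gender (male, female); columns: satisfaction (agree, somehow agree, disagree).
# These counts are invented for illustration only.
observed = np.array([[38, 45, 30],
                     [32, 48, 31]])

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.3f}, dof = {dof}, p = {p_value:.3f}")
# A p-value above 0.05 (as reported for gender in the study) indicates no
# significant association between the grouping variable and the responses.
```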
TheTable no.4 shows P-value in terms of gender is 0.227. It means gender doesn't determine the level of satisfaction in the use of virtual learning. In the same way, at the 10% level of significance P-value in terms of age is 0.075. But there is no association for the 5% level of significance.Here, it shows the higher level of students have more preference towards virtual learning. in somehow agree and 42(24.7%) have rated in disagree. 0.001 p-value presents the positive relation between virtual learning and skill improvement. Likewise, p-value 0.015 shows it is significant that virtual learning is useful in lockdown because of covid-19. Similarly, 0.003 p-value indicates those students who didn't face any problem during class have positive attitude towards virtual learning. In the same way, 0.001 p-value reflects virtual learning is a tool make knowledge wider. Even 0.010 p-value shows there positive relation between virtual learning and importance of web based teaching for students. Moreover, virtual learning is powerful means to bring social changes that is what p-value 0.001 reflects. There is no statistical difference between face to face and online learning in terms of opinion on the ability to increase knowledge(p = 0.46)" (Baczek & et.al., 2020). F. Student's Attitude towards Virtual Learning Here, this study shows p-value 0.001 in terms of virtual learning in making knowledge wider is quite significant comparing to the above result of Polish research. The cause of variation in result may be that unlike in other discipline, so many practical classes including clinical skills are essential along with direct contact with patients. Content Analysis: It was found conventional learning more preferable. Similarly, 57.7% students indicates the main reason for limited access is signal availability and strength problem and 30.5% reflects its high cost. Though the majority of students' preference is for face to face learning, 68.9% use Wi-Fi of their own, 17.3% use mobile data and 9.8% use neighbor's Wi-Fi during covid-19 pandemic. Due to the minimal access to electronic devices, unreliable internet connectivity and unstable power supply students have additional complexity in virtual learning. Conclusion and Recommendation After meticulous analysis of the data it was concluded that majority of students have positive attitude towards virtual learning during Covid-19 pandemic. Almost all of the respondents used zoom and Microsoft Teams for learning as they are popular, convenient and easy. Though students faced problem of poor signal availability, strength, high cost, they were interested in virtual learning and also found useful in improving skills for bringing social changes during Covid-19(p-value 0.0010). In the same way, almost all students were qualified to use electronic devices like laptop computer and were comfortable in communicating electronically. In the light of findings, online education is not a substitute but an appendage to classroom teaching. It is temporary aid during difficult situation that all are facing. It posed the great concerns among students and added additional layer of complexity in rural areas where thousands of students are lagging behind with minimal access to electronic devices, reliable internet connectivity or stable electric supply. In this regard, successful implementation of virtual learning into curriculum needs proper policy. 
The recommendation of the study is to further explore the factors influencing students' attitudes towards virtual learning and to identify the perceptions of faculty members regarding their experiences with virtual learning during the Covid-19 pandemic.
Innovation in Uruguay: New Perspectives on Public Policies and Education --This paper aims to present a general perspective of the actions developed by the Uruguayan Broad Front, which has been defined as an example of reformed left in Latin America. The Broad Front has governed Uruguay since 2005 and has implemented several alternative and innovative policies. Methodologically, the paper is empirically supported by data and information derived of interviews conducted in Montevideo, as well of the analysis of official documents. Among the results found out, it can be highlighting the following sample: 1) innovative programmes focused on most disadvantaged young children and their families like Uruguay Crece Contigo [Uruguay grows with you] have been successfully put in place and scaled up; 2) the institution of the Salary Councils (tripartite councils made up of government representatives, businesses and workers) stimulated the formalization of work and the rise in salaries, as well as the strengthening of unions; 3) the creation of the Ministry of Social Development (MIDES) as a new centralized social authority, which shares jurisdiction with the Social Security Bank (BPS) and the Ministry of Health; 4) in the face of the failure of the so-called drug war, the regulation of the cannabis market was approved during the term of President José Mujica. Conclusively, it’s affirmed, for instance, that the Broad Front perspectives and the policies it has implemented in Uruguay have instituted a new conception of the left in Latin America. Keywords---Uruguay, public policies, education, Broad Front, reformed left _________________________________________________________________________________________________ INTRODUCTION In 1940 Walter Benjamin wrote his Theses on the Concept of History provoked by the idea that society lived at that time in a moment of danger in countries like Germany and Italy (Benjamin, 2006).Today, it is probable that we also are living in a moment of danger, but it is a moment that is not produced only in the sphere of the State.In Benjamin's time, the danger was the rise of fascism as a political regime commanded by explicitly assumed dictators like Hitler and Mussolini.In our time, the danger is the rise of fascism as a societal regime in several countries. In such way, we can remember the case of Galdino Jesus, an Indian Pataxó from Northeast Brazil.Some years ago, he went to Brasilia to take part in march of the landless.The night was warm, and he decided to sleep on a bench at the bus stop.In the early morning hours, he was killed by three middle-class youths.As the youngsters confessed later on to the police, they killed the Indian for the fun of it. Societal fascism can be defined as a set of social processes by which large bodies of populations are irreversibly kept out or thrown out of any kind of social contract (Santos, 2001).So, they are rejected, excluded and thrown into a kind of Hobbesian state of nature, either because they have never been part of any social contract and probably never will; or because they have been excluded or thrown out of whatever social contract they had been part of before. 
In Europe, United States and Latin America, far-right populist political positions have defended ideas identified with the societal fascism.For example, racist ideas.On the other hand, neoliberal agendas around the world generate social exclusion.Far-right populists and neo-liberals are two sides of the same coin.The coin of conservative modernization.Both attack workers' right and institutions as the public school. In addition to the far-right populists and neo-liberals, the alliance that supports the conservative modernization is still constituted by two other groups: neo-conservatives and a particular fraction of the upwardly mobile middle class.However, neoliberalism is the most powerful ideological element within conservative modernization.As a rule, according to neoliberals, what is private is necessarily good and what is public is necessarily bad.Therefore, public institutions such as schools are presented by the neoliberalism as 'black holes' into which money is poured -and then seemingly disappears -, but which do not provide anywhere near adequate results (Apple, 2001). The conservative modernization, under leadership of the far-right populists and neo-liberals, has won the battle over common sense.They have skillfully stitched together different perspectives and commitments and have organized them under its own headship in matters dealing with social welfare, culture, economy and education.Probably, Brazilian government is today the most evident example of the alliance between far-right populists and neo-liberals. By conservative modernization, the economic rationality is more powerful than any other.In the educational field, such a position conceives the students only as human capital.It means that the world is extremely competitive economically, and the students as future workers must receive the requisite skills and dispositions to compete efficiently and effectively.Other crucial idea here is consumer (Apple, 2001).This is, the world is a big supermarket, and "consumer choice" is the guarantor of democracy.Consequently, education is seen as merely one more product like rice, cake and cell phone.In this way, in last instance, democracy is turned into consumption practices.The ideological effects of such point of view are momentous.Rather than democracy being a political concept, it is transformed into an entirely economic concept (ibidem). However, in this work, I'll develop an approach that is in opposite side to the conservative modernization.I'll focus the case of the Uruguayan reformed left governments and their perspectives about public policies, which can be defined as counterhegemonic perspectives, as we will see it.Methodologically, the work is the result of a postdoctoral research carried out in Montevideo, Uruguay.Empirically, it is supported by data and information derived from interviews and documentary analysis. THE URUGUAYAN REFORMED LEFT The left political is a concept that encompasses a wide variety of political currents, from communism to socialism, and from social democracy to progressive liberalism.In Uruguay, such reality is very perceptible.The Broad Front (Frente Amplio, in Spanish), as a left coalition party, brings together communists, socialists, popular-nationalists, social democrats and left Christians.The Broad Front was founded in 1971, and it was declared illegal during the 1973 military coup d'état.It arisen again in 1985 when democracy was restored in Uruguay. 
Except during the period of military rule, Uruguay had been governed from independence, in 1828, until 2004 by two parties, the Colorado Party and the National Party. Historically, both are heterogeneous political forces, with the Colorado Party affiliated with urban groups and the National Party representing rural areas (Hudson & Meditz, 1992). However, in 2004, the Broad Front won the presidential and congressional elections. Tabaré Vázquez, a doctor specialized in oncology, was elected President of the Republic and ended 170 years of political domination by the traditional parties.

President Vázquez was inaugurated to a five-year term in March 2005, and Uruguay was just beginning to recover from the 2001-03 crisis, which was mainly caused by the spillover effects of the economic problems of Argentina and Brazil. He adopted a political perspective that has been defined as reformed left (Swagerman, 2014). Such a perspective seems to conceive the left as a "current of thought, politics, and policy that stresses social improvements over macroeconomic orthodoxy, egalitarian distribution of wealth over its creation, sovereignty over international cooperation, democracy (at least when in opposition, if not necessarily once in power) over governmental effectiveness" (Castañeda, 2006, p. 32).

Therefore, President Vázquez opened the door to a new, bold political direction. He worked at stabilizing the economy and signed a three-year $1.1 billion stand-by arrangement with the IMF that committed Uruguay to a substantial primary fiscal surplus, low inflation and a reduction in foreign debt. On the other hand, this agreement, combined with a mix of pro-investment policies and social programs, contributed to revitalizing the Uruguayan economy in a short period of time. "His $240 million National Plan to Address Social Emergency contributed to reduce poverty. He established wage councils made up of representatives from unions, business and government to negotiate wages for 100,000 firms and 600,000 workers. Hundreds of jobs were created under the Work for Uruguay Program, pushing unemployment down from 12.3 per cent to 7.3 per cent, its lowest level in decades" (Yoldi, 2010, p. 6). In addition, he reduced the value-added tax on basic food items and created a personal income tax that exempts the poorest 60 per cent. President Vázquez also made efforts to decentralize government and encourage greater popular participation in politics.

The Vázquez Administration's labor reforms increased considerably the strength of unions, playing a substantial role in the percentage of unionized workers more than doubling between 2005 and 2007, to approximately 24% of the labor force. President Vázquez and the Broad Front also overhauled the tax system to make it more progressive, reducing the value-added tax and replacing the tax on wages with a personal income tax that exempted the poorest 60%. 
The Vázquez Administration was widely approved by the population, and this fact contributed to a second victory of the Broad Front in the 2009 elections.But the candidate had his own merits.It was José Mujica, and his suffering at the hands of the military gave him great credibility among voters, since he is a man who paid a high price for his ideas.Mujica was imprisoned for 14 years as a result of his activities as one of the leaders of the Tupamaro National Liberation Movement, a leftist urban guerilla group that operated in Uruguay during the 1960s and 1970s.Following the return to democracy, Mujica helped to create the Popular Participation Movement (MPP), which is the largest group within the Broad Front coalition.He was elected to Uruguay's lower house in 1994 and to the Senate in 1999, before serving as minister of livestock, agriculture, and fisheries during the Vázquez Administration.José Mujica was inaugurated to a fiveyear term in 2010. Although the former guerrilla is politically different from his predecessor, he is also a consensus-builder, such as Vázquez.Moreover, Mujica has made it clear that there is no contradiction between embracing revolutionary ideals and seeking conciliation, as well as assuming more moderate positions.He and Vázquez are the most representative faces of the reformed Uruguayan left. President Mujica became known for his modest lifestyle, as well as for his reflections on the human being and contemporary world.Hence, he also became known as the philosopher president.During his administration, he donated 90 per cent of his salary to charities for the poor and entrepreneurial start-ups, preferring to live on the farm owned by his wife just outside Montevideo rather than the president's official residence, and eschewing state limousines for his battered Volkswagen Beetle.He has not taken to the trappings of power. He continued the actions of his predecessor and developed bold policies that, for example, reduced the number of Uruguayans living below the poverty line and were responsible for a new approach to drugs, which regulated the cannabis market.So, Mujica concluded his term as a President popular, with a 65 percent approval rating.As a leader of the Uruguayan reformed left and as consequence of the public polices of his administration, he was described as the mentor of a calm revolution (Rabuffetti, 2014). In Uruguay, consecutive reelection is not allowed.Thus, in the 2014 elections, the Broad Front presented as candidate the former President Tabaré Vázquez.The success of the Broad Front's public policies, Mujica's calm revolution, and Vazquez's popularity ensured his victory in the second round.He defeated Luis Lacalle Pou, a leader of the conservative National Party.Therefore, the Uruguayan reformed left won its third consecutive term. 
President Vázquez was inaugurated to a five-year term in 2015.Overall, he has maintained the Broad Front's public policy guidelines.One of the facts that have marked his second administration is the victory in the so-called Philip Morris case.This is, in 2010, the multinational tobacco company Philip Morris filed a complaint against Uruguay, claiming that the Uruguayan smoking legislation devalued its cigarette and trademarks and investments in that country, and so demanded for compensation under the bilateral investment treaty between Switzerland and Uruguay.A fight between the world's largest cigarette manufacturer and a small South American country; a 'true David and Goliath story'.Such treaty provides that disputes are settled by binding arbitration before the International Centre for Settlement of Investment Disputes (ICSID). On 8 July 2016, the ICSID ruled in favor of Uruguay, and Philip Morris not only lost the case but was ordered by the court to pay South American country about $7 million in legal fees.It has been seen as a Vázquez's political victory and as an important moment for public health in the fight against the deregulation of the activities of the tobacco industry. However, on the other hand, in last times, the conservative opposite has intensified its criticisms and actions against the Vázquez administration and the Broad Front.Sometimes it proceeds like some Brazilian groups that deposed the former president Dilma Rousseff and made feasible the arrival of Jair Bolsonoro to the Presidency of the Republic.This is the case, for instance, of some segments of the so-call One Movement Only Uruguay.Such movement presents itself as an organization that aims to raise awareness to avoid 'complex situations' that were experienced in recent history and affirms that it feels the need to warn before it is to late (Un Solo Uruguay, 2018).This is the same far right-wing and populist rhetoric used in Brazil. President Tabaré Vázquez's term is nearing its end, and in 2019 there will be a new election.Such rhetoric is a demonstration of the kind of dispute that is likely to occur during the election campaign. PUBLIC POLICIES AND EDUCATION UNDER THE URUGUAYAN REFORMED LEFT ADMINISTRATIONS The Uruguayan reformed left administrations are a case of social democracy in contrast to other contemporary left experiences (Lanzaro, 2011).Such administrations are formed by an institutional left and conducted by left parties with a solid trajectory, within the framework of a plural and competitive party system.However, to a large degree Broad Front continues to hold ideologically left positions.In its bids for the Presidential elections, it launches a two-pronged strategy (ibidem): it incorporates the programmatic changes demanded by competition, but it keeps up an unyielding opposition towards the parties of the establishment and the neoliberal prospects.So, the ideological moderation, typical of any catchall party, is in this case limited, passing through an incessant intraparty dispute, with sectors that exhibit different degrees of assimilation or resistance to the liberal perspectives. The Broad Front administrations have developed a significant number of innovative public policies, for instance, in the area of human rights and with respect to economic and social context, cultivating a moderate reformism, but, at the same time, it is an audacious reformism that composed a counter-hegemonic social democratic agenda. 
The public policies carried out by the Broad Front administrations are an outcome of the political framework that it has been emphasized above.I will present from now on a sample of them, constituted of the educational policy, social policy, labor policy and human rights policy.I will put more emphasis on education, because there are dimensions about it that must be taken into account more largely.So, I am going to start by such topic. First of all, it is important to underline how the 'philosopher president' conceives the education, because José Mujica has presented reflections in this sense, and thus he has exercised some influence in the development of educational actions.At least during the period in which he was President 2 .Some of his ideas about education are as follows (Mujica, 2009;Martin McQuillan, 2015): 1.As we are going, knowledge repositories are not going to be inside our heads anymore, but outside of us, available to be searched for in the internet.It will be there all the information, all the data, everything that is already known.In short, it will be there all answers.What will not be are all the questions.The ability to ask questions is what will be important.The ability to ask deep questions that trigger new research and learning efforts.2. "The intelligence that contributes most to a country is the distributed intelligence.It is the one that is not only kept in the laboratories or in the universities, but the one that walks through the streets.The intelligence that is used to plant, to program a computer or to cook, it is the same intelligence.Some have climbed more steps than others, but it's the same ladder.The steps below are the same for both nuclear physics and field management.It's necessary a curious and nonconformist look, as well as an active position interested in new knowledge" (ibidem, p. 2). 3. "I have a dream in which parents show the grass to their children and ask them: do you know what is this?And they answer: it is an energy processing plant of the sun and minerals of the earth.Or yet that parents show to their children the starry sky, and make them think about the celestial bodies, the speed of light and the transmission of waves" (Mujica, 2009, p. 3). 4. Education has the transformative power to contribute to greater justice and less inequality in society.However, educational policy is not neutral, and so it needs to be guided towards a purpose by a political agenda. 5. Education, and especially the university, can be an instrument of division that serves the interests of elites, preserving or increasing inequality.Therefore, education cannot be separated from democracy, as a tool that must assist the most people to improve their condition in life.6. Universities must create another culture; a universal humanistic culture, above nations, that must create common values for the human beings.7. The universities are the place where we charge our batteries to create new ideas in society.8.In the modern world the abandonment of philosophy is the main cause for the loss of values beyond market economics.9.The modern world gives us so much to see, but we do not really see it without philosophy.10.Philosophy is not just something to be learned in a university, but it must be a permanent questioning.11.We can get to the point where studying, and learning is no longer an effort, it is pure pleasure.12. Humankind possesses the knowledge and technological innovation to solve its problems, but what is missing is a political mentality to do it. 
Overall, Broad Front administrations have been in agreement with the Mujica's approach to education.So, as the Economic Commission for Latin America and the Caribbean (ECLAC, CEPAL in Spanish) recognizes, education has also been one of the priorities of its governments (CEPAL, 2009).This is reflected in the significant increase that has occurred in public spending, which went up as a proportion of overall social spending and as a percentage of GDP.Public spending on education is projected to reach 6% of GDP in 2020. There have been important measures aimed at the development of science and technology, as the creation of the National Agency of Research and the consolidation of the National System of Researchers.Moreover, other important measures are, for instance, as follows: the program of "community teachers", supporting primary school students in vulnerable areas; the Program for the Universalization of Secondary Education, which seeks to overcome alarming education failures at the secondary level; and the Ceibal Plan (named after the Uruguay´s national flower), inspired by the international One Laptop per Child initiative, which seeks to generalize an early introduction to computing."Uruguay has been a pioneer in carrying out this democratizing initiative on a national scale, which was imposed widely and with great connectivity" (Lanzaro, 2011, p. 33). In a 2015 report, UNICEF highlighted the progress of the Uruguayan educational policies.It affirmed that in the last times an important progress was made regarding development and strengthening of public policies devoted to early childhood and to education in Uruguay.According to UNICEF (2015), innovative programmes focused on most disadvantaged young children and their families like Uruguay Crece Contigo have been successfully put in place and scaled up.At the same time, UNICEF emphasized (ibidem, p. 7) that the "school programmes aimed at improving learning achievements like 'Maestros Comunitarios' and the programme 'Aprender' were carried out in many schools throughout the country reaching an increasing number of children.At secondary schools, programmes developed to reduce dropout rates have provided a very valuable new approach to this issue." Public policies for labor relations maybe is one of the spheres where the historical identity of the Broad Front has been reaffirmed more significantly, as I have previously emphasized.The Uruguayan left has historically maintained a close relationship with labor unions."This kinship was a decisive factor in the 1960s political events, which led firstly to the unification of the national labor federation (1964) and afterward to the foundation of the Frente Amplio [Broad Front] (1971).In a typical social democratic path, a fundamental connection was forged between the unions and the left party, though each part also managed to maintain significant and shifting margins of autonomy" (Lanzaro, 2011, p. 34).Such relationship has strongly reflected in the Broad Front administrations.For instance, in the first term of Tabaré Vázquez, almost thirty members of the initial team were of union extraction. 
Therefore, the policy adopted by the Vázquez administration with respect to labor relations had the clear stamp of the left and reinforced the relationship between the Broad Front and the unions.Perhaps, the most noteworthy measure in this area was the reinstatement of the Salary Councils 3 .According to what I have already pointed out in this paper, these are tripartite councils made up of government representatives, businesses and workers, which institutionalize collective negotiations by branch of activity, in order to determine salaries and regulate labor relations. The institution of the Salary Councils stimulated the formalization of work.They include, in addition to the traditional workforce, public employees, rural laborers and domestic workers.The good results of the economy and the actions taken by the government have increased the value of the salaries.The Broad Front administrations sought to reverse the fall in real private salaries, which amounted to a reduction of 25% between 1998 and 2004.However, under the governments of the Uruguayan reformed left, salaries have been recovering year after year.For example, by the end of the period 2005-2010, they reached almost five points above the 1998 level, as it can seen in the below figure.In the context of social policy, the governments of the Uruguayan reformed left have developed several actions that reinforce the presence of the political dimension in the approach to social welfare, as well as reinforce the intervention of the State in this context.Some of such actions are as follows: 1. Development of a strategy that combines universal benefits and policies targeted at the most vulnerable sectors (children, young people and female heads of household).This included the launch of conditional transfer programs (CTPs).2. The creation of the Ministry of Social Development (MIDES) as a new centralized social authority, which shares jurisdiction with the Social Security Bank (BPS) and the Ministry of Health.3. The Equity Plan, a permanent program of social protection, which prioritizes young people and their parents, but it also covers other vulnerable groups, such as the elderly.It centers on cash transfers, making new contributions to old-age pensions and especially to family allowances.4. The "Work for Uruguay" program, which offers temporary employment and training courses, attempting to construct routes out of poverty. 5. Institutionalization of the social policy programs such as State programs, allowing social provisions to reach beneficiaries on the basis of rights and via bureaucratic channels rather than through clientelistic linkages. The priority given to social policy is reflected in the increase in social spending.According to the Economic Commission for Latin America and the Caribbean, between 2004 and 2008, overall public spending increased each year by 30% in absolute terms, and public social spending per capita, in turn, went up, both in absolute and relative terms, having an accumulated increase of 41% in real terms during this period, putting public social spending above the average percentage of GDP spent in Latin America (CEPAL, 2009). 
In the area of human rights, the policies of the Broad Front administrations have achieved a great projection.Such policies have positioned Uruguay on the leading edge of lesbian, gay, bisexual, and transgender (LGBT) rights in Latin America by allowing LGBT individuals to serve openly in the military, legalizing adoption by same-sex couples, allowing individuals to change official documents to reflect their gender identities, and legalizing same-sex marriage.Moreover, the Uruguayan reformed left, has approved the decriminalization of abortion and the regulation of the Cannabis market, from production to consumption. The regulation of the cannabis market is one of the boldest policies of the Broad Front administrations.It was a decision of President José Mujica taking into account two aims: on the one hand, to reduce the potential risks and harmful effects of smoking marijuana for recreational purposes, and on the other hand to take the cannabis market out of the hands of criminal networks and to separate the licit cannabis market from the illicit market of more harmful substances.The Uruguayan regulation instituted three ways of access to cannabis: homegrow, commercial purchase and cannabis clubs.The Law 19.172 established, for instance, the following rules for regulation: 1. Cultivation of hemp for industrial purposes (containing less than 1 per cent THC) falls under the responsibility of the Ministry of Livestock, Agriculture and Fisheries.2. Cultivation of psychoactive cannabis (containing more than 1 per cent THC) for medical purposes, scientific research or "for other purposes" requires prior authorization from the Institute for the Regulation and Control of Cannabis (IRCCA).3. Cultivation of cannabis for personal consumption or shared use at home is permitted up to six plants with a maximum harvest of 480 grams per year.4. Membership clubs with a minimum of 15 and a maximum of 45 members, operating under control of the IRCCA, are allowed to cultivate up to 99 cannabis plants with an annual harvest proportional to the number of members and conforming to the established quantity for non-medical use. Asian Journal of Applied Sciences (ISSN: 2321 -0893) Volume 07 -Issue 01, February 2019 5. IRCCA licenses pharmacies to sell psychoactive cannabis for therapeutic purposes on the basis of medical prescription, and for non-medical use up to a maximum of 40 grams per registered adult per month.6.Any plantation operating without prior authorization shall be destroyed upon the order of a judge. CONCLUSION In conclusion, I return to the initial point.The Broad Front perspectives and the policies it has implemented in Uruguay have instituted a new conception of the left in Latin America.It's a reformed left.Its public policies focus on institutional social democratic capacity-building with counter-hegemonic approaches and initiatives.In this way, both regulation of the cannabis market and the valorization of the State's role in the economic activity and social policies are two central evidences. 
The political success of the Broad Front can possibly be explained, according to what Lanzaro (2011) states, as a result of three factors: i) its development as a catch-all and electoral party, maintaining nevertheless a relative robust organization as well as its kinship with trade unions and social movements; ii) its structure as a coalition-party, unifying all left groups and having at the same time a wide electoral dragnet; and iii) its two-pronged strategy, combining opposition against neoliberal reforms and privatizations, in defense of the statist tradition, with trends towards ideological moderation. The large ideological spectrum of the Broad Front (including socialists, communists, popular-nationalists, ex Tupamaro guerrillas converted to electoral politics, Christian left, and even sectors split from the traditional parties) constitutes a structure that casts a wide electoral dragnet, making it a strong and competitive political force.On the other hand, the Uruguayan reformed left administration is a case of majority presidentialism, which includes strong presidential leadership and operates at the same time as a sort of cabinet government due to the FA's nature as a coalition-party. In short, the transformations of the Broad Front governments involve, for example, advances in human rights and education, tax reform in favor of a progressive income, reinstatement of the salary councils, social policies that target cash transfers, universal family allowances and improvements in health, as well as labor policies that favor the working class. 2019 will be a decisive year for the Broad Front.There will be a new presidential election in October and the coalition-party will try to win another term.It will face a large conservative and right-wing alliance that intends to repeal policies implemented by the Vázquez and Mujica administrations.But anyway, the tree of the reformed left has already flourished in Latin America.
2019-04-12T13:41:40.762Z
2019-02-22T00:00:00.000
{ "year": 2019, "sha1": "ac59f0bd41eeabac5631ff3aa40a633a8e986586", "oa_license": "CCBYNC", "oa_url": "https://www.ajouronline.com/index.php/AJAS/article/download/5714/2990", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "9b99953024185f78c35d54ec98607c1f47e979cc", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [ "Political Science" ] }
59336984
pes2o/s2orc
v3-fos-license
Food web topology and nested keystone species complexes Important species may be in critically central network positions in ecological interaction networks. Beyond quantifying which one is the most central species in a food web, a multi-node approach can identify the key sets of the most central n species as well. However, for sets of different size n , these structural keystone species complexes may differ in their composition. If larger sets contain smaller sets, higher nestedness may be a proxy for predictive ecology and efficient management of ecosystems. On the contrary, lower nestedness makes the identification of keystones more complicated. Our question here is how the topology of a network can influence nestedness as an architectural constraint. Here, we study the role of keystone species complexes in 27 real food webs and quantify their nestedness. After quantifying their topology properties, we determine their keystones species complexes, calculate their nestedness and statistically analyze the relationship between topological indices and nestedness. A better understanding of the cores of ecosystems is crucial for efficient conservation efforts and to know which networks will have more nested keystone species complexes would be a great help for prioritizing species that could preserve the ecosystem’s structural integrity. Introduction Understanding and predicting the robustness and vulnerability of complex ecological networks is a topic of increasing relevance.There is a general agreement that nodes in certain critical network positions may have disproportionately large effects on network functioning.The loss of these key nodes may easily generate cascading effects in the network, so their management is important.These cascading interactions are hard to predict, since secondary effects depend on the particular architecture of the network.Thus, the question of how network topology influences the systemic importance of critical nodes emerges.Focusing research on these key nodes can be one way on how to tame and handle complexity [1] and assess the relative importance of species in ecological communities [2][3][4]. Various network centrality measures can quantify and identify important network positions [5,6], and structural analyses [7][8][9] are increasingly supported by dynamical studies [10,11].The latter suggest that key positions may not be identified only by local indices (e.g., node degree).Instead, network measures considering the indirect neighbourhood (e.g., betweenness centrality) of nodes are needed.A number of experimental [12] and modelling [13] works support the importance of indirect effects in biological systems.There is growing interest in nonlocal, mesoscale network indices [5]. 
Apart from expanding the neighbourhood of focal nodes (increasing the distance for network effects), it has also been suggested that the number of focal nodes may be expanded from 1 to n. The centrality of node sets has been discussed [14, 15] and applied in other fields of science (e.g., landscape ecology [16, 17]). This approach suggests that the positional importance of network nodes may not be characterized independently, one by one, but rather simultaneously. Support for the relevance of multispecies vulnerability analyses comes from both empirical (e.g., keystone species complexes [18]) and modelling (multispecies fisheries [19]) directions. Recent attempts have been made to model and determine the identity of keystone species complexes in real ecosystems by network analysis [20-22].

Although the predominant view on network robustness is focused on local and single-node analyses (i.e., degree distribution [8, 23, 24]), here we take a nonlocal, multinode approach to the problem. In this paper, (1) we quantify the macroscopic (network-level) topological properties of 27 real food webs, (2) we calculate the centrality of their node sets, (3) we quantify the nestedness of the highest centrality sets, and (4) we study the correlation between nestedness and topological network properties. We argue that large nestedness makes the network more predictable and manageable [25], so our results may have implications for the efficiency of conservation efforts.

Materials and Methods

2.1. Food Webs. We used 27 food webs freely available from the NCEAS database (http://www.nceas.ucsb.edu/interactionweb). These describe various, mostly terrestrial ecosystems. For the complete species lists and more biological information, see the original source. Before the analyses, we deleted isolated nodes and small components from the networks and focused only on the giant component (this typically means the deletion of only 0-5% of the original nodes). Furthermore, nodes were recoded, so numbering starts with zero.

2.2. Network Analysis. We calculated nine global (macroscopic) topological properties for each network. The number of nodes (N) and the number of interactions (L) are trivial properties of every network. Their combination provides the connectance (C) (or density) of the network:

$$C = \frac{2L}{N(N-1)},$$

where undirected interactions are considered with no self-loops. Based on individual node degree values, we can compute a macroscopic network measure, the average degree (avD), calculated for all nodes in the network. The clustering coefficient (CC_i) of node i equals the density of the subnetwork composed of the neighbours of node i. This is the probability that its two neighbours j and k will be directly linked to each other. It can be defined as

$$CC_i = \frac{2E_{G_i}}{D_i(D_i-1)},$$

where G_i is the subgraph composed of the nodes that are directly linked to node i, E_{G_i} is the number of edges in this subgraph, and D_i is the degree of node i. The whole network can be characterized by the average clustering coefficient calculated for all nodes (avCC), and this can also be weighted by the degree value of particular nodes (weighted clustering coefficient: wCC). 
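To make these macroscopic indices concrete, the following minimal sketch computes them for an arbitrary undirected graph with networkx. The function name, the degree-weighted reading of wCC, and the toy edge list are illustrative assumptions of this sketch, not the tools used in the original analysis.

```python
import networkx as nx

def topological_properties(G):
    """Connectance, average degree and clustering indices of an undirected graph G."""
    N = G.number_of_nodes()
    L = G.number_of_edges()
    C = 2 * L / (N * (N - 1))                 # connectance (density), no self-loops
    avD = 2 * L / N                           # average degree
    avCC = nx.average_clustering(G)           # unweighted mean of the CC_i values
    # Degree-weighted mean of CC_i: one possible reading of "wCC" (an assumption here)
    cc = nx.clustering(G)
    deg = dict(G.degree())
    wCC = sum(cc[v] * deg[v] for v in G) / sum(deg.values())
    return {"N": N, "L": L, "C": C, "avD": avD, "avCC": avCC, "wCC": wCC}

# Toy example with an invented edge list, for illustration only
G = nx.Graph([(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 0)])
print(topological_properties(G))
```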
The weighted clustering coefficient gives larger emphasis to clusters around more connected nodes. The distance between two nodes i and j in a network (d_ij) is the minimal number of links connecting them (i.e., the length of the shortest path between i and j). The whole network can be characterized by the average of shortest path lengths (avSPL) and their maximum value (diameter, d). When a network is composed of more than one component, some distance values will be infinite (for nodes m and n belonging to different components). This makes it impossible to calculate distance-based network metrics. In these cases, the reciprocal distance between nodes i and j can be given as

$$d_{ij}^{r} = \frac{1}{d_{ij}},$$

and this measure can be used also when a network consists of more than one component (since the reciprocal of infinity equals, by definition, zero). The distance-weighted fragmentation (DF) of the network can be calculated as

$$DF = 1 - \frac{2\sum_{i<j} d_{ij}^{r}}{N(N-1)},$$

i.e., one minus the average reciprocal distance taken over all pairs of nodes in the network. We selected these macroscopic network properties because they are simple, yet they reflect several local (degree-related), mesoscale (clustering-related) and global (distance-related) properties of the networks.

2.3. Multinode Centrality. Apart from computing the centrality of individual graph nodes, one can define and quantify also the centrality of sets of nodes (see Figure 1). Multinode centrality analyses have already been performed for different types of ecological networks, including food webs [26] and habitat networks [27]. The most central multinode sets of n = 1 to 4 nodes were identified for the 27 food webs, according to two different aspects of key player selection. First, how to best fragment (disrupt) the network by removing n key nodes (the "negative" version of the key player problem; KPP-Neg) and, second, how to best send a message out from n nodes of the network to the others (the "positive" version; KPP-Pos, see [15]). For KPP-Neg, we determined the most central node sets considering binary (F) and distance-weighted (FR) fragmentation centrality. For KPP-Pos, we determined the most central node sets considering binary m-reach centrality (Mm) with m = 1, 2 and 3 steps (M1, M2, and M3, respectively) and distance-weighted reachability (DR). Each of the four multinode centrality measures was computed for n = 1 to 4 nodes (n = 1 is clearly single-node). Multinode key sets were calculated using Pyntacle, our high-performance network analysis tool.

2.4. Nestedness. The nestedness of presence-absence ecological data [28] has a rich literature with well-developed methods ([29, 30]; for software, see [31]). The nestedness approach has also been extended to ecological interactions in binary networks [32, 33]. Here, we study the nestedness of ecological interaction networks in a very different way (see [15, 20, 25]), quantifying the set-subset relationships of central nodes in a network. 
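Before turning to the set-subset overlaps, a minimal sketch of the set-level reach measure used for KPP-Pos may help: the m-reach of a node set is the number of other nodes reachable from any member of the set in at most m steps, and in small webs the best set of size n can be found by exhaustive search. The sketch uses networkx and a stand-in graph rather than Pyntacle and the food web data, and the brute-force search is purely illustrative.

```python
from itertools import combinations
import networkx as nx

def m_reach(G, node_set, m):
    """Number of nodes outside node_set reachable from it in at most m steps."""
    reached = set()
    for source in node_set:
        # distances up to the cutoff m from this member of the set
        reached.update(nx.single_source_shortest_path_length(G, source, cutoff=m))
    return len(reached - set(node_set))

def best_set(G, n, m):
    """Exhaustive search for the most central set of n nodes (feasible for small webs)."""
    return max(combinations(G.nodes(), n), key=lambda s: m_reach(G, s, m))

# As in the toy network of Figure 1, the single best node need not belong to the
# best pair, which is exactly the source of non-nested key player sets.
G = nx.karate_club_graph()          # stand-in network, for illustration only
print(best_set(G, 1, 2), best_set(G, 2, 2))
```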
We calculated the nestedness of central node sets (i.e., the overlap among the sets of size n = 1 to 4) using the Nrow metric [34]. Nrow is the average percentage of nodes from smaller sets that are contained in larger sets, taking all possible pairs of sets. For example, for the food web demp au, the M2 key player sets for n = 1 to 4 nodes were {0} for n = 1, {0 2} for n = 2, {0 68 76} for n = 3, and {76 18 37 66} for n = 4. For n = 1 and n = 2, there is perfect overlap. For n = 1 and n = 3, there is also full containment, since the smaller set (n = 1) is a subset of the larger one (n = 3). For n = 2 and n = 4, there is no overlap, since the two sets have no common elements. Averaging all the 6 pairwise overlaps, we have Nrow = 47.22, which is the nestedness value for M2 in the demp au food web (see the species identities for this food web in Discussion). The same was done for the remaining centralities (F, FR, M1, M3, and DR) and for all food webs.

2.5. Statistical Analysis. We compared the 9 topological properties of the 27 food webs with their 6 nestedness metrics by Spearman correlation, because most topological properties were not normally distributed. We considered only correlations of 0.60 and above (as well as −0.60 and below). Correlations were calculated in R 3.3.0 [35].

3.2. Nestedness. Our question was whether topology has any significant effect on the nestedness of keystone species complexes in the studied 27 food webs. Between 9 topological properties and 6 nestedness metrics for each food web, we analysed 54 correlations. Only 4 of them were significant (shown in Figure 2), and in each of these M2 was the nestedness index (F, FR, DR, M1, and M3 did not show any significant correlation). M2 correlated positively with DF and avSPL, and negatively with C and avD (N, L, d, avCC, and wCC did not show any significant correlation).

Only a few topological features can be used as a proxy for assessing the nestedness of central node sets, but most of these show quite strong correlations. Our results suggest that in networks where shortest paths are shorter and density is higher, nestedness is lower, so systems-based conservation can be less predictive and efficient. One example is the Sutton tussock grassland in springtime (Figure 3(a), Supplementary Material). Here, the single most central organism in the network is unidentifiable detritus (#0, black in Figure 3(a)). The most central pair is the diatom Cocconeis sp. and the larvae of the riffle beetle Hydora nitida (#10 and #61, blue). The group of the three most central network positions is the red alga Audouinella sp., the diatom Navicula avenacea, and the caddisfly Pycnocentrodes spp. (#9, #30, and #70, red). The four most central organisms are the alga Epithemia zebra, the diatom Eunotia spp., the fishfly Archicauliodes diversus, and Chironomid type "Diamesid blond" (#18, #19, #49, and #52, orange). Hence, the increasing core of key organisms is perfectly unnested (M2 = 0, up to 4 groups). Accordingly, DF is low (0.51), C is high (0.14), avD is high (10.49), and avSPL is small (2.39). Apart from the single-node core (n = 1), the larger cores (n > 1) are always composed of both plants (e.g., diatoms) and animals (e.g., caddisfly). 
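The Nrow calculation in the worked example above can be reproduced directly. The snippet below uses the demp au M2 key player sets quoted in the text and returns 47.22; the function name is illustrative, and the percentage-of-the-smaller-set reading of Nrow follows the verbal definition given here rather than [34] itself.

```python
from itertools import combinations

def nrow(key_sets):
    """Average percentage of nodes of the smaller set that are contained in the
    larger set, taken over all pairs of key player sets."""
    overlaps = []
    for A, B in combinations(key_sets, 2):
        small, large = (A, B) if len(A) <= len(B) else (B, A)
        overlaps.append(100.0 * len(small & large) / len(small))
    return sum(overlaps) / len(overlaps)

# M2 key player sets for the demp au food web (n = 1 to 4), as quoted in the text
demp_au_m2 = [{0}, {0, 2}, {0, 68, 76}, {76, 18, 37, 66}]
print(round(nrow(demp_au_m2), 2))   # 47.22
```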
On the contrary, in less connected and less compact networks, nestedness is higher, so a multispecies view largely reinforces the results of single-species analyses. One example is the Dempsters tussock grassland in autumn (Figure 3(b), Supplementary Material). Here, the single most central organism in the network is unidentifiable detritus (#0, black). The most central pair is unidentifiable detritus and terrestrial invertebrates (#2, blue). The group of the three most central network positions again contains unidentifiable detritus together with two further nodes (#68 and #76). Here, the composition of the core is a little bit more nested (M2 = 47.22) and, accordingly, DF is somewhat higher (0.53), C is lower (0.12), avD is a little lower (9.88), and avSPL is longer (2.47).

Table 1: Topological properties and nestedness of multinode centrality sets for 27 food webs. The topological properties include the number of nodes (N), the number of edges (L), diameter (d), average degree (avD), average shortest path length (avSPL), connectance (C), average clustering coefficient (avCC), weighted clustering coefficient (wCC), and distance-based fragmentation (DF). Nestedness is always calculated for sets of n = 1 to 4 nodes, based on fragmentation (F), distance-based fragmentation (FR), weighted reachability (DR), and binary m-reach for m = 1 (M1), 2 (M2), and 3 (M3) steps.

The Supplementary Material shows the nestedness patterns for each food web. The numbers are the codes for species, and these are generally not comparable for different networks. However, node #0 is almost always unidentifiable detritus (or some similarly large aggregated group, e.g., terrestrial invertebrate remains). In many networks, this is part of the key player complexes. Biologically speaking, this is an artefact: the detritus is clearly a well-connected component of food webs. Only other species in the key player complexes can be biologically interpreted. It is also noted that unidentifiable detritus, even if it is frequently the key group for n = 1, is frequently missing from larger key player sets (e.g., for n = 4 in the demp au food web). So, even if it dominates the network structure in itself, its position is not significant anymore if we think in terms of a larger network core.

Apart from the large aggregated groups typically being in the centre of the network, the four organisms that can be in the key position also in single-species cores (n = 1) are the diatom Fragilaria vaucheriae (#19 in the broad food web), the shore crab Hemigrapsus oregonensis (#45 in the carpinteria food web), the mayfly Deleatidium spp. (#34 in the north food web), and the diatom Rhoicosphenia curvata (#16 in the powder food web). Hemigrapsus appears in all of the four studied key player sets in the carpinteria food web (n = 1, 2, 3, 4). Some communities are described by several versions of the food web (e.g., seasonal versions like demp au, demp sp, and demp su). In some cases, these versions differ a lot in nestedness (demp and sutton), while in other cases, there is only a small difference between the versions (aka and cow).

Discussion

The dynamical behaviour of complex ecological systems can be dominated by a few critically important components. Finding these could dramatically increase our understanding, the predictability of models, and the efficiency of management efforts. We studied a comparable set of empirical food webs and identified the structurally most important n nodes in them. Whether these small sets were nested was correlated to some topological properties of these networks. 
Network features influencing nestedness can be regarded as topological constraints on the predictability and efficiency of management and systems-based conservation. It remains unclear to us how M2 and M3 can be negatively and positively correlated with avD, respectively.

We need a much better understanding of the biology of the key groups and the ecology of nested vs. nonnested communities. If certain groups (e.g., zooplankton and diatoms) appear frequently in the core of food webs, these can be thought to be real keystone species. This is especially important if the core is nested: this means that the particular community is really dominated by a single species. We still know nothing about the kinds of communities (or the set of abiotic factors) that can be associated with nested patterns. Biologically speaking, this is the most promising future research line. All of our results are based on a set of 27 empirical food webs in the size range between 48 and 128 trophic groups. This is the typical size scale for food webs in the literature. All the webs were described by the same methodological standards, so they are comparable to each other. In order to see if these results are generalizable, research is needed in at least two directions.

First, one wants to see if topological properties scale with network size. For this, much larger networks should be studied, and the topological properties studied here can be more and more relevant and interesting for larger graphs. The limitation here is that empirical networks are not larger. Much larger networks (N > 500) could be constructed by dramatically increasing the resolution of trophic groups (e.g., by adding bacteria and replacing trophic groups by biological species), but these networks would not be biologically comparable to the present ones (even if being mathematically more interesting).

Second, toy networks of the same size range can be generated by various algorithms (already in progress), and empirical topologies could be compared to the theoretical distributions. This kind of randomization analysis is fairly straightforward in community ecology; however, it is not easy to see which generative algorithms give the most realistic results (e.g., [36] but see [37]). These studies could reveal if the reported relationships are universal properties of networks in general, or whether they are specific to food webs for some biological (ecological) reasons (Capocefalo et al., unpublished). If the results are food web-specific, we need to understand the biological reasons. If the results are shown to be of a general nature, conclusions can be drawn also in other fields of research. For example, terrorist networks have been shown to have large average shortest paths and low density [38], properties suggesting that their efficient "management" is possible, in a security and defence sense.

This paper is of mostly conceptual and methodological nature. We suggest that the search for the cores of ecosystem networks opens several research lines that could massively contribute to systems-based conservation biology and management, with applications ranging from marine fisheries to pollination systems.
Figure 1: Toy network illustrating the nonnested centrality of node sets. The number of nodes reachable from nodes a, b, c, and d in two steps (m = 2) equals 11, 9, 9, and 7, respectively. Thus, node a has the highest m-reach centrality in the network. Yet, from the (a, d) set of nodes only 12, and from the (a, b) or (a, c) sets of nodes only 13, while from the (b, c) set of nodes, 14 other nodes are reachable in two steps. Thus, the (b, c) set is more central than the other sets, based on reachability. The highest centrality node (a) is not a subset of the highest centrality set of two nodes (b, c).

Figure 3: The food webs of the Sutton tussock grassland in spring (a; sutton sp) and the Dempster tussock grassland in autumn (b; demp au). The coloured species are explained in the text.
2019-01-31T15:49:49.011Z
2018-12-02T00:00:00.000
{ "year": 2018, "sha1": "b1337547be34b1519bf1856a29e2dd3c8cefaff9", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/complexity/2018/1979214.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "a6c2cbae76af679a488834ac9a5ab869ef77d96a", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Computer Science" ] }
119337985
pes2o/s2orc
v3-fos-license
On refinement of certain laws of classical electrodynamics The problems considered refer to the material equations of electric and magnetoelectric induction. Some contradictions found in fundamental studies on classical electrodynamics have been explained. The notion of magnetoelectric induction has been introduced, which permits symmetrical writing of the induction laws. It is shown that the results of the special theory of relativity can be obtained from these laws through the Galilean transformations. The permittivity and permeability of material media are shown to be independent of frequency. The notions of magnetoelectrokinetic and electromagnetopotential waves and of kinetic capacity have been introduced. It is shown that along with the longitudinal Langmuir resonance, a transverse resonance is possible in nonmagnetized plasma, and both resonances are degenerate. A new notion, the scalar-vector potential, is introduced, which permits solution of all present-day problems of classical electrodynamics. The use of the scalar-vector potential makes the magnetic field notion unnecessary.

Introduction

Until now, some problems of classical electrodynamics involving the laws of electromagnetic induction have been interpreted in a dual or even controversial way. As an example, let us consider how the homopolar generator operation is explained in different works. In [1] this is done using the Faraday law specified for the "discontinuous motion" case. In [2] the rule of flow is rejected and the operation of the homopolar generator is explained on the basis of the Lorentz force acting upon charges. The contradictory approaches are most evident in Feynman's work [2] (see page 53): the rule of flow states that the contour e.m.f. is equal to the opposite-sign rate of change in the magnetic flux through the contour when the flux varies either with the changing field or due to the motion of the contour (or to both). Two options, "the contour moves" or "the field changes", are indistinguishable within the rule. Nevertheless, we use these two completely different laws to explain the rule for the two cases: [v × B] for the "moving contour" and rot E = −∂B/∂t for the "changing field". And further on: There is hardly another case in physics when a simple and accurate general law has to be interpreted in terms of two different phenomena. Normally, such a beautiful generalization should be based on a unified fundamental principle. Such a principle is absent in our case.

The interpretation of the Faraday law in [2] is also commonly accepted: Faraday's observation led to the discovery of a new law relating electric and magnetic fields: the electric field is generated in the region where the magnetic field varies with time. There is however an exception to this rule too, though the above studies do not mention it. Outside an infinitely long solenoid there is no magnetic field. However, as soon as the current through such a solenoid is changed, an electric field is excited externally. The exceptions seem to be too numerous. The situation really causes concern when such noted physicists as Tamm and Feynman have no common approach to this seemingly simple question.

It is known [3] that classical electrodynamics fails to explain the phenomenon of phase aberration. As applied to propagation of light, the phenomenon can be explained only in terms of the special theory of relativity (STR). However, the Maxwell equations are invariant with respect to the covariant STR transformations, and there is therefore every reason to hope that they can furnish the required explanation of the phenomenon. 
It is well known that the electric and magnetic inductivities of material media can depend on frequency, i.e. they can exhibit dispersion. But even Maxwell himself, who was the author of the basic equations of electrodynamics, believed that ε and μ were frequency-independent fundamental constants. How the idea of ε- and μ-dispersion appeared and evolved is illustrated vividly in the monograph of well-known specialists in the physics of plasma [4]: while working at the equations of electrodynamics of material media, J. Maxwell looked upon electric and magnetic inductivities as constants (that is why this approach was so lasting). Much later, at the beginning of the XX century, Heaviside and R. Wull put forward their explanation for phenomena of optical dispersion (in particular the rainbow), in which electric and magnetic inductivities came as functions of frequency. Quite recently, in the mid-50s of the last century, physicists arrived at the conclusion that these parameters were dependent not only on the frequency but on the wave vector as well. That was a revolutionary breakaway from the current concepts. The importance of the problem is clearly illustrated by what happened at a seminar held by L. D. Landau in 1954, where he interrupted A. L. Akhiezer reporting on the subject: "Nonsense, the refractive index cannot be a function of the refractive index". Note, this was said by L. D. Landau, an outstanding physicist of our time.

What is the actual situation? Running ahead, I can admit that Maxwell was right: both ε and μ are frequency-independent constants characterizing one or another material medium. Since dispersion of the electric and magnetic inductivities of material media is one of the basic problems of present-day physics and electrodynamics, the system of views on these questions has to be radically altered again (for the second time!). In this context the challenge of this study was to provide a comprehensive answer to the above questions and thus to arrive at a unified and unambiguous standpoint. This will certainly require a revision of the relevant interpretations in many fundamental works.

1. Equations of electromagnetic induction in moving coordinates

The Maxwell equations do not permit us to write down the fields in moving coordinates proceeding from the known fields measured in the stationary coordinates. Generally, this can be done through the Lorentz transformations, but they do not follow from classical electrodynamics. In a homopolar generator, the electric fields are measured in the stationary coordinates, but they are actually excited in the elements which move relative to the stationary coordinate system. Therefore, the principle of the homopolar generator operation can be described correctly only in the framework of the special theory of relativity (STR). This brings up the question: Can classical electrodynamics furnish correct results for the fields in a moving coordinate system, or at least offer an acceptable approximation? If so, what form will the equations of electromagnetic induction have? The Lorentz force is

F′ = qE + q[v × B].   (1.1)

It bears the name of Lorentz because it follows from his transformations, which permit writing the fields in the moving coordinates if the fields in the stationary coordinates are known. Henceforward, the fields and forces generated in a moving coordinate system will be indicated with primed symbols. The clues of how to write the fields in moving coordinates if they are known in the stationary system are available even in the Faraday law. 
Let us specify the form of the Faraday law: (1. 2) The specified law, or, more precisely, its specified form, means that E r and l d r should be primed if the contour integral is sought for in moving coordinates and unprimed for stationary coordinates. In the latter case the right-hand side of Eq. (1.2) should contain a partial derivative with respect to time which fact is generally not mentioned in literature. The total derivative with respect to time in (1.6) which suggests that the motion in the magnetic field excites an additional electric field described by the final term in Eq. (1.6). Note that Eq. (1.6) is obtained from the slightly specified Faraday law and not from the Lorentz transformations. According to Eq. (1.6), a charge moving in the magnetic field is influenced by a force perpendicular to the direction of the motion. However, the physical nature of this force has never been considered. This brings confusion into the explanation of the homopolar generator operation and does not permit us to explain the electric fields outside an infinitely long solenoid on the basis of the Maxwell equations. and further: The first two terms in the right-hand side of Eq. (1.9) can be considered as the total derivative of the vector potential with respect to time: (1.10) As seen in Eq. (1.9), the field strength, and hence the force acting upon a charge consists of three components. The first component describes the pure time variations of the magnetic vector potential. The second term in the right-hand side of Eq. (1.9) is evidently connected with the changes in the vector potential caused by the motion of a charge in the spatially varying field of this potential. The origin of the last term in the right-hand side of Eq. (1.9) is quite different. It is connected with the potential forces because the potential energy of a charge moving in the potential field B r r describes the force just as the scalar potential gradient does. Using Eq. (1.9), we can explain physically all the strength components of the electronic field excited in the moving and stationary cooperates. If our concern is with the electric fields outside a long solenoid, where the no magnetic field, the first term in the right-hand side of Eq. (1.9) come into play. In the case of a homopolar generator, the force acting upon a charge is determined by the last two terms in the right-hand side of Eq.(1.9), both of them contributing equally. It is therefore incorrect to look upon the homopolar generator as the exception to the flow rule because, as we saw above, this rule allows for all the three components. Using the rotor in both sides of Eq. (1.10) and taking into account rot grad º 0, we obtain ¶ ¶ r r -= Ñ for the "varying field" are absolutely different laws is contrary to fact. The Faraday law is just the sole unified fundamental principle which Feynman declared to be missing. Let us clear up another Feynman's interpretation. Faraday's observation in fact led him to discovery of a new law relating electric and magnetic fields in the region where the magnetic field varies with time and thus generates the electric field. This correlation is essentially true but not complete. As shown above, the electric field can also be excited where there is no magnetic field, namely, outside an infinitely long solenoid. A more complete formulation follows from Eq. (1.9) and the relationship This suggests that a moving or stationary charge interacts with the field of the magnetic vector potential rather than with the magnetic field. 
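Since the numbered equations of this section are referred to rather than written out above, it may help to recall the standard non-relativistic chain the argument follows. The correspondence with Eqs. (1.2), (1.6) and (1.9) is my reading of the surrounding text, and the notation (rot, grad, primes for quantities in the moving frame) follows the paper's conventions; the original equations may differ in detail.

```latex
% Specified Faraday law for a contour moving at constant velocity \vec{v},
% together with the convective (total) derivative of the magnetic field:
\oint \vec{E}\,'\, d\vec{l}\,' = -\,\frac{d\Phi_{B}}{dt}, \qquad
\frac{d\vec{B}}{dt} = \frac{\partial \vec{B}}{\partial t} + (\vec{v}\cdot\nabla)\vec{B}.

% With \operatorname{div}\vec{B}=0 this yields the additional motional term:
\operatorname{rot}\vec{E}\,' = -\frac{\partial \vec{B}}{\partial t}
   + \operatorname{rot}\bigl[\vec{v}\times\vec{B}\bigr]
\quad\Longrightarrow\quad
\vec{E}\,' = \vec{E} + \bigl[\vec{v}\times\vec{B}\bigr].

% Substituting \vec{B}=\operatorname{rot}\vec{A} and using
% [\vec{v}\times\operatorname{rot}\vec{A}]
%   = \operatorname{grad}(\vec{v}\cdot\vec{A}) - (\vec{v}\cdot\nabla)\vec{A}
% (for constant \vec{v}) gives the three-component field the text describes:
\vec{E}\,' = -\frac{\partial \vec{A}}{\partial t}
             - (\vec{v}\cdot\nabla)\vec{A}
             + \operatorname{grad}\bigl(\vec{v}\cdot\vec{A}\bigr).
```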
The knowledge of this potential and its evolution can only permit us to calculate all the force components acting upon charges. The magnetic field is merely a spatial derivative of the vector field. As follows from the above consideration, it is more appropriate to write the Lotentz force in terms of the magnetic vector potential (1.17) (1.24) These equations present the law of magnetoelectric induction written in terms of the electric vector potential. To illustrate the importance of the introduction of the electric vector potential, we come back to an infinitely long solenoid. The situation is much the same, and the only change is that the vectors B r are replaced with the vectors D r . Such situation is quite realistic: it occurs when the space between the flat capacitor plates is filled with high electric inductivities. In this case the displacement flux is almost entirely inside the dielectric. The attempt to calculate the magnetic field outside the space illustration, let us analyze two parallel conducting plates with the electric field E r in between. In this case the surface charge r S per unit area of each plate is eЕ. If the other reference system is made to move parallel to the plates in the field Е at the velocity DV, this motion will generate an additional field DН = DVeЕ. If a third reference system starts to move at the velocity DV, within the above moving system, this motion in the field DН will generate DЕ = meDV 2 Е, which is another contribution to the field Е. The field E ¢ thus becomes stronger in the moving system than it is in the stationary one. It is reasonable to suppose that the surface charge at the plates of the initial system has increased by This technique of field calculation was described in [6]. If As an example, let us analyze how Eqs. (1.25) can account for the phenomenon of phase aberration which was inexplicable in classical electrodynamics. Assume that there are plane wave components H Z and E X , and the primed system is moving along the x-axis at the velocity V X . The field components with in the primed coordinates can be written as . , , (1.28) Hence, the Poynting vector no longer follows the direction of the y-axis. It is in the xy-plane and tilted about the y-axis at an angle determined by Eqs. (1.27). The ratio between the absolute values of the vectors Е and Н is the same in both the systems. This is just what is known as phase aberration in classical electrodynamics. Is there any dispersion of electric and magnetic inductivities in material media? It is noted in the introduction that dispersion of electric and magnetic inductivities of material media is a commonly accepted idea. The idea is however not correct. To explain this statement and to gain a better understanding of the physical essence of the problem, we start with a simple example showing how electric lumped-parameter circuits can be described. As we can see below, this example is directly concerned with the problem of our interest and will give us a better insight into the physical picture of the electrodynamic processes in material media. In a parallel resonance circuit including a capacitor С and an inductance coil L, the applied voltage U and the total current I S through the circuit are related as (2. 2) The term in brackets is the total susceptance s х of the circuit, which consists of the capacitive s с and inductive s L components (2.3) Eq. (2.2) can be re-written as is the resonance frequency of a parallel circuit. From the mathematical (i.e. 
other than physical) standpoint, we may assume a circuit that has only a capacitor and no inductance coil. Its frequency -dependent capacitance is (2.5) Another approach is possible, which is correct too. Eq. (2.2) can be re-written as (2.6) In this case the circuit is assumed to include only an inductance coil and no capacitor. Its frequency -dependent inductance is (2.7) Using the notion of Eqs. (2.5) and (2.7), we can write (2.9) Eqs ( i.e. ) ( * w C is the total susceptance of this circuit divided by frequency: (2.11) and ) ( * w L is the inverse value of the product of the total susceptance and the frequency. Amount ) ( * w C is constricted mathematically so that it includes C and L simultaneously. The same is true for We shall not consider here any other cases, e.g., series or more complex circuits. It is however important to note that applying the above method, any circuit consisting of the reactive components C and L can be described either through frequency -dependent inductance or frequency -dependent capacitance. But this is only a mathematical description of real circuits with constant -value reactive elements. It is well known that the energy stored in the capacitor and inductance coil can be found as ( 2.13) But what can be done if we have There is no way of substituting them into Eqs. (2.12) and (2.13) because they can be both positive and negative. It can be shown readily that the energy stored in the circuit analyzed is (2.16) Having written Eqs. (2.14), (2.15) or (2.16) in greater detail, we arrive at the same result: where U is the voltage at the capacitor and I is the current through the inductance coil. Below we consider the physical meaning jog the magnitudes e(w) and m(w) for material media. Plasma media A superconductor is a perfect plasma medium in which charge carriers (electrons) can move without friction. In this case the equation of motion is Here L k is the kinetic inductivity of the medium. Its existence is based on the fact that a charge carrier has a mass and hence it possesses inertia properties. where e 0 and m 0 are the electric and magnetic inductivities in vacuum, C j r and L j r are the displacement and conduction currents, respectively. As was shown above, L j r is the inductive current. As Eq. (2.24) shows, the inductivities of plasma (both electric and magnetic) are frequencyindependent and equal to the corresponding parameters for vacuum. Besides, such plasma has another fundamental material characteristic -kinetic inductivity. Eqs. (2.24) hold for both constant and variable fields. For harmonic fields (2.27) Taking the bracketed value as the specific susceptance s x of plasma, we can write (2.31) The e*(w) -parameter is conventionally called the frequency-dependent electric inductivity of plasma. In reality however this magnitude includes simultaneously the electric inductivity of vacuum aid the kinetic inductivity of plasma. It can be found as (2.32) It is evident that there is another way of writing s Х , * (2.34) L k *(w) written this way includes both e 0 and L k . Eqs. (2.29) and (2.33) are equivalent, and it is safe to say that plasma is characterized by the frequency-dependent kinetic inductance L k *(w) rather than by the frequency-dependent electric inductivity e*(w). Eq. (2.27) can be re-written using the parameters e*(w) and L k *(w) (2.36) Eqs. (2.35) and (2.36) are equivalent.Thus, the parameter e*(w) is not an electric inductivity though it has its dimensions. The same can be said about L k *(w). 
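The statement that ε*(ω) and L_k*(ω) are only two equivalent notations for the same pair of constant parameters can be checked numerically. The sketch below assumes a carrier density and builds the specific susceptance of a loss-free plasma from the frequency-independent ε0 and L_k = m/(ne²); the conventional ε*(ω) form reproduces it exactly, and the resonance frequency 1/√(ε0·L_k) coincides with the Langmuir frequency without any geometric dimension entering.

```python
import numpy as np

e, m_e, eps0 = 1.602e-19, 9.109e-31, 8.854e-12   # SI constants

n = 1.0e18                          # assumed free-carrier density, m^-3
L_k = m_e / (n * e**2)              # kinetic inductivity of the plasma (frequency-independent)

w_res = 1.0 / np.sqrt(eps0 * L_k)                 # resonance frequency built from eps0 and L_k
w_langmuir = np.sqrt(n * e**2 / (eps0 * m_e))     # Langmuir (plasma) frequency
print(np.isclose(w_res, w_langmuir))              # True: same quantity, no size dependence

w = np.linspace(0.3, 3.0, 7) * w_res
suscept = w * eps0 - 1.0 / (w * L_k)              # susceptance from the two constant parameters
eps_star = eps0 * (1.0 - (w_res / w) ** 2)        # conventional "frequency-dependent" permittivity
print(np.allclose(suscept, w * eps_star))         # True: eps*(w) only repackages eps0 and L_k
```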
We can see readily that (2.38) These relations describe the physical meaning of e*(w) and L k *(w). Of course, the parameters e*(w) and L k *(w) are hardly usable for calculating energy by the following equations (2.40) For this purpose the Eq. (2.15)-type fotmula was devised in [7]: (2.41) Using Eq. (2.41), we can obtain 2 0 (2.42) The same result is obtainable from (2.43) As in the case of a parallel circuit, either of the parameters e*(w) and L k *(w), similarly to C*(w) and L*(w), characterize completely the electrodynamic properties of plasma. The case e*(w) = 0 L k *(w) = ¥ (2.44) corresponds to the resonance of current.It is shown below that under certain conditions this resonance can be transverse with respect to the direction of electromagnetic waves. It is known that the Langmuir resonance is longitudinal. No other resonances have ever been detected in nonmagnetized plasma. Nevertheless, transverse resonance is also possible in such plasma, and its frequency coincides with that of the Langmuir resonance. To understand the origin of the transverse resonance, let us consider a long line consisting of two perfectly conducting planes (see Fig. 2.1). First, we examine this line in vacuum. If a d.c. voltage (U) source is connected to an open line the energy stored in its electric field is , 2 The specific potential energy of the electric field is 2 0 (2.47) If the line is short-circuited at the distance z from its start and connected to a d.c. current (I) source, the energy stored in the magnetic field of the line is The specific energy of the magnetic field is 2 0 (2.50) To make the results obtained more illustrative, henceforward, the method of equivalent circuits will be used along with mathematical description. It is seen that C ЕS and L HS increase with growing z. The line segment dz can therefore be regarded as an equivalent circuit ( Fig. 2.2а). If plasma in which charge carriers can move free of friction is placed within the open line and then the current I, is passed through it, the charge carriers moving at a certain velocity start storing kinetic energy. Since the current density is the total kinetic energy of all moving charges is (2.52) On the other hand, where L kS is the total kinetic inductance of the line. Hence, (2.56) It is however obvious from calculation that the resonance frequency is absolutely independent of whatever dimension. Indeed, This brings us to a very interesting result: the resonance frequency of the macroscopic resonator is independent of its size. It may seem that we are dealing here with the Langmuir resonance because the obtained frequency corresponds exactly to that of the Langmuir resonance. We however know that the Langmuir resonance characterizes longitudinal waves. The wave propagating in the phase velocity in the z-direction is equal to infinity and the wave vector is 0 = (2.58) The group and phase velocities are For the plasma under consideration, the phase velocity of the electromagnetic wave is equal to infinity. Hence, the distribution of the fields and currents over the line is uniform at each instant of time and independent of the z-coordinate. This implies that, on the one hand, the inductance L HS has no effect on the electrodynamic processes in the line and, on the other hand, any two planes can be used instead of conducting planes to confine plasma above and below. Eqs. (2.58) , (2.59) and (2.60) indicate that we have transverse resonance with an infinite Qfactor. The fact of transverse resonance, i.e. 
different from the Langmuir resonance, is most obvious when the Q-factor is not equal to infinity. Then k z ¹ 0 and the transverse wave is propagating in the line along the direction perpendicular to the movement of charge carriers. True, we started our analysis with plasma confined within two planes of a long line, but we have thus found that the presence of such resonance is entirely independent of the line size, i.e. this resonance can exist in an infinite medium. Moreover, in infinite plasma transverse resonance can coexist with the Langmuir resonance characterizing longitudinal waves. Since the frequencies of these resonances coincide, both of them are degenerate. Earlier, the possibility of transverse resonance was not considered. To approach the problem more comprehensively, let us analyze the energy processes in loss-free plasma. The characteristic resistance of plasma determining the relation between the transverse components of electric and magnetic fields can be found from (2.63) The total specific energy thus amounts to 2 0 (2.64) Hence, to find the total specific energy accumulated in unit volume of plasma, it is not sufficient to allow only for the fields Е and Н. At the point w = w r W H = 0 (2.65) W E = W k , i.e. there is no magnetic field in the plasma, and the plasma is a macroscopic electromechanical cavity resonator of frequency w r .. At w > w r the wave propagating in plasma carries three types of energy -magnetic, electric and kinetic. Such wave can therefore be-called magnetoelectrokinetic. The kinetic wave is a currentdensity wave It is shifted by p/2 with respect to the electric wave. Up to now we have considered a physically unfeasible case with no losses in plasma, which corresponds to infinite Q-factor of the plasma resonator. If losses occur, no matter what physical processes caused them, the Q-factor of the plasma resonator is a final quantity. For this case the Maxwell equations become . 1 , 0 . The term s р.ef E r allows for the loss, and the index ef near the active conductivity emphasizes that we are interested in the fact of loss and do not care of its mechanism. Nevertheless, even though we do not try to analyze the physical mechanism of loss, we should be able at least to measure s р.ef . For this purpose, we choose a line segment of the length z 0 which is much shorter than the wavelength in dissipative plasma. This segment is equivalent to a circuit with the following lumped parameters There is of course some uncertainty in this approach because the surface impedance is dependent on the type of the field-current relation (local or non-local). Although the approach is simplified, the qualitative results are quite adequate. True, a more rigorous solution is possible. The wave propagating deep inside the medium decreases by the law In such a medium the wavelength is l g =2pd . (2.77) I.e. much shorter than the free-space wavelength. Further on we concentrate on the case l g >> l 0 at the point w = w р , i.e. V F ½ w = wр >>c. Discussion of results. We have found that e (w) is not dielectric inductivity permittivity. Instead, it includes two frequency-independent parameters e 0 and L k . What is the reason for the physical misunderstanding of There is however another reason for this serious mistake in the present-day physics [7] as an example. This study states that there is no difference between dielectrics and conductors at very high frequencies. 
On this basis the authors suggest the existence of a polarization vector in conducting media and this vector is introduced from the relation where n is the charge carrier density, m r r is the current charge displacement. This approach is physically erroneous because only bound charges can polarize and form electric dipoles when the external field overcoming the attraction force of the bound charges accumulates extra electrostatic energy in the dipoles. In conductors the charges are not bound and their displacement would not produce any extra electrostatic energy. This is especially obvious if we employ the induction technique to induce current (i.e. to displace charges) in a ring conductor. In this case there is no restoring force to act upon the charges, hence, no electric polarization is possible. In [7] (2.84) The interpretation of e(w) as frequency-dependent inductivity has been harmful for correct understanding of the real physical picture (especially in the educational processes). Besides, it has drawn away the researchers attention from some physical phenomena in plasma, which first of all include the transverse plasma resonance and three energy components of the magnetoelectrokinetic wave propagating in plasma. Below, the practical aspects of the results obtained are analyzed, which promise new data and refinement of the current views. Practical aspects. Plasma can be used first of all to construct a macroscopic single-frequency cavity for development of a new class of electrokinetic plasma lasers. Such cavity can also operate as a band-pass filter. At high enough p Q the magnetic field energy near the transverse resonance is considerably lower than the kinetic energy of the current carriers and the electrostatic field energy. Besides, under certain conditions the phase velocity can much exceed the velocity of light. Therefore, if we want to excite the transverse plasma resonance, we can put where I CT is the extrinsic current. Eq. (2.87) is the harmonic oscillator equation whose right-hand side is typical of two-level lasers [8]. If there is no excitation source, we have a "cold". Laser cavity in which the oscillation damping follows the exponential law (2.88) i.e. the macroscopic electric flow Ф E (t) oscillates at the frequency w р . The relaxation time can be round as (2.89) If this cavity is excited by extrinsic currents, the cavity will operate as a band-pass filter with the pass band Transverse plasma resonance offers another important application -it can be used to heat plasma. High-level electric fields and, hence, high change-carrier energies can be obtained in the plasma resonator if its Q-factor is high, which is achievable at low concentrations of plasma. Such cavity has the advantage that the charges attain the highest velocities far from cold planes. Using such charges for nuclear fusion, we can keep the process far from the cold elements of the resonator. Such plasma resonator can be matched easily to the communication line. Indeed, the equivalent resistance of the resonator at the point w = w р is It should be remembered that the choice of the resonator length z 0 must comply with the requirement z 0 << l g ½ w= wp . Development of devices based on plasma resonator can require coordination of the resonator and free space. In this case the following condition is important: (2.93) (2.94) Such plasma resonators can be excited with d.c. current, as is the case with a monotron microwave oscillator [9]. 
It is known that a microwave diode (the plasma resonator in our case) with the transit angle of ~5/2p develops negative resistance and tends to self-excitation. The requirement of the transit angle equal to 5/2p correlates with the following d.c. voltage applied to the resonator: where а is the distance between the plates in the line. It is quite probable that this effect is responsible for the electromagnetic oscillations in semiconductive lasers. Dielectric media. Applied fields cause polarization of bound charges in dielectrics. The polarization takes some energy from the field source, and the dielectric accumulates extra electrostatic energy. The extent of displacement of the polarized charges from the equilibrium is dependent on the electric field and the coefficient of elasticity b, characterizing the elasticity of the charge bonds. These parameters are related as It is appropriate to examine two limiting cases: w >> w 0 and w << w 0 . If w >> w 0 , (2.103) and the dielectric behaves just like plasma. This case has prompted the idea that at high frequencies there is no difference between dielectrics and plasma. The idea served as a basis for introducing the polarization vector in conductors [7]. The difference however exists and it is of fundamental importance. In dielectrics, because of inertia, the amplitude of charge vibrations is very small at high frequencies and so is the polarization vector. The polarization vector is always zero in conductors. The equivalent circuits corresponding to these two cases are shown in Figs. 2.3а and b. It is seen that in the whole range of frequencies the equivalent circuit of the dielectric acts as a series oscillatory circuit parallel-connected to the capacitor operating due to the electric inductivity e 0 of vacuum (see Fig. 2.3b). The resonance frequency of this series circuit is obviously obtainable from в -the whole frequency range. Lake in the case of plasma, w 0 2 is independent of the line size, i.e. we have a macroscopic resonator whose frequency is only true when there are no bonds between individual pairs of bound charges. Like for plasma, ) ( * w e ¶ is specific susceptance of the dielectric divided by frequency. However, unlike plasma, this parameter contains three frequency-independent components: e 0 , L k ¶ and the static permittivity of the dielectric In the dielectric, resonance occurs when Three waves-magnetic, electric and kinetic-propagate in it too. Each of them carries its own type of energy. It not is not problematic to calculate them but we omit this here to save room. Magnetic media. The resonance phenomena in plasma and dielectrics are characterized by repeated electrostatickinetic and kinetic-electrostatic transformations of the charge motion energy during oscillations. This can be described as an electrokinetic process, and devices based on it (lasers, masers, filters, etc.) can be classified as electrokinetic units. However, another type of resonance is also possible, namely, magnetic resonance. Within the current concepts of frequency-dependent permeability, it is easy to show that such dependence is related to magnetic resonance. For example, let us consider ferromagnetic resonance. A ferrite magnetized by applying a stationary field Н 0 parallel to the z-axis will act as an anisotropic magnet in relation to the variable external field. The complex permeability of this medium has the form of a tensor [10]: (2.117) The quantity ) 1 ( can be described as kinetic capacitance. What is its physical meaning? 
If the direction of the magneticmoment does not coincide with that of the external magnetic field, the vector of the moment starts precessional motion at the frequency W about the magnetic field vector. The magnetic moment m r has the potential energy B m U m r r × -= . Like in a charged condenser, U m is the potential energy because the precessional motion is inertialess (even though it is mechanical) and it stops immediately when the magnetic field is lifted. In the magnetic field the processional motion lasts until the accumulated potential energy is exhausted and the vector of the magnetic moment becomes parallel to the vector 0 H r . The equivalent circuit for this case is shown in Magnetic resonance occurs at the point w=W and mт*(w) ® -¥. It is seen that the resonance frequency of the macroscopic magnetic resonator is independent of the line size and equals W. is not a frequency-dependent permeability. According to the equivalent circuit in Fig. 2.4, it includes m 0 , m and С k. It is easy to show that three waves propagate in this case-electric, magnetic and a wave carrying potential energy of the precessional motion of the magnetic moments about the vector 0 H r . The systems in which these types of waves are used can also be described as electromagnetopotential devices. Conclusions Thus, it has been found that along with the fundamental parameters ee 0 and mm 0 characterizing the electric and magnetic energy accumulated and transferred in the medium, there are two more basic material parameters L k and C k . They characterize kinetic and potential energy that can be accumulated and transferred in material media. L k was sometimes used to describe certain physical phenomena, for example, in superconductors [11], C k has never been known to exist. These four fundamental parameters ee 0 , mm 0 , L k and C k clarify the physical picture of the wave and resonance processes in material media in applied electromagnetic fields. Previously, only electromagnetic waves were thought to propagate and transfer energy in material media. It is clear now that the concept was not complete. In fact, magnetoelectrokinetic, or electromagnetopotential waves travel in material media. The resonances in these media also have specific features. Unlike closed planes with electromagnetic resonance and energy exchange between electric and magnetic fields, material media have two types of resonance -electrokinetic and magnetopotential. Under the electrokinetic resonsnce the energy of the electric field changes to kinetic energy. In the case of magnetopotential resonance the potential energy accumulated during the precessional motion can escape outside at the precession frequency. The notions of permittivity and permeability dispersion thus become physically groundless though e*(w) and m*(w) are handy for a mathematical description of the processes in material media. We should however remember their true meaning especially where educational processes are involved. Magnetic field problem As follows from the transformations in Eq. (1.25) if two charges move at the relative velocity V r , their interaction is determined not only by the absolute values of the charges but by the relative motion velocity as well. The new value of the interaction force is found as [12] (3.2) We can denote this potential as "scalar-vector", because its value is dependent not only on the charge involved but on the value and the direction of its velocity as well. 
The potential energy of the charge interaction is Using these equations, it is possible to calculate the force of the conductor-current interactions and allow, through superposition, for the interaction forces of all moving and immobile charges in the conductors. We thus obtain all currently existing laws of electromagneticm. In terms of the present-day theory of electromagnetism, the forces of the interaction of the conductors can be found by two methods. 1. One of the conductors (e.g., the lower one) generates the magnetic field H(r) in the location of the first conductor. This field is (3.4) The field E¢ is excited in the coordinate system moving together with the charges of the upper conductor: (3.5) I.e. the charges moving in the upper conductor experience the Lorentz force. This force per unit length of the conductor is r c (3.6) Eq. (3.6) can be obtained in a different way. Assume that the lower conductor excites a vector potential in the region of the upper conductor. The z-component of the vector potential is (3.7) The potential energy per unit length of the upper conductor carrying the current I 2 in the field of the vector potential A Z is (3.8) Since the force is the derivative of the potential energy with respect to the opposite-sign coordinate, it is written as (3.9) Both the approaches show that the interaction force of two conductors is the result of the interaction of moving charges: some of them excite fields, the others interact with them. The immobile charges representing the lattice do not participate in the interaction in this scheme. But the forces of the magnetic interaction between the conductors act just on the lattice. Classical electrodynamics does mot explain how the moving charges experiencing this force can transfer it to the lattice. The above models of iteration are in unsolvable conflict, and experts in classical electrodynamics prefer to pass it over in silence. The conflict is connected with estimation of the interaction force of two parallel-moving charges. Within the above models such two charges should be attracted. Indeed, the induction В caused by the moving charge g 1 at the distance r is (3.10) If another charge g 2 moves at the same velocity V in the same direction at the distance r from the first charge, the induction В at the location of g 2 produces the force attracting g 1 and g 2 . (3.11) An immovable observer would expect these charges to experience attraction along with the Coulomb repulsion. For an observer moving together with the charges there is only the Coulomb repulsion and no attraction. Neither classical electrodynamics not the special theory of relativity can solve the problem. Physically, the introduction of magnetic fields reflects certain experimental facts, but so far we can hardly understand where these fields come from. In 1976 it was reported in a serious experimental study that a charge appeared on a short-circuited superconducting solenoid when the current in it was attenuating. The results of [13] suggest that the value of the charge is dependent on its velocity, which is first of all in contradiction with the charge conservation law. The author of this study has also investigated this problem [6,12,14] (see below). It is useful to analyze here the interaction of current-carrying systems in terms of Eqs. (3.1), (3.2) and (3.3) [12,14]. We come back again to the interaction of two thin conductors with charges moving at the velocities V 1 and V 2 (Fig. 3.2). 
g 1 + , g 2 + and g 1 -, g 2 are the immobile and moving charges, respectively, pre unit length of the conductors. g 1 + and g 2 + refer to the positively charged lattice in the lower and upper conductors, respectively. Before the charges start moving, both the conductors are assumed to be neutral electrically, i.e. they contain the same number of positive and negative charges. Each conductor has two systems of unlike charges with the specific densities g 1 + , g 1 and g 2 + , g 2 -. The charges neutralize each other electrically. To make the analysis of the interaction forces more convenient, in Fig. 3.2 the systems are separated along the z-axis. The negative-sign subsystems (electrons) have velocities V 1 and V 2 . The force of the interaction between the lower and upper con- By adding up the four forces and remembering that the product of unlike charges and the product of like charges correspond to the attraction and repulsion forces, respectively, we obtain the total specific force per unit length of the conductor (3.13) where g 1 where g 1 and g 2 are the absolute values of specific charges, and V 1 , V 2 are taken with their signs. It is seen that Eqs. (3.6), (3.9) and (3.13) coincide though they were obtained by different methods. According to Feynman (see the introduction), the e.m.f. of the circuit can be interpreted using two absolutely different laws. The paradox has however been clarified. The force of the enteraction between the current-carrying systems can be obtained even by three absolutely different methods. But in the third method, the motion "magnetic field" is no longer necessary and the lattice can directly participate in the formation of the interaction forces. This was impossible with the previous two techniques. In practice the third method however runs into a serious obstacle. Assuming g 2 + = 0 and V 2 = 0, i.e. the interaction, for example, between the lower current-carrying system and the immobile charge g 2 the interaction force is (3. 15) This means that the current in the conductor is not electrically neutral, and the electric field , 4 is excited around the conductor, which is equivalent to an extra specific static charge on the conductor (3.17) Before [13], there was no evidence for generation of electric fields by d.c. currents. When Faraday and Maxwell formulated the basic laws of electrodynamics, it was impossible to confirm Eq. (3.17) experimentally because the current densities in ordinary conductors are too small to detect the effect. The assumption that the charge is independent of its velocity and the subsequent introduction of a magnetic field were merely voluntaristic acts. In superconductors the current densities permit us to find the correction for the charge 2 2 1 c V g experimentally. Initially, [13] was taken as evidence for the dependence of the value of the charge on its velocity. The author of this study has also investigated this problem [6,12,14], but, unlike [13], in his experiments current was introduced into a superconducting coil by an inductive non-contact method. Even in this case a charge appeared on the coil [6,12,14]. The experimental objects were superconducting composite Nb -Ti wires coated with copper, and it is not cleat what mechanism is responsible for the charge on the coil. It may be brought by mechanical deformation which causes a displacement of the Fermi level in the copper. Experiments on non-coated superconducting wires may be more informative. 
Anyhow, the subject has not been exhausted and further experimental findings are of paramount importance to fundamental physics. Using this model, we should remember that there is no reliable experimental data on static electric fields around the conductor. According to Eq. (3.16), such fields are excited because the value of the charge is dependent on its velocity. Is there any physical mechanism which could maintain the interacting current-carrying systems electrically neutral within this model? Such mechanism does exist. To explain it, let us consider the current-carrying circuit in Fig. 3.3. This is a superconducting thin film whose thickness is smaller than the field penetration depth in the superconductor. The current is therefore distributed uniformly over the film thickness. Assume that the bridge connecting the wide parts of the film is much narrower than the rest of the current-carrying film. If persistent current is excited in such a circuit, the current density and hence the current carrier velocity V 1 in the bridge will much exceed the velocity V 0 in the wide parts of the film. This potential difference can appear only due to the charge density gradient in the parts d 1 and d 2 , i.e. the density of charge carriers decreases with acceleration and increases with slowing down. The relation n 0 > n 1 should be fulfilled, where n 0 and n 1 are the current-carrier densities in the wide and narrow bridge parts of the film, respectively. It is clear that some energy is needed to accelerate charges which have masses. Let us find out where this energy comes from. On acceleration the electrostatic energy available in the electrostatic field of the current carriers converts into kinetic energy. The difference in electrostatic energy between two identical volumes having different electron densities can be written as where Dn = n 0 -n 1 , e is the electron charge, r is the electron radius. Since We see that the change in the current-carrier density is quite small, but this change is just responsible for the existence of the longitudinal electric field accelerating or slowing down the charges in the parts d 1 and d 2 . Let us call such fields "configuration fields" as they are connected with a certain configuration of the conductor. These fields are available in normal conductors too, but they are much smaller than the fields related to the Ohmic resistance. We can expect that a voltameter connected to the circuit, like is shown in Fig. 3.3, would be capable of registering the configuration potential difference in accordance with Eq. (3.18). If we used an ordinary liquid and a manometer instead of a voltameter, according to the Bernoulli equation, the manometer could register the pressure difference. For lead films, the configuration potential difference is ~10 -7 В, though it is not observablt experimentally. We can explain this before hand. As the velocities of the current carriers increase and their densities decrease, the electric fields njrmal to their motion enhance. These two precesses counterbalance each other. As a result, the normal component of the electric field has a zero balue in all parts of the film. In terms of the considered, this looks like (3.27) Again, we have a relation coinciding with Eqs. (3.6) and (3.9). However, in this case the currentcarrying conductors are neutral electrically. Indeed, if we analyze the force interaction. 
For example, for the interaction between the lower conductor and an immobile upper charge g 2 (putting g 2 + = 0 and V 2 = 0), the total interaction force is zero, i.e. the conductor with flowing current is electrically neutral. If we consider the interaction of two parallel-moving electron flows (taking g 1 + = g 2 + = 0 and V 1 = V 2 ), then, according to Eq. (3.28), two electron flows moving at the same velocity in the absence of a lattice experience only the Coulomb repulsion and none of the attraction implied by the magnetic field concept. Physically, in this model the force interaction of the current-carrying systems is not connected with any new field. The interaction is due to the enhancement of the electric fields normal to the direction of the charge motion. The phenomenological concept of the magnetic field is correct only when the charges of the current carriers are compensated by the charges of the immobile lattice; then the current carriers excite a magnetic field. The magnetic field concept is not correct for freely moving charges when there are no compensating lattice charges. In this case a moving charged particle or a flow of charged particles does not excite a magnetic field. Thus, the phenomenological concept of the magnetic field is true only in the former case. It is easy to show that, using the scalar-vector potential, we can obtain all the presently existing laws of magnetism. Besides, the approach proposed permits a solution of the problem of the interaction between two parallel-moving charges, which could not be solved in terms of the magnetic field concept. Problem of electromagnetic radiation Whatever occurs in electrodynamics is connected with the interaction of moving and immobile charges; the question is how this interaction is communicated, and the introduction of the scalar-vector potential answers it. The potential is based on the laws of electromagnetic and magnetoelectric induction. The Maxwell equations describing the wave processes in material media also follow from these laws. The Maxwell equations suggest that the velocity of field propagation is finite and equal to the velocity of light. The problem of electromagnetic radiation can be solved at the elementary level using the scalar-vector potential and the finiteness of the propagation velocity of electric processes. For this purpose the retarded scalar-vector potential is used. Assume that at the moment t′ = t − r/c the charge g 1 is at the origin of the coordinates and its velocity is V′(t) (Fig. 4.1). The field E y at point 2 is given by Eq. (3.32). This law of radiation from a moving charge is well known, though its usual derivation is more complex [2]. All the problems of radiation can be solved at the elementary level using Eq. (3.32); this equation is also the induction law if the retardation time is very short (a small numerical illustration of this law is given after the conclusions below). Conclusions It is surprising that Eq. (3.29) actually accounts for the whole of electrodynamics, because all current electrodynamics problems can be solved using this equation. What, then, is a magnetic field? It is merely a convenient mathematical procedure which does not necessarily give a correct result (e.g., in the case of parallel-moving charges). Now we can state that electrocurrent, rather than electromagnetic, waves travel in space. Their electric field and displacement current vectors lie in the same plane and are displaced by π/2. In terms of Eq. (3.29), electrodynamics and optics can be reconstructed completely to become simpler, more intelligible and obvious. The main ideas of this approach were described in the author's publications [5], [6], [12], [14], [15].
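As flagged above, here is a minimal numerical sketch of the standard non-relativistic far-field radiation law that Eq. (3.32) expresses, E ∝ q a(t − r/c) sinθ / (4π ε0 c² r), with the acceleration evaluated at the retarded time; the charge, acceleration law and distance are assumed values chosen only for illustration.

```python
import numpy as np

eps0, c = 8.854e-12, 2.998e8
q = 1.602e-19            # assumed: one elementary charge, C
r = 1.0                  # observation distance, m
theta = np.pi / 2        # angle between the acceleration and the line of sight

def a_of_t(t, a0=1.0e18, w=1.0e9):
    """Assumed harmonic acceleration of the charge, m/s^2."""
    return a0 * np.cos(w * t)

def E_rad(t):
    # Radiation (1/r) part of the field, with the acceleration taken at the retarded time t - r/c
    return q * a_of_t(t - r / c) * np.sin(theta) / (4 * np.pi * eps0 * c**2 * r)

print(f"E_rad(t=0) = {E_rad(0.0):.3e} V/m")   # falls off as 1/r, unlike the 1/r^2 Coulomb term
```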
However, the results reported have never been used, most likely because they remain unknown. The objective of this study is therefore to attract more attention to them. Any theory is dead unless important practical results are obtained on its basis. The use of the previously unknown transverse plasma resonance is one of the most important practical results following from this study. The author is indebted to V. D. Fil for helpful discussions and to A. N. Svenarev and A. I. Shurupov for their assistance in the preparation of this manuscript.
2019-04-14T03:13:18.819Z
2004-02-17T00:00:00.000
{ "year": 2004, "sha1": "87bdf82804c3ad3fbf58d45229a0e2702860e34b", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "cc494980214dd958544ac15b9784dfdd2f0c43ec", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
10783700
pes2o/s2orc
v3-fos-license
Measurement of the Charge Asymmetry in $B\to K^* (892)^{\pm}\pi^{\mp}$ We report on a search for a CP-violating asymmetry in the charmless hadronic decay B ->K*(892)+- pi-+, using 9.12 fb^-1 of integrated luminosity produced at \sqrt{s}=10.58 GeV and collected with the CLEO detector. We find A_{CP}(B ->K*(892)+- pi-+) = 0.26+0.33-0.34(stat.)+0.10-0.08(syst.), giving an allowed interval of [-0.31,0.78] at the 90% confidence level. The Standard Model predicts that CP -violating phenomena are governed solely by the single imaginary parameter of the Cabibbo-Kobayashi-Maskawa matrix [1] of complex quark couplings. The first observations of CP violation in the neutral B system were recently reported [2], and they have been interpreted widely as induced by B 0 -B 0 mixing. To date, direct CP violation has only been observed in the neutral kaon system [3]. Direct CP violation in a given decay requires contributions from two or more amplitudes which differ in both CP -violating (weak) and CP -conserving (strong) phases. In the B system, these conditions are expected to be met in some charmless hadronic decays, and direct CP violation can occur at sizeable levels, depending on the magnitude of the strong phase difference [4,5] or on the presence of new physics [6]. Previous analyses, mainly focusing on two-pseudoscalar final states, have not observed direct CP violation in these decays [7,8]. In this Report, we present a search for direct CP violation in the vector-pseudoscalar decay B → K * (892) ± π ∓ . We express the difference between the decay rates forB 0 → K * (892) − π + and B 0 → K * (892) + π − in terms of an asymmetry, A CP , defined as We consider both K * (892) ± submodes, K * (892) ± → K 0 S π ± and K * (892) ± → K ± π 0 by analyzing the final states K 0 S π ± h ∓ and K ± h ∓ π 0 , where h ∓ denotes a charged pion or kaon. We perform a maximum likelihood fit in the K 0 S π ± h ∓ and K ± h ∓ π 0 Dalitz plots to distinguish B → K * (892) ± π ∓ from other intermediate resonances or non-resonant three-body decays. The CP -averaged branching fraction for B → K * (892) ± π ∓ has been measured by the Belle [9] and CLEO [10] Collaborations, and the work described in this Report is an extension of that previous CLEO analysis. The data sample used in this analysis was produced in symmetric e + e − collisions at the Cornell Electron Storage Ring (CESR) and collected with the CLEO detector in two configurations, known as CLEO II [11] and CLEO II.V [12]. It comprises 9.1 fb −1 of integrated luminosity collected on the Υ(4S) resonance, corresponding to 9.7 × 10 6 BB pairs, of which 6.3 × 10 6 were taken with CLEO II.V. An additional 4.4 fb −1 collected below the BB production threshold is used to study non-BB backgrounds. Of this latter luminosity, 2.8 fb −1 were collected with CLEO II.V. The response of the experimental apparatus is studied with a detailed GEANT-based [13] simulation of the CLEO detector, where the simulated events are processed in a fashion similar to data. In CLEO II, the momenta of charged particles are measured with a tracking system consisting of a six-layer straw tube chamber, a ten-layer precision drift chamber, and a 51-layer main drift chamber, all operating inside a 1.5 T superconducting solenoid. The main drift chamber also provides a measurement of specific ionization energy loss (dE/dx), which is used for particle identification. 
For CLEO II.V, the six-layer straw tube chamber was replaced by a three-layer double-sided silicon vertex detector, and the gas in the main drift chamber was changed from an argon-ethane to a helium-propane mixture. Photons are detected with a 7800-crystal CsI electromagnetic calorimeter, which is also inside the solenoid. Proportional chambers placed at various depths within the steel return yoke of the magnet identify muons. Charged tracks are required to be well-measured and to satisfy criteria based on the track fit quality. They must also be consistent with coming from the interaction point in three dimensions. Pions and kaons are identified by consistency with the expected dE/dx, and tracks that are positively identified as electrons or muons are not allowed to form the B candidate. We form π 0 candidates from pairs of photons with invariant mass within 20 MeV/c 2 (approximately 2.5 standard deviations (σ)) of the known π 0 mass. These candidates are then kinematically fitted with their masses constrained to the known π 0 mass. We also require the π 0 momentum to be greater than 1 GeV/c to reduce combinatoric background from low-momentum π 0 candidates. K 0 S candidates are selected from pairs of tracks with invariant mass within 10 MeV/c 2 or 2.5σ of the known K 0 S mass. In addition, K 0 S candidates are required to originate from the beam spot and to have well-measured displaced decay vertices. We identify B meson candidates by their invariant mass and the total energy of their decay products. We calculate a beam-constrained mass by substituting the beam energy (E b ) for the measured B candidate energy: where p B is the B candidate momentum. Performing this substitution improves the resolution of M by one order of magnitude, to about 3 MeV/c 2 . We define are the energies of the B candidate daughters. For final states with a K 0 S and two charged tracks, the ∆E resolution is about 20 MeV for CLEO II and 15 MeV for CLEO II.V. A π 0 in the final state degrades this resolution by approximately a factor of two. ∆E is always calculated assuming the h ∓ is a pion. Therefore, the ∆E distribution for pions is centered at zero, while that for kaons is shifted by at least −40 MeV. We accept B candidates with M between 5.2 and 5.3 GeV/c 2 and with |∆E| less than 300 MeV for K ± h ∓ π 0 and 200 MeV for K 0 S π ± h ∓ . This region includes the signal region and a high-statistics sideband for background normalization. We reject candidates that are consistent with the exclusive b → c transitions B → Dπ, where D → Kπ, and B → ψK 0 , where ψ → µ + µ − and the muons are misidentified as pions. The main background in this analysis arises from e + e − → qq, where q = u, d, s, c. To suppress this background, we calculate the angle θ sph between the sphericity axis [14] of the tracks and showers forming the B candidate and that of the remainder of the event. Because of their two-jet structure, continuum qq events peak strongly at | cos θ sph | = 1, while the more isotropic BB events are nearly flat in this variable. By requiring | cos θ sph | < 0.8, we reject 83% of the continuum background while retaining 83% of signal B decays. Additional separation of signal from qq background is provided by a Fisher discriminant [15] F formed from eleven variables: the angle between the sphericity axis of the candidate and the beam axis, the ratio of Fox-Wolfram moments H 2 /H 0 [16], and the scalar sum of the visible momentum in nine 10 • angular bins around the candidate sphericity axis. 
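To make the two kinematic observables concrete, the sketch below computes the beam-constrained mass M = sqrt(E_b² − |p_B|²) and ΔE = Σ_i E_i − E_b for a made-up three-body candidate (GeV units with c = 1); the momenta are invented for illustration, and the faster track h is assigned the pion mass, as in the text.

```python
import numpy as np

def beam_constrained_mass(E_beam, p_B):
    """M = sqrt(E_b^2 - |p_B|^2): substituting the beam energy for the measured
    candidate energy improves the mass resolution by an order of magnitude."""
    return np.sqrt(E_beam**2 - np.dot(p_B, p_B))

def delta_E(E_beam, daughter_momenta, daughter_masses):
    """Delta E = sum_i E_i - E_b, with the h track always given the pion mass."""
    E_sum = sum(np.sqrt(m**2 + np.dot(p, p))
                for p, m in zip(daughter_momenta, daughter_masses))
    return E_sum - E_beam

# Illustrative (made-up) K0S pi+ h- candidate: three momenta in GeV/c
E_b = 5.290
momenta = [np.array([1.9, 0.4, -0.3]),
           np.array([-2.1, 0.5, 0.2]),
           np.array([0.4, -0.8, 0.2])]
masses = [0.4977, 0.1396, 0.1396]        # K0S, pi, pi (h treated as a pion)

p_B = sum(momenta)
print(beam_constrained_mass(E_b, p_B), delta_E(E_b, momenta, masses))
```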
In the likelihood fit, we also make use of the angle between the B candidate momentum and the beam axis, θ B . Angular momentum conservation causes B mesons produced through the Υ(4S) to exhibit a sin 2 θ B dependence, while candidates from continuum are flat in cos θ B . In both the K 0 S π ± h ∓ and K ± h ∓ π 0 topologies, the h ∓ refers to the faster of the two tracks, which typically has momentum above 1 GeV/c. Because dE/dx still provides limited separation of pions and kaons above 1 GeV/c, we make use of the dE/dx information in the likelihood fit. In B → K * (892) ± π ∓ decays, this higher-momentum track is the one that recoils from the K * (892) ± , more than 99.99% of the time. Its charge uniquely distinguishes B 0 → K * (892) − π + from B 0 → K * (892) + π − . Thus, the charge asymmetry A +− , formed using the charge of this higher-momentum track, is essentially the same as A CP . Our loose selection criteria result in samples consisting primarily of background events and containing 11893 candidates for K 0 S π ± h ∓ and 28589 for K ± h ∓ π 0 . To extract yields and CP asymmetries, we perform an unbinned maximum likelihood fit using the observables M, ∆E, F , cos θ B , the two Dalitz plot variables in each topology, and the dE/dx of the h ∓ (the faster of the two primary tracks). At high momentum, charged pions and kaons are statistically separated by their dE/dx and by ∆E, each of which provides discrimination at the 2.0σ level (1.7σ for CLEO II), and we fit for both π and K hypotheses simultaneously. Charged pions and kaons with momentum below 1 GeV/c are cleanly identified by dE/dx consistency at the 3σ level. The free parameters in the fit are yields (N) summed over charge states, N h + + N h − , and charge asymmetries, The probability for a candidate to be consistent with a given component is the product of the probability density function (PDF) values for each of the input variables (neglecting correlations). The likelihood for each candidate is the sum of probabilities over the components in the fit, with relative weights determined by maximizing the total likelihood of the sample, which is given by the following expression: where the ± refers to the charge of h ± in each candidate. The P ijk are the per candidate PDF values, and the f j and A j +− are the free parameters optimized by the fit. The products f j (1 ± A j +− )/2 are constrained to sum to the fraction of candidates in the fit with the appropriate charge of h ± . Since the PDFs are normalized to unit integral over the fit domain, the f j can be interpreted as component fractions. The parameters of the dE/dx PDFs are measured from D → K ± π ∓ decays in data. For all other variables, the signal and the background b → c PDFs are determined from high-statistics Monte Carlo samples, and the continuum PDFs are determined from data collected below the BB production threshold. The impact of correlations among the input variables is reduced by determining the PDFs as a function of the event location in the Dalitz plot, for coarse bins in the M 2 (Kπ)-M 2 (ππ) plane. We use Monte Carlo simulation to estimate the systematic uncertainty associated with neglecting any remaining correlations. Events from B → K 0 S π ± h ∓ and B → K ± h ∓ π 0 , including B → K * (892) ± π ∓ , are modeled in the fit as follows. We consider various B decay channels with intermediate resonances (K * (892), K * 0 (1430), ρ(770), and f 0 (980)) as well as non-resonant phase space decay. 
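The structure of the likelihood — per-candidate PDF values weighted by component fractions f_j and charge factors (1 ± A_j)/2, multiplied over all candidates — can be illustrated with a deliberately simplified toy fit. The sketch below uses a single discriminating variable, one "signal" and one "background" component, and invented numbers; it demonstrates the same likelihood form, not the actual CLEO fit.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, uniform

rng = np.random.default_rng(1)

# Toy data: one discriminating variable x in [-1, 1] plus the charge of the fast track.
# "Signal" is Gaussian in x with a built-in asymmetry; "background" is flat and symmetric.
n_sig, n_bkg, A_true = 300, 2700, 0.25
x_sig = rng.normal(0.0, 0.1, n_sig)
q_sig = rng.choice([+1, -1], n_sig, p=[(1 + A_true) / 2, (1 - A_true) / 2])
x_bkg = rng.uniform(-1, 1, n_bkg)
q_bkg = rng.choice([+1, -1], n_bkg)
x = np.concatenate([x_sig, x_bkg]); q = np.concatenate([q_sig, q_bkg])

p_sig = norm.pdf(x, 0.0, 0.1)        # per-candidate signal PDF value
p_bkg = uniform.pdf(x, -1, 2)        # per-candidate background PDF value (flat on [-1, 1])

def nll(par):
    f_sig, A_sig = par
    like = f_sig * (1 + q * A_sig) / 2 * p_sig + (1 - f_sig) * 0.5 * p_bkg
    return -np.sum(np.log(like))

res = minimize(nll, x0=[0.05, 0.0], bounds=[(1e-3, 0.9), (-0.99, 0.99)])
print(res.x)    # roughly [0.10, 0.25]: fitted signal fraction and charge asymmetry
```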
The Dalitz plot PDFs include our knowledge of the helicity structure in these decays. We neglect interference among these signal processes and assign a systematic uncertainty estimated from Monte Carlo simulation. The decays B → K ± h ∓ , where K ± denotes K * (892) ± or K * 0 (1430) ± , are accessible through different K ± submodes in both the K 0 S π ± h ∓ and K ± h ∓ π 0 topologies. To maximize our sensitivity to these decays, we fit these two topologies simultaneously, with the branching fraction and charge asymmetry for each K ± h ∓ decay constrained to be equal in its two K ± submodes, which are related by isospin. The charge symmetry of the CLEO detector, the track reconstruction software, and the dE/dx measurement has previously been verified [7,17]. The charge asymmetries of the fit samples are 0.010 ± 0.009 for B → K 0 S π ± h ∓ and 0.006 ± 0.006 for B → K ± h ∓ π 0 . Detection efficiencies and crossfeed among the charge-summed signal yields are measured from Monte Carlo simulated events and are accounted for in the fit for A +− . We find the charge asymmetry of the detection efficiencies to be consistent with the expected null result. Crossfeed among different charge states is not included in the fit, and its effect is estimated with Monte Carlo simulation. We perform the fit with differing combinations of intermediate resonant and non-resonant states, with up to twelve signal components. The fitted value of A CP does not depend heavily on the number of signal components in the fit, and we include a systematic uncertainty for these variations. We also allow for four background components: pion and kaon hypotheses for h ± for continuum background and background from b → c decays. We do not fit for charge asymmetries in the background components, but we measure them to be consistent with zero. The B → K * (892) ± π ∓ event yields were measured to be [10] 12.6 +4.6 −3.9 for K * (892) ± → K 0 S π ± and 6.1 +2.2 −1.9 for K * (892) ± → K ± π 0 with a combined statistical significance of 4.6σ. In the fit, these yields are corrected for efficiency and crossfeed from other modes, and the CP asymmetry in B → K * (892) ± π ∓ is measured to be A CP = 0.26 +0.33 −0.34 +0.10 −0.08 , where the uncertainties are statistical and systematic, respectively. The dominant contributions to the latter are statistical uncertainties in the PDFs and variations in the fitting method. We determine the dependence of the likelihood function on A CP by repeating the fit at several fixed values of A CP . By convoluting this function with the systematic uncertainties and integrating the resultant curve in the physical region, we construct a 90% confidence level interval of −0.31 < A CP < 0.78, where the excluded regions on both sides each contain 5% of the integrated area. Figure 1 shows the likelihood function given by the fit and the effect of including systematic uncertainties. In summary, we have measured the CP asymmetry in B → K * (892) ± π ∓ using a simultaneous maximum likelihood fit to the B → K 0 S π ± h ∓ and B → K ± h ∓ π 0 topologies. We obtain the value A CP = 0.26 +0.33 −0.34 +0.10 −0.08 , which is consistent with the theoretical predictions [5] of −0.19 to 0.47. We also establish a 90% confidence level interval of [−0.31, 0.78]. We gratefully acknowledge the effort of the CESR staff in providing us with excellent luminosity and running conditions. M. Selen thanks the Research Corporation, and A.H.
2017-05-28T01:14:52.915Z
2003-04-24T00:00:00.000
{ "year": 2003, "sha1": "f4290f36092fbc60f8f8637d79c30eef710df9bc", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-ex/0304036", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "851c6a35346274c2eca432d8fc58f58bad296f02", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
54567099
pes2o/s2orc
v3-fos-license
Improving the calling of non-invasive prenatal testing on 13-/18-/21-trisomy by support vector machine discrimination With the advance of next-generation sequencing (NGS) technologies, non-invasive prenatal testing (NIPT) has been developed and employed in fetal aneuploidy screening on 13-/18-/21-trisomies through detecting cell-free fetal DNA (cffDNA) in maternal blood. Although Z-test is widely used in NIPT NGS data analysis, there is still necessity to improve its accuracy for reducing a) false negatives and false positives, and b) the ratio of unclassified data, so as to lower the potential harm to patients as well as the induced cost of retests. Combining the multiple Z-tests with indexes of clinical signs and quality control, features were collected from the known samples and scaled for model training using support vector machine (SVM). We trained SVM models from the qualified NIPT NGS data that Z-test can discriminate and tested the performance on the data that Z-test cannot discriminate. On screenings of 13-/18-/21-trisomies, the trained SVM models achieved 100% accuracies in both internal validations and unknown sample predictions. It is shown that other machine learning (ML) models can also achieve similar high accuracy, and SVM model is most robust in this study. Moreover, four false positives and four false negatives caused by Z-test were corrected by using the SVM models. To our knowledge, this is one of the earliest studies to employ SVM in NIPT NGS data analysis. It is expected to replace Z-test in clinical practice. Introduction On the basis of the discovery of cell-free fetal DNA (cffDNA) in maternal plasma and serum [1] as well as the advance of next-generation sequencing (NGS) technology [2], Non-invasive prenatal testing (NIPT) has been developed in 2008 [3,4] and applied in clinical use recent years for fetal aneuploidy detection mainly on Down's syndrome, Edward's syndrome and Patau's syndrome, respectively corresponding to 21-trisomy(T21), 18-trisomy(T18) and 13-trisomy(T13) [3][4][5]. Before the application of the NGS-based NIPT, there were mainly two PLOS methods to detect 13-/18-/21-trisomies in clinical practice. One is the non-invasive serological test with high rate of false positive and false negative [6]; the other is the golden standard-the invasive amniocentesis with a rate of 1/250 inducing abortion [7]. Comparatively, NIPT is much more accurate than serological test and safer than amniocentesis. The International Society for Prenatal Diagnosis [8], the National Society of Genetic Counselors [9], the American College of Obstetricians and Gynecologists and the Society for Maternal-Fetal Medicine [10] had published committee opinions stating that such a cffDNA testing could be offered to pregnant women at high risk for fetal aneuploidy as a screening option after counseling. Except those employing deep sequencing or array-based methods, most NIPTs were performed using the low-coverage next-generation sequencing (NGS) platforms such as Verifi [11], Materni21 [12], panorama [13] and NIFTY [14]. Similar to copy number variation analysis, the sequencing reads from a test sample were mapped and counted as depth in bins of a certain size, following by a measurement of deviation from negative control. Since the triploid fetus has 2-5% more cffDNA than diploid fetus, Z-test was frequently employed in deviation measurement [3,15]. Statistically, Z score indicates the significance of deviation from the baseline, e.g. 
Z > 3 means that the test data deviate from the baseline with P < 0.001 and hence are likely to come from a triploid sample. Different types of Z-tests were employed in different NIPT studies, such as Chiu et al. using the average of negatives as the baseline [3] and Zhang et al. using an internal reference as the baseline [15]. However, these one-Z-test based approaches have many problems in clinical practice. First, a single Z score is insufficient to give accurate predictions on different samples because of read-distribution bias among individuals. Further, fetal fraction has been proven to be crucial in trisomy determination [16]; however, it was not involved in the one-Z-test based approach to NIPT NGS data analysis. These problems could result in inaccurate predictions, high retesting costs and delays in treatment. As shown in Fig 1, the distributions of Z scores of negatives and positives overlap in certain intervals, where the cutoff Z = 3 is unable to discriminate. A simulation shown in S1 Fig indicates that a small portion of negative samples could have Z > 3 while a small portion of positive samples could have Z < 3, especially when the fetal fraction is around or less than 5%. In clinical practice, it is guaranteed that any sample with a Z score in the interval (1.96, 4), called the "grey zone", requires a retest. This is because using only Z = 3 as the cutoff to separate positives from negatives may give inaccurate results. Therefore, it is meaningful to develop a more precise method for NIPT data analysis. The support vector machine (SVM) is an excellent tool for this purpose. It is a supervised machine learning (ML) algorithm that identifies an arbitrarily defined framework for discriminating query data using a model built from a training dataset with selected features [17]. SVM has already shown high robustness and accuracy in many fields [18], such as cancer subtype classification [19], splice site prediction [20] and single nucleotide polymorphism (SNP) prediction [21]. For NIPT on 13-/18-/21-trisomies, it has been reported that positive samples are much fewer than negatives [22,23]. According to clinical experience from ~150,000 pregnancies in mainland China [23], the positive rates of 13-/18-/21-trisomies were 0.045%, 0.15% and 0.52%, respectively. The large difference in the numbers of positives and negatives could lead to class imbalance in ML model training if all data were employed. However, SVM can reduce the effect of class imbalance by selecting the support vectors from all given input data. Further, feature co-linearity does not affect the SVM model in discrimination. Therefore, in this study, SVM is employed to improve the prediction on NIPT NGS data, with the purpose of replacing the one-Z-test based approach in current clinical practice. Combining multiple Z values with indexes of clinical signs and quality control, an SVM model was trained for each dataset of 13-/18-/21-trisomy to accurately discriminate the samples, especially the "grey zone" NIPT results and those falsely predicted before. Specimen source This study was a retrospective analysis of the NIPT NGS data obtained from March to July 2016 at Guangzhou DaAn Clinical Laboratory Center. Informed consent was obtained from all participants. Information such as gestational week and maternal age was obtained, while the names of the participants were masked to protect their privacy. The trisomy samples were validated by amniocentesis.
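Before turning to the data, here is a rough sketch of how the SVM approach outlined above could be set up with scikit-learn, using feature scaling, an RBF kernel and class weighting against the strong positive/negative imbalance; the file names, feature layout and hyper-parameters are placeholders chosen for illustration and are not the actual pipeline of this study.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical training table: one row per sample; columns hold the multiple Z scores
# plus clinical/QC indexes (fetal fraction, read-length peak, maternal age, gestational week).
X = np.loadtxt("chr21_training_features.tsv")   # placeholder file name
y = np.loadtxt("chr21_training_labels.tsv")     # 1 = confirmed trisomy, 0 = euploid

model = make_pipeline(
    StandardScaler(),                                             # scale features before training
    SVC(kernel="rbf", C=1.0, gamma="scale", class_weight="balanced"),
)

print(cross_val_score(model, X, y, cv=5))        # internal validation on the labeled data
model.fit(X, y)

# Once trained, the model can be applied to samples that the one-Z-test rule leaves
# in the grey zone, e.g. model.predict(X_grey_zone)
```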
The NIPT experiments were based on the same semi-conductor sequencing platform as in Liao et al.'s paper in 2013 [24]. The reported results were output through a CFDA-certified standard operation protocol (SOP) and DaAn Gene's compiled bioinformatics plugin named "Seqboost", developed on the basis of Liao et al.'s paper [24], which described a one-Z-test based approach for NIPT prediction. Data summary In total, 5518 NIPT data were collected during this period from two semi-conductor sequencers located in the NIPT lab in Guangzhou (Table 1). There are forty-six data from trisomic samples, with one sample labeled "#5267" being positive for both T18 and T21 (S1 Table). Hence there are forty-seven trisomy cases: five for T13, fifteen for T18 and twenty-seven for T21. The average age of pregnant women with negative results was 31.83 (95% CI: 15-51), slightly higher than the average age of those with positive results (31.70, 95% CI: 17-47). Another 500 negative samples were recruited as the reference negative control for NIPT calling. As shown in S1 Table, a series of values is listed to describe these data, including "Z_run", the Z score output by "Seqboost" in one run; "Real_state", the result confirmed by prenatal or postnatal diagnosis; the fetal fraction predicted using SeqFF [25]; the peak value of the read length; maternal age; and gestational week. According to the CFDA's NIPT policy and DaAn Gene's SOP, Z score = 3 is the cutoff to distinguish negatives and positives. (Fig 1 caption: negatives and positives are shown in dark and red, respectively; the green dashed line marks the cutoff of Z = 3 frequently used as the discrimination criterion; blue dashed lines mark the "grey zone" between Z = 1.96 and 4, in which discrimination by Z = 3 fails and a retest is required.) Hence, in routine NIPT, data with "Z_run ≥ 3" are regarded as positive, meaning significantly deviated from the baseline of the reference dataset, while those with "Z_run < 3" are regarded as negative. Accordingly, data predicted as positive but with "Real_state = -1" (negative) were false positives; those predicted as negative but with "Real_state = 1" (positive) were false negatives. Of these 5518 data, 766 data with fewer than 3,000,000 unique reads or a predicted fetal fraction of less than 5% were labeled "QC-filtered" on the basis of quality control (QC) according to the SOP. The remaining 4752 "QC-pass" data were categorized into three groups for the specified chromosomes on statistical grounds: Group "N", those with Z scores smaller than 1.96, meaning not significantly higher than the baseline of the reference dataset (P > 0.05); Group "P", those with Z scores larger than 4, meaning significantly higher than the baseline of the reference dataset (P < 0.0001); and Group "Unclassified", those with Z scores between 1.96 and 4, for which a retest is required for double-checking in current NIPT practice. For each specified chromosome, data in Groups "N" and "P" were employed to train models and conduct internal validation in this study. Data in Groups "Unclassified" and "QC-filtered" were used in performance tests. We also employed the trained model to correct the four false positives and four false negatives produced by the Z-test in previous NIPT reports.
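For illustration, the following minimal Python sketch (not part of the original analysis pipeline) shows how such a triage into the QC-filtered group and the Z-score-based groups might be implemented; the thresholds are those stated above, while the function and field names are hypothetical.

# Illustrative sketch: triage of NIPT samples into "QC-filtered", "N", "P" and
# "Unclassified" groups for one chromosome, using the thresholds described in the text.
def triage_sample(unique_reads, fetal_fraction, z_score,
                  min_reads=3_000_000, min_ff=0.05):
    """Assign one sample to a QC/Z-score group for a given chromosome."""
    if unique_reads < min_reads or fetal_fraction < min_ff:
        return "QC-filtered"          # excluded by quality control
    if z_score < 1.96:
        return "N"                    # not significantly above baseline (P > 0.05)
    if z_score > 4:
        return "P"                    # significantly above baseline (P < 0.0001)
    return "Unclassified"             # grey zone: retest under the one-Z-test rule

samples = [
    {"id": "s1", "unique_reads": 4_200_000, "fetal_fraction": 0.09, "z": 0.8},
    {"id": "s2", "unique_reads": 2_500_000, "fetal_fraction": 0.12, "z": 5.1},
    {"id": "s3", "unique_reads": 3_800_000, "fetal_fraction": 0.07, "z": 2.6},
]
for s in samples:
    print(s["id"], triage_sample(s["unique_reads"], s["fetal_fraction"], s["z"]))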
Feature selection and data reanalysis Reads generated from the semi-conductor Ion Proton Sequencer (Life Technologies) were trimmed and mapped to human genome 19 (hg19), followed by recalibration and realignment through the automated pipeline of the supporting Ion Torrent Suite Software (Life Technologies). Reads were then filtered using the SAMtools command [26] 'samtools view -F 1024 -q 10' to remove PCR duplicates and low-quality reads (mapping quality below 10). The remaining high-quality unique reads were used for the following analysis. Similar to the CFDA-certified DaAn Gene SOP, the read-depth for each contiguous 20-kb bin was calculated using the genomeCoverageBed program in BEDtools [27]. To remove the biases of the read-depth distribution caused by differences in data volume, GC content and incidental sequencing bias, three types of normalization were applied in four steps: 1) intra-run normalization was used to eliminate differences between data; 2) winsorization, a transformation that reduces the influence of outliers by moving observations outside a certain fractile of the distribution to that fractile [28], was employed to reduce extreme read-depths within each contiguous window consisting of 15 bins of 20 kb; 3) LOESS was employed to remove GC bias, as in Chiu et al.'s paper [3]; and 4) intra-run normalization was applied again because steps 2) and 3) can induce bias in data size. The mean and standard deviation (s.d.) of the read-depth of each chromosome were calculated for further statistical analysis. The normalized read-depth of each bin was summed over every 15 bins to smooth the read-depth signal. The mean and standard deviation of the merged read-depth on each chromosome were then used in the statistical analysis for fetal aneuploidy evaluation. For each data point, six Z scores were calculated (formulas (1) to (6)): (1) Z_baseline_vs_n, the Z score of chromosome i normalized to the average of the reference negative samples, where ref. denotes the normalized read-depth values of the reference negative samples; (2) Z_baseline_vs_p, the Z score normalized to the average of the predicted reference positive data, where fetal% denotes the fetal DNA fraction; the predicted reference positive value equals the mean of the reference negative data multiplied by a factor (1 + fetal%/2), based on the assumption that half of the fetal fraction is added when a trisomy occurs; (3) Z_chr_vs_n, the Z score normalized to the internal reference autosome value, that is, the median of the averages of the normalized read-depth in each autosome of this sample, similar to Lau's paper [15]; (4) Z_chr_vs_p, the Z score normalized to the predicted positive internal reference autosome value, that is, the median of the predicted positive averages of the normalized read-depth in each autosome of this sample; (5) Z_sample_vs_n, the Z score normalized to the average of the sample data, Z_sample_vs_n,i = −(mean(ref_i) − mean_i) / (S_m × MAD_i / √window_i), where MAD_i is the median absolute deviation of the read-depth, window_i is the number of windows on chromosome i, and S_m is a factor equal to 1.4826 that makes S_m × MAD_i / √window_i approximate the standard deviation of the read-depth of the sample data; and (6) Z_sample_vs_p, the Z score normalized to the mean value of the predicted positive sample data.
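As an illustration of how such Z scores might be computed, the sketch below implements a Z score against the reference negatives and the robust Z score of formula (5). The exact expression assumed here for the reference-negative Z score (difference from the reference mean divided by the reference standard deviation) is an assumption, the upstream normalization steps are omitted, and all variable names and values are hypothetical.

# Illustrative sketch of two Z scores from binned read-depths (not the validated SOP).
import numpy as np

def z_baseline_vs_n(sample_mean, ref_means):
    """Z score of the sample's chromosome mean against the reference negatives (assumed form of formula (1))."""
    return (sample_mean - np.mean(ref_means)) / np.std(ref_means, ddof=1)

def z_sample_vs_n(window_depths, ref_means, s_m=1.4826):
    """Robust Z score using the sample's own MAD, following formula (5)."""
    mean_i = np.mean(window_depths)
    mad_i = np.median(np.abs(window_depths - np.median(window_depths)))
    se_i = s_m * mad_i / np.sqrt(len(window_depths))   # approximates the s.d. term in formula (5)
    return -(np.mean(ref_means) - mean_i) / se_i

ref_means = np.random.normal(1.0, 0.01, size=500)        # 500 reference negative samples (placeholder)
window_depths = np.random.normal(1.02, 0.05, size=120)   # merged window read-depths of one test sample (placeholder)
print(z_baseline_vs_n(window_depths.mean(), ref_means), z_sample_vs_n(window_depths, ref_means))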
SVM discrimination Six Z scores, together with fetal fraction, peak value of read length, maternal age and gestational week, were collected for support vector machine classification. Among the ten features selected for SVM classification model training, the six Z-score-based features were essential because their distributions differed significantly between negatives and positives (Wilcoxon rank-sum test, P < 2.2×10^-16), while the other four features were not biasedly distributed (Table 2). The ten features were collected from the data in Groups "N" and "P" for model building on the specified chromosomes. The six Z scores obtained from formulas (1) to (6) do not require scaling because they are already normalized, while the other four features, including fetal fraction, peak value of read length, maternal age and gestational week, were normalized to the same scale ranging from 0 to 3 by the command 'svm-scale -l 0 -u +3'. For the training dataset, '-s' was used to save the scaling range, and '-r' was used to restore the saved scaling range on the test data. The SVM model was then constructed with 'svm-train' and used for prediction with 'svm-predict' in the LIBSVM package. Although SVM is quite efficient in handling sparse data, we also evaluated its performance when assigning two class weights (using '-wi' in training) inversely correlated with the instance numbers, aiming to improve accuracy on unbalanced data. We trained SVM models using these ten selected features separately for the chromosome 13, 18 and 21 datasets. Let us denote class labels as y_i ∈ {−1, 1} for the normal state and trisomy of each specified chromosome, respectively. Given a set of training data {x_i, y_i}, i = 1, 2, ..., n, the SVM returns a maximum-margin separating hyper-plane with weights w and an offset b by solving min_{w, b, ε} (1/2)||w||^2 + C Σ_{i=1..n} ε_i (7), subject to y_i (w·ϕ(x_i) + b) ≥ 1 − ε_i and ε_i ≥ 0 for i = 1, ..., n, where w are the feature weights representing the hyper-plane, ε_i ≥ 0 are slack variables designed to allow misclassified data points, and C > 0 is the penalty parameter for misclassification. By solving the Lagrangian dual of formula (7), we obtain a simplified optimization problem max_α Σ_{i=1..n} α_i − (1/2) Σ_{i=1..n} Σ_{j=1..n} α_i α_j y_i y_j ϕ(x_i)·ϕ(x_j), subject to 0 ≤ α_i ≤ C and Σ_{i=1..n} α_i y_i = 0 (8). This dual problem can be efficiently solved using quadratic programming or the sequential minimal optimization (SMO) algorithm. The solution for w in formula (7) is then given by w = Σ_{i=1..n} α_i y_i ϕ(x_i). Once the optimal solution α_1, ..., α_n is found, the optimal b is determined from the maximum-margin condition y_k (w·ϕ(x_k) + b) = 1 for any support vector x_k with 0 < α_k < C. The decision function for any new point x is then f(x) = Σ_{i=1..n} α_i y_i ϕ(x_i)·ϕ(x) + b, with f > 0 assigned to class 1 and f < 0 assigned to class −1. The inner product ϕ(x_i)·ϕ(x_j) in formula (8) can also be represented as a kernel function K(x_i, x_j). Here, we applied two kinds of kernel functions to our data: the linear kernel function and the radial basis function (RBF). The linear kernel is based on inner products of the input features between any two samples, so we could verify whether our data are linearly separable. The feature space of the RBF kernel, on the other hand, enables us to learn a nonlinear classification by transforming the input features into an implicit feature space with an infinite number of dimensions. The RBF kernel is defined as K(x_i, x_j) = exp(−γ ||x_i − x_j||^2), where γ is a kernel parameter controlling the sensitivity of the kernel function. For training with the linear kernel, only one hyper-parameter, C, needs to be adjusted to select an appropriate model through k-fold cross-validation.
A low C makes the decision boundary smooth, while a high C selects more samples as support vectors and classifies more training samples correctly, and is thus prone to over-fitting. For the RBF kernel, besides C, there is another kernel parameter, γ, defining the extent of influence of the support vectors, with high values meaning a narrow range of influence. To select the optimal parameters (C and γ for the RBF kernel; C for the linear kernel), we employed a grid search with a step of 0.1 and 5-fold cross-validation, using grid.py from LIBSVM [17] and expand.grid from the 'caret' package [29]. To prevent over-fitting, C values were carefully checked to avoid solutions with large values. Other discrimination methods Other discrimination methods, such as linear discriminant analysis (LDA) [28], quadratic discriminant analysis (QDA) and decision tree (Dtree), were also tested on the same NIPT dataset in this study. Both LDA and QDA assume samples drawn from a multivariate normal distribution N(μ, Σ) with mean vector μ and covariance matrix Σ. The posterior probability of class k is given by P(y = k | x) = p_k N(x; μ_k, Σ_k) / Σ_l p_l N(x; μ_l, Σ_l), where p_k is the prior probability of class k. LDA arises when the covariance matrices of all classes are assumed to be the same, in which case the discriminant function simplifies to δ_k(x) = xᵀ Σ^(-1) μ_k − (1/2) μ_kᵀ Σ^(-1) μ_k + log p_k. In QDA, the covariance matrix differs between classes, and the discriminant function is δ_k(x) = −(1/2) log|S_k| − (1/2)(x − μ_k)ᵀ S_k^(-1) (x − μ_k) + log p_k, where S_k is the covariance matrix of class k. LDA and QDA were trained with the 'lda' and 'qda' methods in the 'MASS' package, respectively. For Dtree, we directly applied the 'ctree' method in the R package 'party', which uses a binary recursive partitioning approach to rank and select the input variables according to their association with the input classes. We also employed AdaBoost to create a highly accurate prediction rule using the 'caret' package [29]. We implemented AdaBoost.M1 with decision trees as weak learners. The final AdaBoost classifier is a weighted combination of weak classifiers, H(x) = sign(Σ_t β_t h_t(x)), where h_t and β_t are the induced weak classifiers and their assigned weights, respectively. The AdaBoost model was also trained with the same input as the SVM and 5-fold cross-validation to reduce the chance of over-fitting. Performance tests The performances of SVM models trained using different hyper-parameter settings were compared for chromosomes 13/18/21. In general, four types of SVM models were compared: RBF kernel without class weight, RBF kernel with class weight, linear kernel without class weight and linear kernel with class weight. First, for the data in Groups "N" and "P" on the specified chromosomes, an internal validation was performed using the model built on these data themselves. Importantly, the trained models were applied to predict the data in Group "Unclassified", which was the most meaningful application in this study. The models were also applied to predict the data in Group "QC-filtered". Similarly, we tested SVM models trained using different parameter settings: performances of models trained with the two kernel functions were compared, and models trained with class weights were compared with those trained without class weights. Second, other ML models, such as LDA, QDA and Dtree, were tested in the prediction of 13-/18-/21-trisomies on the three groups of datasets. Their performances were compared with those of the SVM models mentioned above. We also compared the SVM models with optimal parameters against AdaBoost.
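The following sketch illustrates the kind of model comparison described above, using scikit-learn in Python rather than the LIBSVM command-line tools and R packages actually used in the study; the feature matrix and labels are random placeholders, the parameter grid is illustrative only, and scaling is applied to all features here for simplicity.

# Illustrative sketch (not the authors' pipeline): RBF-kernel SVM with class weights
# and a grid search over C and gamma, compared against LDA, QDA, a decision tree and AdaBoost.
import numpy as np
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                            QuadraticDiscriminantAnalysis)
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))                 # placeholder 10-feature matrix
y = np.where(X[:, 0] + X[:, 2] > 1, 1, -1)     # placeholder labels (1 = trisomy, -1 = normal)

# RBF-kernel SVM with class weighting and 5-fold cross-validated grid search
svm = make_pipeline(MinMaxScaler(feature_range=(0, 3)),
                    SVC(kernel="rbf", class_weight="balanced"))
grid = GridSearchCV(svm,
                    {"svc__C": [0.1, 1, 10, 100], "svc__gamma": [0.01, 0.1, 1]},
                    cv=5)
grid.fit(X, y)
print("best SVM params:", grid.best_params_, "CV accuracy:", round(grid.best_score_, 3))

# Alternative discriminators analogous to those compared in the study
for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("QDA", QuadraticDiscriminantAnalysis()),
                  ("Dtree", DecisionTreeClassifier()),
                  ("AdaBoost", AdaBoostClassifier())]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(name, "CV accuracy:", round(acc, 3))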
For visualization of the performance of the ML models, four ML models trained using feature D1 from formula (1) and feature D3 from formula (3) were tested in the internal validation for chromosome 21. The two-dimensional hyper-planes for discrimination were plotted using 'contour' in the R package 'graphics'. We also visualized the performance of the SVM models using three features (D1, D3 and D7, the fetal fraction) in a 3-D plot and the corresponding three 2-D plots. Inaccuracy of one-Z-test approach in NIPT prediction We employed all six Z-tests from formulas (1) to (6) to examine the distributions of their Z scores on chromosomes 13/18/21 in all QC-pass NIPT NGS data. As shown in Fig 2, none of the Z-tests could clearly distinguish positives from negatives. Using Z = 3 as the cutoff, Z scores from formulas (1), (3) and (5) were able to identify all the true positives, but a number of false positives remained, especially for formula (3). Using Z = −3 as the cutoff, Z scores from formulas (2), (4) and (6) produced both false positives and false negatives. Although these six Z scores differed significantly in distribution between positives and negatives (Table 2, Wilcoxon rank-sum test, P < 2.2×10^-16), a simple discrimination method based on a single cutoff value will always give false positives or false negatives in NIPT calling. Apart from these six Z scores, the other four indexes were not significantly biased in distribution between positives and negatives (Table 2). This result suggests that the one-Z-test based approach cannot guarantee the prediction accuracy of NIPT NGS data because of the simplicity of its discrimination rule. Hence, NIPT calling demands a more comprehensive approach that combines different features to improve accuracy. Performance of SVM models As described in the pipeline (Fig 3), we trained the SVM models using the known datasets from Groups "N" and "P" for chromosomes 13/18/21, respectively. First, models were trained using different hyper-parameter settings for chromosomes 13/18/21. To compare the performance of the kernel functions, both linear and RBF kernels were employed to build the models. Further, the parameter '-wi' was used to adjust C for class weighting. Parameter optimization was employed to find the best C and γ using a grid search with a step of 0.1 and 5-fold cross-validation (see Methods). On the one hand, the models using the RBF kernel performed better than those with the linear kernel (Table 3). The models with the RBF kernel achieved 100% accuracy in both internal (Groups "N" & "P") and external validation (Group "Unclassified") for all three specified chromosomes. However, the models with the linear kernel did not predict the positives in Group "Unclassified" well. On the other hand, the models using class weights performed as well as those without, because the models without class weights were already sufficiently accurate on the datasets of this study (S2 Table). Taken together, the SVM models using the RBF kernel and class weights were selected for the following analyses. In the internal validations, the SVM models predicted the training dataset with 100% accuracy on all three chromosomes. Importantly, in the external validation, the SVM models performed well in predicting the QC-pass data that had been labeled "Unclassified" (1.96 ≤ Z score ≤ 4) by the Z-test.
For chromosome 21, all 61 data in the "grey zone" (1.3% of all QC-pass data) were accurately predicted using the trained model, including four positives and fifty-seven negatives. Notably, two of the four positives had Z scores smaller than 3 (2.44 and 2.52, respectively); nevertheless, the SVM model corrected these false negatives. This result indicates that false negatives are induced when only Z score = 3 is used as the discrimination cutoff, as in previous studies [3,24]. Fortunately, an SVM model trained on a known dataset can uncover such false negatives. For chromosome 18, all 48 data in the "grey zone" (1.0% of all QC-pass data) were accurately predicted using the trained SVM model, including 4 positives and 44 negatives. For chromosome 13, all 42 data in the "grey zone" (0.9% of all QC-pass data) were accurately predicted as "negative". In summary, all of the data that could not be classified using the Z-test (nearly 3% of all QC-pass data) were precisely predicted using the corresponding trained SVM models. This result suggests that the SVM model could save around 3% of the resources spent on retests. Considering that millions of NIPTs are performed each year, such a cost reduction is meaningful in clinical practice. Surprisingly, the SVM models were also effective in predicting the 766 QC-filtered data (S2 Table). For chromosome 21, the model precisely predicted all the QC-filtered data, including 4 positives and 762 negatives. For chromosome 18, 763 out of 766 data were correct (99.61%): one positive was wrongly predicted as "negative" with a prediction probability of 64%, while two negatives were incorrectly predicted as "positive" with predicted probabilities of 88.6% and 97.6%, respectively. For chromosome 13, 765 out of 766 data were correct (99.87%); the only error was one positive sample that had been regarded as "negative" with Z score = 2.79 and was also incorrectly predicted as "negative" by the SVM model. This demonstrates that the SVM model performs well on most QC-filtered data but cannot uncover all false negatives, suggesting that quality control is still necessary to guarantee the accuracy of NIPT. Comparison with other ML models Compared with other ML models, the SVM models performed relatively better on the datasets of this study. The SVM models obtained 100% accuracy in both internal validation and prediction on the "Unclassified" dataset (Table 4) for all three chromosomes. (Table 3 caption: Performance of SVM models on NIPT prediction using different parameter settings; four types of SVM models were compared in both internal (Group "N" & "P") and external (Group "Unclassified") validation for each of chromosomes 13/18/21; "w" means employing a class weight to adjust the parameter C, and "opt" means employing optimization of the parameters C and gamma in cross-validation; Sens. is short for sensitivity and Spec. for specificity. https://doi.org/10.1371/journal.pone.0207840.t003) We also compared AdaBoost, which employed Dtree in model training, with the SVM models in prediction on the training dataset (S2 Fig). The SVM models using either kernel had similarly high accuracy to AdaBoost on chromosomes 21 and 18, while the SVM models performed slightly better on chromosome 13. This result may indicate that the SVM models performed comparatively well in NIPT prediction on this dataset. However, other ML models also have potential for improving the calling; for example, AdaBoost, which employs Dtree to create a highly accurate prediction rule, can enhance the accuracy.
Similarly, a neural network could employ a class of models to create a highly accurate prediction rule. (Table 4 caption: Performance of different discrimination models on NIPT prediction using ten selected features.) SVM-based NIPT We took chromosome 21 as an example to visualize how the SVM models worked (Fig 4). Using two out of the ten features (D1 from formula (1) and D3 from formula (3)), nearly all of the four ML models showed good discrimination lines on the training dataset (Groups "N" and "P"), except that LDA produced one false negative. A 3-D plot and its three 2-D plots were also given to show how the SVM model discriminates negatives from positives (S3 Fig), using features D1, D3 and D7 (fetal fraction). These results serve to visualize how the ML models discriminated the data, whereas all ten features were used in model training. Correcting the previous false callings caused by Z-test We also employed the trained SVM models to predict eight cases that had previously been wrongly predicted by the Z-test. As shown in Table 5, although the four false positives had Z scores above the Z = 3 cutoff, their Z scores normalized to the positive baselines (formulas (2), (4) and (6)) were also significantly lower than −3. Similarly, the four false negatives showed ambiguous values of features D1 to D6, suggesting that none of these six Z scores is reliable for prediction on its own. This further demonstrates that the SVM model is better than the commonly used one-Z-test based approach. In summary, the SVM model has shown its potential in the discrimination of NIPT results in this study, especially compared with the current one-Z-test based method. The SVM models using the RBF kernel achieved 100% accuracy in trisomy detection of chromosomes 13/18/21 in both internal and external validation. With this improvement, it is expected to reduce the cost of retests on samples in the grey zone as well as the cost caused by false positives and false negatives. As shown in Fig 3, we expect that the SVM model could be further improved if 1) more known data were validated and added to model training, and 2) more impactful features were discovered and added to model training. Other clinical signs, such as values from the serological test, could be employed together with NIPT data for prediction. In the future, we plan to validate and optimize our ML method for trisomy prediction on a larger dataset. Discussion Because of the inaccuracy of serological testing and the potential harm of amniocentesis, NIPT is recommended for adoption in present-day prenatal screening to detect 13-/18-/21-trisomies. In clinical practice, a positive NIPT result leads to a recommendation of amniocentesis, whereas a negative result does not. Therefore, a false negative NIPT report results in a wrong diagnosis and delay of treatment, while a false positive NIPT report requires the patient to undergo an unnecessary amniocentesis with a risk of abortion. Some may argue that the current one-Z-test based NIPT prediction approach is precise enough; however, consider a simple calculation: assume that 1,000,000 women take NIPT and 1% of them carry trisomy fetuses, which means there are 990,000 negatives and 10,000 positives. Based on the accuracy of the one-Z-test approach reported by Chiu et al. [3] (sensitivity 97.9% and specificity 99.7%), there would be 210 false negatives and 2970 false positives; this means 210 women would be wrongly diagnosed and give birth to trisomy fetuses, and 2970 women would undergo an unnecessary amniocentesis, with about 12 of them miscarrying a normal fetus as a result of the procedure.
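The arithmetic of this example can be checked directly with a few lines of code (using the figures stated in the text):

# Quick check of the false-negative/false-positive example above.
n_tested = 1_000_000
prevalence = 0.01
sensitivity, specificity = 0.979, 0.997
positives = n_tested * prevalence                 # 10,000 trisomy pregnancies
negatives = n_tested - positives                  # 990,000 unaffected pregnancies
false_negatives = positives * (1 - sensitivity)   # 210
false_positives = negatives * (1 - specificity)   # 2,970
miscarriages = false_positives / 250              # ~12 procedure-related losses
print(false_negatives, false_positives, round(miscarriages))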
In fact, the number of newborns in China was 4 million in 2015 [30], and it is expected to increase in the future following the implementation of the two-child policy. We are therefore motivated to improve the accuracy of prediction on NIPT NGS data, which the one-Z-test based approach cannot provide. Recently, several groups have noted that the one-Z-test approach cannot deliver sufficiently accurate prediction on NIPT NGS data. Bayindir et al. supplemented discrimination with a meta Z-test, that is, a Z-test of the Z scores [31]. Yu et al. improved the count-based analysis by supplementing it with a size-based approach [32]. Using more cutoff values is a good way to ensure the prediction accuracy for positives and negatives; however, it also increases the number of unclassified samples and hence demands more retests. Further, the fetal fraction is a key factor, but it was not taken into account in the approach of [24]. BGI's NIFTY employed a logarithmic likelihood odds ratio between binary hypotheses that took the fetal fraction into consideration [14], but it still relies on a single-dimension cutoff to predict the result. Other information, such as maternal age, is also important in NIPT NGS data prediction [33]. In this study, we showed that an ML method is a good way to solve the problems above. Combining multiple Z-tests and other features, the trained SVM models achieve extremely high prediction accuracy and decrease the number of unclassified data. In fact, the enhancement is almost immediate, as there are few steps in parameter optimization. Both linear and RBF kernels can achieve the same high accuracy in prediction. This suggests that positives and negatives differ significantly in the distributions of the selected features. We also noted that other ML methods could achieve similar improvements. Since the effectiveness of ML depends on the selected features and dataset, it is uncertain whether SVM definitely performs best in NIPT NGS data analysis. However, we have achieved our goal of improving the NIPT prediction accuracy to an extremely high level by using SVM models. We also tested some other ML methods using the same features in this study. For LDA and QDA, collinearity between features could be one of the reasons for their lower prediction accuracy, whereas SVM tolerates collinearity between features. Besides SVM, AdaBoost also had high prediction accuracy. It is worthwhile to keep testing these machine-learning algorithms as more features and more data become available in the future, since our objective is to find the best approach for clinical use. For now, SVM showed the greatest robustness in this study. Among the ten features selected for the current SVM model training, the four non-Z-score features were actually not significantly biased in distribution between negatives and positives, although IONA's paper reported that maternal age was useful in correcting its NIPT results [33], which might be due to differences in sample composition. In conclusion, we developed an accurate SVM-based approach and showed its potential for trisomy prediction on chromosomes 13/18/21. Compared with the one-Z-test approach, it has advantages in prediction accuracy and effectiveness, resulting in a lower rate of false results and a lower cost of retests. Other ML methods could also improve the prediction accuracy on NIPT NGS data, and SVM is suggested according to this study. Such an ML approach could also have potential in the detection of aneuploidy of other chromosomes or even micro-duplications and deletions, which will be included in our program if sufficient diagnosed cases become available.
To further validate and optimize our ML methods, we are planning to gather a larger and more comprehensive dataset in the future. We suggest that ML methods be employed for NIPT prediction in clinical practice instead of the one-Z-test based approach. S1 Fig. Z value distributions of the current one-Z-test based NIPT in simulation. Each of the three normal distributions was simulated by bootstrapping 10,000 times for negative samples (green line), positive samples with a fetal fraction of 5% (cyan line) and positive samples with a fetal fraction of 10% (red line), respectively. The yellow dashed line marks a Z score equal to 3. Dark dashed lines show the grey-zone interval. When the fetal DNA fraction is around 5%, which can occur in practice, it becomes difficult to distinguish positives from negatives among samples in the grey zone. S2 Table. Detailed information on the performance tests of SVM models on NIPT prediction using different parameter settings. The column for Group "N" & "P" is the result of internal validation; the column for Group "Unclassified" is the result of external validation for the QC-pass data that could not be classified by the one-Z-test method; the column for Group "QC-filtered" is the result of external validation for the QC-filtered data. The rows of SVM models using the RBF kernel with class weights and the optimal parameter setting are in bold. (XLSX) S3 Table. Detailed information on the performance tests of different ML models on NIPT prediction. The column for Group "N" & "P" is the result of internal validation; the column for Group "Unclassified" is the result of external validation for the QC-pass data that could not be classified by the one-Z-test method; the column for Group "QC-filtered" is the result of external validation for the QC-filtered data. The rows of SVM models are in bold. (XLSX) Validation: Jianfeng Yang.
Time-On-Task Effects on Working Memory Gating Processes—A Role of Theta Synchronization and the Norepinephrine System Abstract Performance impairment as an effect of prolonged engagement in a specific task is commonly observed. Although this is a well-known effect in everyday life, little is known about how it affects central cognitive functions such as working memory (WM) processes. In the current study, we ask how time-on-task affects WM gating processes and thus the processes regulating WM maintenance and updating. To this end, we combined electroencephalography methods and recordings of the pupil diameter as an indirect index of norepinephrine (NE) system activity. Our results showed that only WM gate opening but not closing processes showed time-on-task effects. On the neurophysiological level, this was associated with a modulation of dorsolateral prefrontal theta band synchronization processes, which vanished with time-on-task during WM gate opening. Interestingly, the modulatory pattern of the NE system, as inferred from pupil diameter data, also changed. At the beginning, a strong correlation between pupil diameter data and theta band synchronization processes during WM gate opening was observed. This modulatory effect vanished at the end of the experiment. The results show that time-on-task has very specific effects on WM gate opening and closing processes and suggest an important role of the NE system in the time-on-task effect on the WM gate opening process. Introduction The feeling of fatigue induced by prolonged work is prevalent in daily life and is always accompanied by impaired performance. This phenomenon has been frequently studied using time-on-task designs under laboratory conditions (Lorist et al. 2000; Kato et al. 2009; Möckel et al. 2015) and is often observed in tasks requiring executive functions (Falkenstein et al. 2002; Yu et al. 2021). A likely reason why executive functions, or cognitive processes depending on prefrontal cortical structures, are affected by fatigue has been outlined in the opportunity cost model by Kurzban et al. (2013), as outlined further below. When considering executive functions, inhibitory control, cognitive flexibility processes, and working memory (WM) processes (e.g., Miyake et al. 2000; Lehto et al. 2010) are important (Diamond 2013). WM is one of the best-studied cognitive functions in humans. Nevertheless, astonishingly little is known about how time-on-task affects the central processes determining WM dynamics, that is, how information supposed to enter WM is controlled or gated. The concept of "WM gating" has been widely used to describe the mechanism by which WM flexibly switches between two main functions/states: maintenance and updating (O'Reilly and Frank 2006). When the gate is open, new information can enter WM, and WM information is updated; when the gate is closed, distracting or novel information cannot enter WM, and the stored information is maintained. The dynamics of WM gating processes can be studied using the so-called reference-back paradigm (see Materials and Methods). In this task, gate opening and gate closing can be calculated between different trial types (Kessler and Oberauer 2014; Rac-Lubashevsky and Kessler 2016a). Using the reference-back task, WM opening and closing processes can be measured in various ways, such as response times (Rac-Lubashevsky and Kessler 2016a; Verschooren et al. 2021), event-based eye-blink rate (Rac-Lubashevsky et al. 2017), and neurophysiological measures (Rac-Lubashevsky and Kessler 2018).
This is possible because the reference-back task includes comparison trials that do not require WM updating, which, compared with the classic n-back task (Gevins and Cutillo 1993), provides a baseline for comparison and thus enables the identification of the gating processes and the calculation of the related costs (Rac-Lubashevsky and Kessler 2016a). A higher reaction time (RT) cost has consistently been reported for gate closing than for gate opening (Kessler and Oberauer 2014, 2015; Rac-Lubashevsky and Kessler 2016a, 2016b; Rempel et al. 2021). Considering that the gate-closed state is the default WM gating mode, as suggested by the prefrontal cortex (PFC), basal ganglia working memory (PBWM) model (Hazy et al. 2006), the process of gate closing is a switch from a demanding WM state (updating) to the default state (maintenance). Therefore, the considerable RT cost in gate closing fits the previous finding that switching to an easier task takes longer than switching to a more difficult task (Gilbert and Shallice 2002; Schneider and Anderson 2010). As opposed to gate closing, gate opening represents a switch from the default WM state (maintenance) to the demanding state (updating). Neurophysiological evidence showed that gate opening, but not gate closing, was associated with strong basal ganglia, thalamic, and fronto-parietal activations (Nir-Cohen et al. 2020). This suggests that, as a selective process driven by a specific stimulus, gate opening requires more intentional control than gate closing. This is of particular relevance considering time-on-task effects. According to the before-mentioned account by Kurzban et al. (2013), the costs of performing a task are represented during task performance and increase with time-on-task. Costs are exceptionally high when cognitive operations require intentional control. Moreover, WM gating functions require task-switching and cognitive flexibility processes (Kessler and Oberauer 2014; Rac-Lubashevsky and Kessler 2016a), which have recently been shown to exhibit strong time-on-task effects (Yu et al. 2021). Therefore, it is reasonable to hypothesize that time-on-task effects are stronger during WM gate opening processes than during WM gate closing processes. We examine this hypothesis with particular emphasis on neurophysiological processes. Regarding neurophysiological processes, we primarily focus on theta-band dynamics. Theta oscillations play a primary role in WM control (Klimesch 1999; Başar et al. 2001; Sauseng et al. 2010; Karakaş 2020) and in its orchestration with cognitive control and response selection processes (Chmielewski et al. 2016, 2017; Dippel et al. 2017; Adelhöfer and Beste 2020; Takacs et al. 2020). Importantly, theta band activity (TBA) seems to be particularly relevant during the sequential encoding of WM items (Roux and Uhlhaas 2014), for which computational models suggest that input-gating mechanisms regulate these dynamics (Chatham and Badre 2015; Rac-Lubashevsky and Kessler 2016a). We therefore expect TBA during gate opening to decrease with time-on-task. Using electroencephalography (EEG) beamforming methods (Gross et al. 2001), we delineate which functional neuroanatomical structures are associated with the theta band time-on-task effect during WM gate opening. Here, we expect prefrontal regions to show modulations because these regions are particularly prone to time-on-task effects (Kurzban et al. 2013; Yu et al. 2021) and play an essential role in WM processes and WM gate opening in particular (Nir-Cohen et al. 2020).
However, several lines of evidence suggest that TBA during cognitive control encodes multiple levels of information, that is, information about the stimulus being presented, information on how to respond to a stimulus, and information specifying the motor process itself (Chmielewski et al. 2017; Dippel et al. 2017; Mückschel et al. 2017; Giller et al. 2020; Pscherer et al. 2020). These insights were made possible by applying residue iteration decomposition (RIDE) (Ouyang et al. 2011, 2015) to single-trial EEG data, which were time-frequency-transformed afterwards. RIDE yields three functionally distinct activity clusters: 1) the S-cluster captures perceptual and attentional selection mechanisms, 2) the C-cluster contains information specifying how to map a stimulus onto the appropriate response, and 3) the R-cluster reflects motor execution processes. In principle, all of this information is central to control during WM gating processes. Therefore, all of these TBA clusters are likely to show time-on-task effects. However, according to the model by Kurzban et al. (2013), especially effortful decision processes depending on prefrontal structures are prone to fatigue or time-on-task effects. Since these processes are reflected by the C-cluster (Ouyang et al. 2011, 2015, 2017), it is possible that C-cluster TBA in particular shows time-on-task effects. Based on this assumption, we were interested in the modulatory processes likely associated with the time-on-task effects on WM gating, especially on gate opening. WM strongly depends on the PFC, where various neurotransmitters modulate WM (Motley 2018). Among these neuromodulators, norepinephrine (NE) has been suggested to strongly impact WM functions in the PFC (Arnsten et al. 1996; Zhang et al. 2013). Specifically, NE within the PFC exerts an inverted-U-shaped modulation of WM performance. Moderate NE levels promote WM performance by decreasing distractibility. Low or exaggerated NE levels impair WM performance (Arnsten et al. 1996; Robbins and Arnsten 2009). This inverted-U-shaped modulating function of NE was also described in the adaptive gain theory, combining two NE modes: the phasic and the tonic mode (Aston-Jones and Cohen 2005). It was suggested that particularly the phasic mode of the NE system is driven by task-related decision processes, and a strong phasic NE response indicates high task engagement (Aston-Jones and Cohen 2005). Phasic NE arousal has been observed to amplify perception and memory (Mather et al. 2016) and attentional performance (Howells et al. 2012). All of these processes are necessary for gate opening as a stimulus-driven process. According to this evidence, gate opening, which demands strong PFC control over inhibiting distracting information and attention switching, is likely modulated by phasic NE activity. By contrast, gate closing requires less cognitive control and is thus less modulated by phasic NE activity. In this study, we record pupil diameter data as a proxy for NE release. Evidence shows that the pupil diameter covaries with the NE system and has been used as a reliable indicator of NE activity in many studies (Hou et al. 2005; Gilzenrat et al. 2010; Jepma and Nieuwenhuis 2010; Murphy et al. 2011; Hong et al. 2014; Hopstaken et al. 2015). In particular, baseline-corrected pupil size has been used to represent phasic NE activity (Gabay et al. 2011; Joshi et al. 2016; Reimer et al. 2016; Wolff et al. 2018).
To examine the modulatory role of the NE system in WM gate opening in the context of time-on-task effects, we correlate the time series of the phasic pupil diameter and the task-related theta activity in the PFC. The interaction between pupil diameter and task-related theta activity is expected to be strong at the beginning of the experiment, reflecting a strong NE modulation effect. However, the modulatory effects of phasic NE activity are unlikely to remain at a consistently high level during WM gate opening, given the frequently observed decrease of phasic NE activity with time-on-task, which indicates task disengagement (Hopstaken et al. 2015). According to the opportunity cost model, the increase in the opportunity cost/effort of the highly demanding WM gate opening with time-on-task reduces engagement in the primary task and increases engagement in task alternatives (Kurzban et al. 2013). In line with the adaptive gain theory, the neural correlates of the assumed performance decline in WM gate opening (expected at the end of the experiment) might be related to a decrease of phasic NE activity, such that the modulatory effect of phasic NE activity diminishes accordingly. Thus, the control-related activities in the PFC would no longer be driven by the NE system. To conclude, we expected a time-on-task effect particularly on WM gate opening, observable as a strong correlation between the phasic pupil diameter and the task-related prefrontal theta activity at the beginning of the experiment and a decrease or disappearance of this correlation at the end of the experiment. Participants A total of n = 38 healthy volunteers (13 male, mean age: 25.24 ± 2.84 years) participated in the experiment. Among them, n = 31 participants (12 male, mean age: 25.74 ± 2.53, all right-handed) completed the experiment and were included in the data analysis. All participants had normal or corrected-to-normal vision. They were required to consume no caffeinated beverages in the morning before the experiment, which started around 9 AM. All participants provided written informed consent before the experiment and were reimbursed with 35 euros afterwards. The Ethics Committee of the Medical Faculty of the TU Dresden approved our study, and the experiment was conducted in accordance with the Declaration of Helsinki. Task and Procedures We adapted the reference-back paradigm (Rac-Lubashevsky and Kessler 2016a) to a time-on-task design. A capital letter ("X" or "O") framed by a colored square (blue or red) was presented in each trial. Participants were required to decide whether the presented letter was identical to the red-framed letter displayed previously. The right "Ctrl" button had to be pressed when the letters were identical, and the left "Ctrl" button when they were not. The reference-back task required participants to consistently store the previously red-framed letter in WM and update it when a new red-framed letter appeared. Accordingly, trials with red-framed letters were reference trials, and those with blue frames were comparison trials; that is, blue-framed letters were only used for comparison with the previous red-framed letter. Trials with a frame in the same color as the previous trial were no-switch trials, while those with a frame in a different color from the previous one were switch trials. Moreover, the required response differentiated trials into match (identical) trials and mismatch (not identical) trials.
An example of the reference-back task and the trial definitions is presented in Figure 1. Each trial started with a fixation cross presented for 600-1000 ms, followed by the stimulus presentation for 1500 ms or until a response was made. The screen then turned blank for 1000 ms. Before the experiment, detailed instructions and a 30-trial practice block were provided to familiarize participants with the task. The formal experiment consisted of 3600 trials and took around 2.5 h. Thirty percent of the trials were switch trials. The frame color and the required response were assigned in a balanced manner. The order of stimulus presentation was randomized but was the same for all participants. To measure pupil size in the resting state, a 1-min fixation period was scheduled before and after the experiment, and a 10-s fixation period every 180 trials during the experiment. Participants were required to fixate the cross in the display center without any response during the fixation periods. Previous studies (Lim et al. 2016; Yu et al. 2021) showed that short breaks of around 10 s do not affect time-on-task effects. No longer breaks were included during the entire experiment, to avoid cognitive performance recovering from a break lasting several minutes (Möckel et al. 2015). The starting trial after each fixation period had a red frame and was not counted among the 3600 trials. Computation of Gating Indices All trials (n = 3600) were equally divided into four sessions of 900 trials each for each participant. In each session, trials were categorized into eight conditions according to three features: reference/comparison, switch/no-switch, and match/mismatch. In each session, gate opening and gate closing indices were calculated from trials with correct responses, following the previous study of Rac-Lubashevsky and Kessler (2016a), as given in Formulas 1 and 2, respectively: "Gate opening = (switch_match_reference + switch_mismatch_reference) − (no-switch_match_reference + no-switch_mismatch_reference)" Formula 1. "Gate closing = (switch_match_comparison + switch_mismatch_comparison) − (no-switch_match_comparison + no-switch_mismatch_comparison)" Formula 2. As shown in the formulas, gate opening is calculated as the switching cost of reference trials, while gate closing is the switching cost of comparison trials. In reference trials, participants were required to update the reference letter; in comparison trials, participants needed to maintain the previous reference. Accordingly, the switching processes in WM gating were defined in terms of updating and maintenance. Switch reference trials represent a change from the closed to the open state of the WM gate, while no-switch reference trials represent a consistently open WM gate and can be seen as a baseline for the updating process. Therefore, the contrast between them describes the action of opening the gate. Following the same logic, the contrast between switch comparison trials and no-switch comparison trials represents the gate closing action, which turns the active reference-updating process into the default WM state of maintenance.
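To make Formulas 1 and 2 concrete, the following minimal sketch computes the two gating indices from hypothetical per-condition mean correct RTs of one participant in one session; the dictionary keys and values are placeholders, not data from the study.

# Illustrative sketch of the gating RT indices defined in Formulas 1 and 2.
rt = {
    ("reference", "switch", "match"): 620, ("reference", "switch", "mismatch"): 655,
    ("reference", "no-switch", "match"): 560, ("reference", "no-switch", "mismatch"): 585,
    ("comparison", "switch", "match"): 640, ("comparison", "switch", "mismatch"): 690,
    ("comparison", "no-switch", "match"): 545, ("comparison", "no-switch", "mismatch"): 570,
}

def gating_cost(trial_type):
    """Switch cost for one trial type: (switch match + switch mismatch) - (no-switch match + no-switch mismatch)."""
    return (rt[(trial_type, "switch", "match")] + rt[(trial_type, "switch", "mismatch")]
            - rt[(trial_type, "no-switch", "match")] - rt[(trial_type, "no-switch", "mismatch")])

gate_opening = gating_cost("reference")    # Formula 1
gate_closing = gating_cost("comparison")   # Formula 2
print(gate_opening, gate_closing)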
Pupil Diameter Recording and Processing Pupil diameter data were recorded with a RED 500 eye tracker using the iView X software (SensoMotoric Instruments GmbH) at a sampling rate of 256 Hz. The eye tracker was attached underneath the display, around 60 cm away from the participants. After the recording, raw pupil diameter data were synchronized with the EEG data from the same participant according to identical start and end markers in both datasets, using the EYE-EEG extension of EEGLAB (http://www2.hu-berlin.de/eyetracking-eeg/). After that, high-frequency activity was removed by a 20-Hz low-pass filter, and a median filter removed pupil spikes. Artifacts, such as eye movements, were linearly interpolated using an open-source toolbox developed by Kret and Sjak-Shie (2019). After preprocessing, pupil diameter data from both eyes were averaged for each participant. Pupil diameters during the resting state were segmented according to the corresponding markers and averaged across time for each fixation period. Task-related pupil diameters were divided into four sessions, and 900 trials were segmented for each session according to the stimulus markers. Each segment extended from 1000 ms before stimulus onset to 2000 ms after stimulus presentation. All trials with correct responses in each session were categorized into eight conditions as outlined in the "Computation of Gating Indices" section. For each condition and session, the task-related pupil diameter was averaged across all corresponding trials and baseline-normalized using the average pupil diameter between −200 and 0 ms relative to stimulus onset. EEG Recording and Processing The EEG was recorded from 60 equidistantly positioned Ag/AgCl electrodes. The coordinates of the ground and reference electrodes were theta = 58, phi = 78, and theta = 90, phi = 90, respectively. EEG data were recorded simultaneously with the pupil data during the experiment using the BrainVision Recorder software package (Brain Products, Inc.) at a sampling rate of 500 Hz. After recording, the raw EEG data were preprocessed in the BrainVision Analyzer 2 software package (Brain Products, Inc.) with the following steps. First, the EEG signals were down-sampled to 256 Hz. Then, an infinite impulse response band-pass filter from 0.5 to 40 Hz with a slope of 48 dB/oct and an additional 50-Hz notch filter were applied. After that, we discarded defective electrode channels and applied a new reference calculated from the remaining channels. Furthermore, regular artifacts, such as eye movements and pulse, were removed by independent component analysis (infomax algorithm), and irregular artifacts, such as technical noise, were removed via manual raw data inspection. Subsequently, the previously discarded channels were interpolated by spherical splines using neighboring electrodes. After EEG preprocessing, the continuous EEG data were segmented into single trials for the four sessions, as for the pupil diameter data. The length of each trial segment was 4000 ms, with 1000 ms before the stimulus and 3000 ms after the stimulus. The long post-stimulus time window was chosen to avoid edge effects in the subsequent time-frequency decomposition. For each trial, an automatic artifact rejection was applied to remove residual artifacts according to the following criteria: a maximal value difference above 150 μV in an interval of 200 ms, a minimal amplitude < −100 μV or maximal amplitude > 100 μV, or an activity (max − min) < 0.5 μV in an interval of 200 ms. All trials with correct responses and without artifacts in each session were categorized into eight conditions as described in the Computation of Gating Indices section.
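As an illustration, the rejection criteria listed above might be expressed as in the sketch below; the sliding-window implementation and parameter names are assumptions for illustration rather than the BrainVision Analyzer routine itself.

# Illustrative sketch of the automatic single-trial rejection criteria, applied to one
# channel of one epoch. "data" is a 1-D array in microvolts sampled at 256 Hz.
import numpy as np

def reject_trial(data, srate=256, win_ms=200,
                 max_diff=150.0, amp_min=-100.0, amp_max=100.0, min_activity=0.5):
    win = int(round(win_ms / 1000 * srate))
    if data.min() < amp_min or data.max() > amp_max:
        return True                                   # amplitude criterion violated
    for start in range(0, len(data) - win + 1):
        seg = data[start:start + win]
        peak_to_peak = seg.max() - seg.min()
        if peak_to_peak > max_diff or peak_to_peak < min_activity:
            return True                               # jump or flat-line criterion violated
    return False

epoch = np.random.normal(0, 10, size=4 * 256)         # a synthetic 4-s epoch
print("reject:", reject_trial(epoch))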
Residue Iteration Decomposition The RIDE was implemented with the "RIDE toolbox" developed by Ouyang et al. (2011) using the segmented single-trial EEG data. Baseline correction was applied to each trial from −200 to 0 ms relative to stimulus presentation before RIDE. For each session and condition defined previously, three RIDE clusters were decomposed for every single trial: an S-cluster related to stimulus-related processes, a C-cluster representing central activities of stimulus-response translation, and an R-cluster related to motor response execution. We used the following time windows to calculate the S-, C-, and R-clusters: S-cluster, −200 to 900 ms relative to stimulus onset; C-cluster, 200-900 ms relative to stimulus onset; and R-cluster, −300 to 300 ms relative to the response. The C-cluster latencies are iteratively updated by applying L1-norm minimization during RIDE. More details about RIDE-cluster computation can be found in previous publications (Ouyang et al. 2015). Each trial was decomposed into three clusters with the same length as the original trial (i.e., 4000 ms, starting 1000 ms before the stimulus and extending 2000 ms after the stimulus). Time-Frequency Decomposition For each RIDE cluster in each session and condition, we computed the time-frequency representations of theta oscillations (4-7 Hz) using the FieldTrip toolbox and wavelet time-frequency transformation (Oostenveld et al. 2010). Morlet wavelets in the time domain were calculated for the theta frequencies in steps of 0.5 Hz, where the wavelet duration was three and the number of wavelet cycles was 5.5. After that, the time-frequency representation of theta oscillations was normalized by the baseline activity between −200 and 0 ms relative to stimulus onset; that is, a decibel conversion calculated as P_dB = 10 × log10(P_toi/P_baseline) (where P is power and toi refers to "time of interest") was performed. For each participant and session, the baseline-normalized theta power for gate opening and gate closing was calculated following Formulas 1 and 2 (see above). To examine when the time-on-task effects (i.e., the difference between the first and the last sessions) were observed in gate opening and gate closing theta activities, cluster-based permutation tests comparing sessions S1 and S4 on the time-frequency representations were applied for gate opening and closing separately. This step revealed a significant difference between sessions S1 and S4 around 0.5-1.5 s for all RIDE clusters and only for the gate opening condition (see Results). Hence, further analyses were based on gate opening theta power between 0.5 and 1.5 s only. We then applied cluster-based permutation tests to the frequency representation of each RIDE cluster averaged between 0.5 and 1.5 s to identify the electrodes showing a significant difference between sessions S1 and S4 in this time window. All cluster-based permutation tests were based on dependent t-tests at each electrode (and time point). The Monte Carlo method was used to compute the reference distribution of the permutation test with 500 random draws. The threshold for the sample-specific t-tests was 0.05. The cluster-level t-values were computed as the sum of all t-values within electrodes (and time points). The minimum number of electrodes (or time points) forming a cluster was 1.
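The decibel baseline normalization described in the Time-Frequency Decomposition paragraph above can be illustrated on a synthetic power time course; the Morlet convolution itself, performed with FieldTrip in the actual pipeline, is replaced here by placeholder power values, and all variable names are assumptions.

# Illustrative sketch of the P_dB = 10 x log10(P_toi / P_baseline) conversion.
import numpy as np

srate = 256
times = np.arange(-1.0, 3.0, 1.0 / srate)                       # epoch from -1 to 3 s
power = np.abs(np.random.normal(1.0, 0.1, size=times.size))     # placeholder theta power

baseline_mask = (times >= -0.2) & (times < 0.0)                 # -200 to 0 ms baseline
p_baseline = power[baseline_mask].mean()
power_db = 10 * np.log10(power / p_baseline)                    # decibel conversion

toi_mask = (times >= 0.5) & (times <= 1.5)                      # 0.5-1.5 s time of interest
print("mean theta power in TOI (dB):", power_db[toi_mask].mean())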
Source Estimation The neuroanatomical source activities showing the S1 − S4 difference in task-related theta band activity were estimated using a dynamic imaging of coherent sources (DICS) beamformer (Gross et al. 2001) for each RIDE cluster, only for the gate opening condition, in accordance with the cluster-based permutation tests on the sensor-level data. For each participant, theta activities of gate opening in sessions S1 and S4 were selected between 0.5 and 1.5 s after stimulus presentation. The corresponding baseline theta activities were selected from −1 to 0 s relative to stimulus onset for the above-mentioned trials. Individual theta frequency power and the cross-spectral density matrix were calculated for these conditions using a single Hanning taper frequency transformation. After that, a spatial filter was constructed using all baseline and activity conditions with a regularization parameter of 5%. We took the same number of trials in all conditions to construct the spatial filter to avoid spurious noise-related sources (Handy 2009) (determined by the condition with the smallest number of trials). The trials in each condition used for constructing the spatial filter were randomly selected. This spatial filter was then applied to the individual power to estimate the sources. Afterward, the source power of each session and condition was baseline-normalized in decibels as P_dB = 10 × log10(P_toi/P_baseline) (P is power). Based on the decibel power of each condition, the source power of gate opening for each session was calculated following Formula 1. The average gate opening source theta powers were then computed by averaging across participants for each session and were mapped onto the FieldTrip head model template "standard_mri." After that, we selected the top 1% of voxels showing a positive S1 − S4 difference to construct the neuroanatomical clusters of interest using the "DBSCAN" algorithm. The minimum number of voxels to form a cluster was seven. We then reconstructed the activities in the regions of interest through a linearly constrained minimum variance (LCMV) beamformer (Van Veen et al. 1997). This was conducted in each anatomical cluster for the corresponding RIDE clusters. For the gate opening condition and each participant and session, a covariance matrix was computed using the corresponding RIDE-decomposed single trials to generate a spatial filter, which was then applied to the RIDE-decomposed data to reconstruct the time series of each corresponding source index. The time series was then averaged across the employed indices and further time-frequency-decomposed using Morlet wavelets, as in the sensor-level time-frequency decomposition. Time-frequency representations of gate opening task-related theta power were calculated in the same way as in the previous steps. Statistical Analysis The statistical analysis of behavioral data utilized two parameters: accuracy and RT, calculated for each session, condition, and participant. RT data were only calculated from trials with correct responses. To examine the time-on-task effects on behavioral performance, repeated measures ANOVAs with the within-subject factor "session" (S1, S2, S3, and S4) were applied to the accuracy and RT parameters for each condition. After that, the accuracy and RTs of gate opening and closing were computed separately using Formulas 1 and 2 for each condition and participant. Repeated measures ANOVAs with the within-subject factors "session" (S1, S2, S3, and S4) and "gating" (opening and closing) were calculated for the RT and accuracy data. For the pupil diameter data, statistical analyses were conducted separately on task-related pupil activity and resting-state pupil baselines. For the task-related pupil diameter, gate opening and closing data were calculated for each session using Formulas 1 and 2.
Paired-samples t-tests were separately applied on gate opening and closing pupil diameter for each time point from 0 to 2 s after stimulus presentation comparing sessions S1 and S4. For pupil baselines during the fixation periods, a paired-samples t-test was applied to compare the averaged pupil baselines between the start and the end of the experiment, and a repeated measures ANOVA was applied on pupil baselines across the experiment. For all repeated measures ANOVAs, Greenhouse-Geisser correction was applied when necessary and post hoc tests were Bonferroni-corrected. To examine the possible interaction between NE dynamics and cortical TBA and to investigate how their interaction changes with time/sessions, correlations between task-related pupil diameters and source-level theta activities were computed for each RIDE cluster in each anatomical cluster. The correlation analysis was applied only to the gate opening condition for sessions S1 and S4 because the time-on-task effect was only observed in the behavioral performance of gate opening. Task-related source-level theta activities were selected from 0 to 1.5 s, and baseline-normalized pupil diameter was selected from 0 to 2 s. Task-Related Theta Activities The RIDE-decomposed theta activities at the sensor level are shown in Figure 3. For all RIDE clusters (S, C, and R), significant differences (P ≤ 0.05) between sessions S1 and S4 were observed for gate opening but not for gate closing (Fig. 3A,E,I). The time windows showing significant differences of task-related gate opening theta powers were centered around 1 s for all RIDE clusters. Therefore, we selected gate opening task-related theta powers between 0.5 and 1.5 s for further analyses. From 0.5 to 1.5 s after stimulus presentation, a significant difference between sessions S1 and S4 was observed at bilateral electrode sites for all RIDE clusters and was also observed at frontal sites for the C-cluster. The time-frequency representations of task-related gate opening theta activities at the electrode sites showing significant differences for each RIDE cluster (Figs 3C,D,G,H,L) indicate that the gate opening effects were quite strong in session S1 but negligible in session S4. Figure 3. Task-related theta activities at the sensor level. Plot (A) shows the electrodes and the time points of the difference of gating effects between sessions S1 and S4 for the RIDE S-cluster data. Only the data points with significant differences (P ≤ 0.05) are presented. The color bar indicates the P value. Plot (B) shows the topography of the theta power difference of gate opening between sessions S1 and S4 from 0.5 to 1.5 s for the RIDE S-cluster data. The color bar shows the t-values. "×" and "*" represent significance at P ≤ 0.05 and P ≤ 0.01, respectively. Plots (C) and (D) present the time-frequency decomposition of gate opening theta oscillations of the RIDE S-cluster using electrodes with a significant S1-S4 difference as in plot (B) for sessions S1 and S4, respectively. Theta frequency and the time window of interest (0.5-1.5 s) are marked by a red rectangle. The topography of the task-related gate opening theta power between 0.5 and 1.5 s for each session is presented on the upper-right side. The color bar indicates the task-related theta power in dB. Plots (E-H) and plots (I-L) correspond to the descriptions of plots (A-D) for the RIDE C- and R-clusters, respectively. The anatomical regions of the highest positive S1 − S4 difference of task-related gate opening theta powers are presented in Figure 4.
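As a brief aside before turning to the anatomical sources below, the per-time-point pupil comparison described in the Statistical Analysis section can be sketched as follows. The arrays are hypothetical (participants × time points sampled from 0 to 2 s), and SciPy's paired t-test stands in for whatever implementation was actually used.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)
n_subjects, n_times = 24, 200          # hypothetical: 0-2 s at 100 Hz
pupil_s1 = rng.normal(size=(n_subjects, n_times))   # gate opening, session S1
pupil_s4 = rng.normal(size=(n_subjects, n_times))   # gate opening, session S4

# Paired t-test at every time point, comparing S1 against S4.
t_vals, p_vals = ttest_rel(pupil_s1, pupil_s4, axis=0)

times = np.linspace(0.0, 2.0, n_times)
significant = times[p_vals < 0.05]
print(f"{significant.size} of {n_times} time points differ at p < 0.05")
```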
For the RIDE S-cluster, the voxels formed two anatomical clusters showing the highest positive S1 − S4 difference (Fig. 4A). The largest anatomical cluster (S-source 1) was located in the left dorsolateral, medial, and orbital superior frontal gyrus (BA9, 10, and 46) and extended to the left middle frontal gyrus (BA46). A relatively small cluster (S-source 2) was also observed in the left postcentral gyrus (BA3, 1, and 2). For the RIDE C-cluster, three anatomical clusters were evident (Fig. 4B). The largest anatomical cluster (C-source 1) was left-lateralized in the dorsolateral and medial superior frontal gyrus (BA9 and 46) and the middle frontal gyrus (BA46). The other clusters (C-source 2) were mainly located in the left precentral and postcentral gyri (BA4, 3, 1, and 2) and extended to the left middle frontal gyrus (BA46). For the RIDE R-cluster (Fig. 4C), the difference was centered in one anatomical cluster (R-source 1), which was located in the left dorsolateral and medial superior frontal gyrus (BA9 and 46) and the left middle frontal gyrus (BA46). Figure 5A and B present the baseline-corrected pupil diameter for gate opening and closing, respectively. A significant difference between the first and the last sessions (S1 and S4) was observed between 1.32 and 2 s for gate opening, with larger pupil dilation in the first session, and between 0.18 and 0.73 s for gate closing, where pupil size was larger in session S4 than in S1. The baseline-normalized pupil dilation in each condition and pupil sizes during the resting times are presented in Supplementary Figure S2. Behavioral performance for each condition is shown in Supplementary Figure S1. Correlation between Source-Level Theta Activities and Pupil Diameter The correlations between source-level theta activities and pupil activities for each RIDE cluster in the respective anatomical clusters are illustrated in Figure 6. Significant positive correlations were observed for all RIDE clusters in all anatomical sources in the first session (S1) but not in the last session (S4). In session S1, the source-level task-related theta power of the RIDE S-cluster in the frontal cortex (i.e., S-source 1) around 0.5-1.2 s was correlated with baseline-normalized pupil diameter around 0.3-1 s (46 738 data points, mean R = 0.40, and mean P = 0.029). Two large correlation clusters were observed for the source-level task-related theta power of the RIDE S-cluster in the precentral and postcentral gyri (i.e., S-source 2) in session S1. One was around 0.2-1.3 s for theta activities and around 0.2-1 s for pupil activities (61 789 data points, mean R = 0.43, and mean P = 0.020). Another one was evident around 0.6-1.5 s for theta activities and around 1-2 s for pupil activities (46 517 data points, mean R = 0.40, and mean P = 0.028). For the RIDE C-cluster in session S1, a strong positive correlation was observed between the source-level theta activities in the frontal cortex (i.e., C-source 1) around 0.5-0.8 s and pupil activities around 0.5-1.5 s (12 821 data points, mean R = 0.38, mean P = 0.035). The correlations between the source theta activities of C-source 2 and phasic pupil diameter were also significant, and they formed two adjacent correlation clusters. Together, these two correlation clusters were located around 0.5-1 s for task-related theta activity and around 0-0.7 s for pupil diameter (23 030 data points, mean R = 0.41, and mean P = 0.024).
For the RIDE R-cluster in session S1, the correlation between its source theta power in the frontal regions (R-source 1) and pupil activities was observed between 0.4 and 0.7 s for theta activities and between 0.5 and 1.6 for pupil activities (32 464 data points, mean R = 0.41, and mean P = 0.027). In session S4, no significant positive correlation clusters with over 5000 data points were observed. Discussion The primary goal in the present study was to examine the time-on-task effects on WM gating functions (i.e., gate opening and closing), including its neurophysiological basis and functional neuroanatomy. To achieve this, we utilized a reference-back paradigm in a time-ontask setting, recorded EEG signals, and tracked the pupil diameter as indirect measures for NE system dynamics. This was based on our assumption that the NE system may modulate the time-on-task effects on WM gating functions. To examine which subprocesses of WM gating functions were prone to time-on-task effect, the RIDE method was employed to distinguish cognitive subprocesses relevant to stimulus, response, and the transitional processes between stimulus evaluation and responding. We also applied beamforming techniques to extract the anatomical source of the time-on-task effect for each subprocess. The results revealed an evident time-on-task effect. There were time-on-task effects for WM gate opening but not on WM gate closing. Corroborating previous studies, the switching costs in RT during WM gate closing were higher than in WM gate opening throughout the entire experiment, ref lecting the different cognitive processes of gate opening and closing Oberauer 2014, 2015;Kessler 2016a, 2016b;Rempel et al. 2021). The difference of RT costs between gate opening and closing fits the PBWM model that WM gate opening is a more active process than gate closing (Hazy et al. 2006). Though rare studies used the switching cost of accuracy as the indicator of the WM gating, our study demonstrated a consistent accuracy cost in gate opening but not in gate closing. This corroborates that WM gate opening was a more difficult task than WM gate closing. The most important finding of the behavioral data was that the switching cost in the accuracy data increased with time during gate opening, while no significant effects were obtained during gate closing. This dissociation shows that WM gate opening but not closing processes are affected by time-on-task effects. This dissociation of time-on-task effects between gate opening and closing is also ref lected in theta band dynamics showing effects during WM gate opening but no significant effects during WM gate closing. Timeon-task effects were seen for all of the isolated RIDE clusters, which suggests that all aspects of information coded in the theta signal are affected by time-on-task effects during WM gate opening processes. This result is reasonable considering that all information about stimulus identity, stimulus-response relations, and the motor response is essential for goal-directed behavior (Rac-Lubashevsky and Kessler 2016b). For gate opening, the difference of task-related theta activities between sessions S1 and S4 was mainly observed between 0.5 and 1.5 s after stimulus onset, starting when the response was executed (i.e., between 457 and 692 ms) and ending before the subsequent trial. In this time window, the task-related theta synchronization during WM gate opening in session S1 was particularly strong at frontal electrodes sites. 
Thus, there was a robust theta synchronization at the beginning of the experiment where gate opening processes were most efficient, as indicated by the behavioral data. Previous studies suggested that theta synchronization processes are essential during the encoding and retrieval of contextual information (Klimesch et al. 1997, 2001). Especially in the PFC, theta synchronization processes promote WM performance (Benchenane et al. 2011; Alekseichuk et al. 2016). Interestingly, the results of the beamforming analysis revealed that differences in the degree of theta synchronization processes between sessions S1 and S4 were associated with the left dorsolateral prefrontal cortex (DLPFC). The decrease in theta synchronization in this region can explain emerging difficulties in WM gate opening processes as reflected at the behavioral level. The DLPFC plays an essential role in context updating for cognitive control (O'Reilly 2006; O'Reilly and Frank 2006; Badre 2012; Nee and Brown 2013). A few studies interpreted the activation of the PFC after the response as a process of refreshing the just-activated representation for prospective utilization (Johnson et al. 2005; Raye et al. 2007). Thus, at the beginning of the experiment, the highly activated theta synchronization in the DLPFC might indicate strong control of WM gate opening to guarantee a successful updating process. These specific WM processes are then affected by time-on-task. According to the opportunity cost model (Kurzban et al. 2013), executive functions in the PFC are prone to time-on-task effects. The decline of theta synchronization in the PFC suggests impaired WM gate opening processes, leading to the increased error rate in gate opening performance. The results corroborate the opportunity cost model in terms of its functional neuroanatomical predictions. However, the data also qualify the opportunity cost model by showing that only specific prefrontal cortical functions (i.e., WM gate opening processes) are affected by time-on-task. Besides the DLPFC effects in theta synchronization processes between sessions S1 and S4, the pre- and postcentral gyri for the RIDE S- and C-clusters represented the stimulus-related process and the transitional process between stimulus and response, respectively (Ouyang et al. 2011). The pre- and postcentral gyri are part of the motor-somatosensory cortical network associated with sensory and motor processing (Bigbee 2011). Altogether, the high task-related theta activity in the first session suggests an active "encoding" process of the reference stimulus in WM gate opening, and its decline shows that the time-on-task effect also impaired the "encoding" of a new reference. Most importantly, task-related theta band effects during gate opening were strongly correlated with phasic pupil dynamics in the first session, but these significant correlations vanished in the last session. In the first session, the task-related theta oscillations were activated at a relatively higher level than in the last session. Meanwhile, the phasic pupil amplitude in gate opening was also evident, indicating a strong phasic NE activation relevant to the task engagement and mental effort invested (Beatty 1982; Aston-Jones and Cohen 2005; Gilzenrat et al. 2010; Eckstein et al. 2017; van der Wel and van Steenbergen 2018; da Silva et al. 2021) in WM gate opening.
When closing the gate, no difference of phasic pupil peaks between sessions S1 and S4 was also observed, suggesting the allocated mental effort stayed at the same level. Likely, the phasic NE activity in the first session plays an essential role in the PFC gate opening functions, as suggested by the strong positive correlations between pupil diameter and theta band dynamics at the source level. These correlation matrices in the session S1 appeared relatively early for phasic pupil activation and late for theta synchronization, suggesting that the high NE activation likely modulated gate opening-related processes in cortical regions. This early NE modulation might be driven by novel reference information (Foote et al. 1980) as gate opening is more stimulus-driven. Task-related source-level theta band dynamics in all RIDE clusters were correlated with phasic pupil dynamics from 500 ms after stimulus onset. The finding that correlations between source-level theta band dynamics and pupil diameter were evident for all RIDE clusters suggests that the NE system modulates stimulus information, stimulus-response inhibition, and motor-response related information equally. This suggests that different informational contents coded in theta band dynamics are modulated simultaneously and that the degree of this modulation is similar for the different informational contents coded in the signal. It has been argued that the NE system modulates neural processes during task-relevant decision points (Aston-Jones and Cohen 2005). It is possible that in the first session, task-related decision processes during gate opening are strongly modulated by the NE system for all examined coding levels in the theta band dynamics. The opportunity cost model (Kurzban et al. 2013) suggests that effort, which is also reflected by the pupil diameter data and related to the NE system (Hopstaken et al. 2015), is modulated with time-on-task. The strong correlation between theta band dynamics and pupil diameter during WM gate opening in the first session suggests that the effort was primarily allocated in the WM gating opening control, and these processes were facilitated through phasic NE release, which enhances high-priority information (i.e., the updated reference) while suppressing the rest (Mather et al. 2016). In the last session, the pupil diameter became smaller, and taskrelated TBA decreased significantly. Moreover, also their correlation faded. This indicates that the investment of mental effort was gradually withdrawn with time or that the modulation of task-related decision processes during WM gate opening faded with time-on-task in prefrontal cortices. In addition, the pupil size diameter baselines gradually increased during the experiment, reflecting an enhanced tonic mode of NE systems (Gilzenrat et al. 2010). This increase in the tonic mode suggests that the overall perceived mental effort increased (Howells et al. 2010). Therefore, the high demand/cost in gate opening may lead to the withdrawal of effort, which was instead deployed to alternative tasks with lower opportunity cost (Brehm and Self 1989;Kurzban et al. 2013). This can also explain the dissociation between phasic pupil diameter and the task-related theta dynamics in the last session, suggesting that the NE activities and gate opening-related control processes became independent of each other under the effect of time-on-task. 
Conclusion In conclusion, our study showed that WM gate opening, which requires more active control processes in the PFC, is more prone to time-on-task effects than WM gate closing. Based on the opportunity cost model (Kurzban et al. 2013), the performance decline of WM gate opening was likely because the high cost of gate opening control does not pay off in the long run; thus, the effort is allocated to alternative tasks. Our study also suggests that the NE system plays a critical role in this shift of effort allocation. In the early phase of the experiment, strong phasic NE release facilitates prefrontal control of WM gate opening. However, in the late phase, when the phasic NE activity wanes, its modulation of cortical activities also fades as disengagement from WM gate opening increases. Supplementary Material Supplementary material can be found at Cerebral Cortex Communications online. Data availability All data can be obtained from the corresponding author upon reasonable request. Notes We thank all participants. Conflict of Interest: None declared.
2022-01-14T16:13:26.637Z
2022-01-13T00:00:00.000
{ "year": 2022, "sha1": "85cd6ef58fbbc3c3e92aca18db6a2597c354a470", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1093/texcom/tgac001", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5290eda9bca911e0a711c884852e0c9e6d0f6f2f", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
225500490
pes2o/s2orc
v3-fos-license
Influence of solvent mixture on nucleophilicity parameters: the case of pyrrolidine in methanol–acetonitrile The course of organic chemical reactions is efficiently modelled through the concepts of "electrophiles" and "nucleophiles" (meaning electron-seeking and nucleus-seeking reactive species). On the one hand, an advanced approach to the correlation of the nucleophilicity parameters N and electrophilicity parameters E has been derived from the linear free energy relationship log k (20 °C) = s(N + E). On the other hand, the general influence of solvent mixtures, which are very often employed in preparative synthetic chemistry, has been poorly explored theoretically and experimentally to date. Herein, we combined experimental and theoretical studies of the solvent influence on pyrrolidine nucleophilicity. We determined the nucleophilicity parameters N and s of pyrrolidine at 20 °C in CH3OH/CH3CN mixtures containing 0, 20, 40, 60, 80 and 100% CH3CN by kinetic investigations of their nucleophilic substitution reactions with a series of 2-methoxy-3-X-5-nitrothiophenes 1a–e (X = NO2, CN, COCH3, CO2CH3, CONH2). Depending on the resulting solvation medium, the N parameters range from 15.72 to 18.32 on the empirical nucleophilicity scale of Mayr. The nucleophilicity parameters N first evolve linearly with the content of acetonitrile, up to 60% CH3CN by volume, but become non-linear for higher amounts. We designed a general computational protocol to investigate the solvent effect at the atomistic scale. The nucleophilicity in solvent mixtures was evaluated by combining classical molecular dynamics (MD) simulations of solvated pyrrolidine and a few density functional theory (DFT) calculations of the Parr nucleophilicity. The theoretical pyrrolidine nucleophilicities 1/ω obtained in various CH3OH/CH3CN mixtures are in excellent agreement with the measured Mayr nucleophilicity (N) parameters. Analyses of the molecular dynamics trajectories reveal that the decrease of the nucleophilicity in methanol-rich mixtures arises predominantly from the solvation of the pyrrolidine by methanol molecules through strong hydrogen bonds. Last, we propose a simple model that accurately reproduces and predicts the experimentally obtained nucleophilicity values. Introduction The course of organic chemical reactions is efficiently modelled through the concepts of "electrophiles" and "nucleophiles".1 These important concepts were quickly embraced by the chemical community, and empirical approaches were proposed by many authors both experimentally 2-4 and theoretically 5,6 to obtain quantitative scales. In particular, the nucleophilicity parameters N and electrophilicity parameters E, as introduced by the Mayr group, allow the rate constant between an electrophile and a nucleophile to be predicted quantitatively on the basis of the linear free energy relationship log k (20 °C) = s(N + E) (eqn (1), ref. 4-10), in which k corresponds to the second-order rate constant, N and s are nucleophile-specific parameters, and E is the electrophilicity parameter. While the electrophilicity of a molecule is almost independent of the solvent, the nucleophilicity can be strongly influenced by it.11 For example, the nucleophilicity parameter N of 4-(dimethylamino)pyridine jumps from 13.19 in water to 15.80 in dichloromethane.12 Being quantitative, Mayr's approach (eqn (1)) is an attractive entry point to study the general influence of solvent mixtures, which are classically employed in preparative synthetic chemistry, and was recently used for the synthesis of nanolympiadane.13
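As an illustration of how eqn (1) is used in practice, the short sketch below first predicts a rate constant from assumed N, s and E values and then recovers N and s from a linear fit of log k against E. All numbers are made up for demonstration and are not data from this study.

```python
import numpy as np

def predicted_k(N, s, E):
    """Second-order rate constant from Mayr's relationship log k(20 degC) = s(N + E)."""
    return 10.0 ** (s * (N + E))

# Hypothetical example values (not taken from this work).
print(predicted_k(N=17.0, s=0.65, E=-17.5))

# Recovering N and s from measured rate constants: log k = s*E + s*N,
# so a linear fit of log k versus E gives slope = s and intercept = s*N.
E = np.array([-19.0, -18.0, -17.0, -16.0, -15.5])      # electrophilicity parameters
logk = 0.65 * (17.0 + E) + np.random.default_rng(2).normal(0, 0.02, E.size)
slope, intercept = np.polyfit(E, logk, 1)
s_fit, N_fit = slope, intercept / slope
print(f"s = {s_fit:.2f}, N = {N_fit:.2f}")
```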
Previous studies have shown that the dependence of the nucleophilicity parameter N on the solvent mixture can be greatly non-linear,14 so that it appears difficult to predict the rate of a nucleophile-electrophile reaction in a new solvent or a new solvent mixture. Therefore, a careful study of solvent influence appeared to us indispensable for providing further advances to the general effort of scaling and predicting nucleophilic and electrophilic parameters. As a starting point to develop a general approach, we chose to study the electrophile/nucleophile reactivity of thiophenes and pyrrolidine in two miscible solvents (acetonitrile, methanol), prepared in mixtures at various ratios. On the one hand, pyrrolidine and imidazolidinone derivatives are the object of current interest as organocatalysts used in contemporary organic synthesis.15 Their intrinsic reactivity has been examined in relation with bioactive compounds of natural or synthetic origin, which incorporate such nucleophilic scaffolds.16-19 Recently, the Mayr group investigated the kinetics of the reactions of several pyrrolidine derivatives and integrated the resulting constitutive parameters of pyrrolidines into the so-called Mayr nucleophilicity scale.16 On the other hand, thiophenes are heterocycles of industrial interest,20,21 and substituted thiophenes are scaffolds found within pharmaceuticals, conductive polymers, photochromic molecular switches, liquid crystals, etc.20 Regarding the general studies on nucleophilic and electrophilic parameters of heterocycles, the Boubaker group investigated the reactions of the 2-methoxy-3-X-5-nitrothiophene electrophiles 1a-e (where X = NO2, CN, COCH3, CO2CH3 and CONH2) with a variety of N-based nucleophiles, in different solvents at 20 °C.22,23 The derived second-order rate constants have been employed to determine the reactivity parameters of this series of thiophenes 1a-e, according to Mayr's linear free energy relationship. The electrophilicity parameter E values of 1a-e have been found to range from −19.09 to −15.26, going, respectively, from 1e (X = CONH2), the least reactive thiophene derivative, to 1a (X = NO2), the most reactive specimen. This set of results and collected data has been of high interest for the understanding of both the individual intrinsic organic reactivity and the mutual interaction of pyrrolidine and thiophene derivatives.17,22,23 The present work aimed at determining the nucleophile-specific parameters N and s of pyrrolidine, as a model, in CH3OH/CH3CN mixtures. The changes in the nucleophilicity parameter N values as a function of acetonitrile content were used to investigate the overall solvent-mixture effect at the atomistic level by combining experimental and theoretical approaches. As the methanol and acetonitrile dielectric constants are rather close, a continuum model is not sufficient to properly describe the solvation effect on the nucleophilicity. To appreciate the effects of the solvent on the nucleophilicity parameters of pyrrolidine, we performed classical molecular dynamics simulations of a pyrrolidine molecule solvated by a mixture of methanol and acetonitrile solvent molecules. Our studies confirmed the strong dependence of the nucleophilicity (and thus the reaction rate) on the nature of the solvation medium. In the present case it comes from a gradual desolvation of pyrrolidine by methanol, involving four to zero molecules, when the acetonitrile amount is gradually increased.
Our study provides a relevant model for the more systematic inclusion of varied solvents into the reactivity studies of valuable electrophile/nucleophile organic reagents such as the present heteroaromatics. Materials The thiophenes 1a-e were prepared as previously described.24-26 Pyrrolidine was received from a commercial source and distilled before use. Acetonitrile and methanol (HPLC grade, >99.9%) were used without further purification. Kinetic measurements The kinetic study was performed using a spectrophotometer (UV-1650 Shimadzu) equipped with a Peltier temperature controller (TCC-240 A), which is able to keep the temperature constant within 0.1 K. The reactions were carried out under pseudo-first-order conditions in which the pyrrolidine concentration (6 × 10−4 to 8 × 10−1 mol L−1) was at least 20 times greater than the substrate concentration (about 3 × 10−5 mol L−1). The measured first-order rate constants, kobsd, together with detailed reaction conditions, are summarized in Tables S1-S6 in the ESI.† Reproducible kinetic constants were measured from several consistent experimental runs within a ±3-5% standard deviation (Table S7 in the ESI†). Theoretical models and computational details All quantum calculations were performed in the framework of density functional theory (DFT) using the Gaussian 09 software package.27 Energies and forces were computed with the B3LYP functional 28,29 empirically corrected for dispersion effects using the D3 scheme of Grimme with the Becke-Johnson damping.30 The bulk effect of the solvent was described using a polarizable continuum model as implemented in Gaussian 09. Geometry optimizations without symmetry constraints and the corresponding frequency calculations were conducted with the 6-311+G(d,p) basis set for all atoms.31 The methanol and acetonitrile dielectric constants are fairly close, with εr = 32.613 for methanol and εr = 35.688 for acetonitrile; thus a continuum model is not sufficient to properly describe the solvation effect on the nucleophilicity. To more precisely appreciate the effects of the solvent on the nucleophilicity parameters of pyrrolidine, we performed molecular dynamics simulations with the Amber simulation suite of a pyrrolidine molecule explicitly solvated by a mixture of methanol and acetonitrile solvent molecules.32 Pure methanol, pure acetonitrile and mixtures containing 9%, 20%, 40%, 50%, 60%, 80% and 90% of acetonitrile in methanol were simulated. The force field parameters and the technical details of the simulation are given in ESI Fig. S2 and S3.† For each simulation, we computed the distribution of methanol molecules in the first solvation sphere of pyrrolidine. All calculation results are grouped in Tables S11 and S12 in the ESI.† These simulations showed that the first layer of solvation contains between zero and four methanol molecules. We computed the Parr nucleophilicity indices ω−1 of pyrrolidine surrounded by n = 0, 1, 2, 3 and 4 explicit methanol molecules, while the bulk solvation effects were described by a continuum. According to Parr,5 the nucleophilicity index is the inverse of the electrophilicity index ω, which can be estimated as ω = μ²/(2η), in which μ is the chemical potential and η the global hardness 6 of pyrrolidine. Both can be evaluated in the context of DFT from the computed electronic structure. The pyrrolidine nucleophilicities are given in Table 1. These values are averaged using the distribution of methanol molecules in the first solvation layer of pyrrolidine obtained in the MD simulations (see ESI Table S10, Fig. S4 and S5†).
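A minimal sketch of the index evaluation is given below. It assumes the common frontier-orbital approximations μ ≈ (εHOMO + εLUMO)/2 and η ≈ εLUMO − εHOMO, which are a standard way of evaluating these conceptual-DFT quantities but are stated here as an assumption rather than as the exact protocol of the original work; the orbital energies are placeholders.

```python
def parr_indices(e_homo, e_lumo):
    """Electrophilicity omega = mu^2 / (2*eta) and nucleophilicity 1/omega from
    frontier orbital energies (eV), using the usual frontier-orbital approximations
    (an assumption; the paper's exact evaluation scheme is not reproduced here)."""
    mu = 0.5 * (e_homo + e_lumo)      # chemical potential
    eta = e_lumo - e_homo             # global hardness
    omega = mu ** 2 / (2.0 * eta)     # Parr electrophilicity index
    return omega, 1.0 / omega

# Placeholder orbital energies (eV), purely for illustration.
omega, nucleophilicity = parr_indices(e_homo=-6.0, e_lumo=0.5)
print(f"omega = {omega:.3f} eV, 1/omega = {nucleophilicity:.3f} eV^-1")
```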
Kinetics of the reactions of thiophenes 1a-e with pyrrolidine The kinetics of the reactions of the series of thiophenes 1a-e, employed as reference electrophiles, with pyrrolidine (Scheme 1) are collected in Table 2.22,23 The reactions of 2-methoxy-3-X-5-nitrothiophenes 1a-e with pyrrolidine were followed spectrophotometrically by monitoring the formation of the products 3a-e at their absorption maxima (432-540 nm). An illustrative example is given in Fig. 1, which shows the set of UV-visible absorption spectra describing the progressive conversion of 1e to the product 3e (X = CONH2) resulting from the nucleophilic addition of pyrrolidine. In all experiments, only one relaxation time was observed when the substitution products 3a-e were generated in the presence of a large excess of pyrrolidine. Typical results are given in Fig. 2a. The observed first-order rate constants kobsd were determined for the substitution process depicted in Scheme 1.22,23,33-35 The second-order rate constants k1 (mol−1 L s−1) for all reactions, which are listed in Table 3, can be readily derived from eqn (5). This behaviour might first seem in contradiction with the fact that this reaction is known to proceed via a dipolar transition state (TS I, Scheme 2), which should be more stabilized in methanolic solutions.36,37 However, as will be shown later, it results from the progressive desolvation of pyrrolidine with decreasing CH3OH content upon addition of CH3CN.38,39 A comparable effect has been reported by Mayr and co-workers in the reactions of the benzhydrylium cation (Scheme 2) with OH− in H2O-CH3CN mixtures.40 We further examined the effect of the solvation medium on the nucleophilic reactivity in terms of changes in the solvent nucleophilicity parameter N1 in CH3OH/CH3CN mixtures. We employed the linear dependence of the solvent nucleophilicity N1 on the % CH3CN volume to interpolate the values of N1 for 40% and 60% of acetonitrile in methanol from the values reported in the literature 41 (see ESI†). N1 values are collected in Table 4. In contrast to the non-linear relationships observed between log k1 and the %CH3CN volume, excellent correlation coefficients (R2 > 0.9931) were found in all systems when the log k1 values were plotted versus the solvent nucleophilicity parameter N1,41 for solutions having N1 > 6.04 (i.e. less than 80% vol. CH3CN, Fig. 4). This result implies that the effect of solvent nucleophilicity is practically independent of the electronic nature of the substituent X. Most importantly, these linear correlations can be used to predict the unknown second-order rate constants k1 for reactions of thiophenes 1a-e with pyrrolidine in a given CH3OH/CH3CN mixture. This is particularly the case in CH3OH/CH3CN mixtures containing 9, 33, 50 and 67% CH3CN, where the values of k1 for all thiophenes 1a-e have thus been obtained by extrapolation of the corresponding N1 data reported by Minegishi and co-workers 41 (see Table S8 in the ESI†). Mayr's nucleophilicity (N) parameters of pyrrolidine in methanolic acetonitrile solutions The electrophilicity parameters E for the five thiophenes 1a-e 22,23 were employed to quantify the nucleophilicity parameters N and s of pyrrolidine in methanol/acetonitrile solutions. Using the data given in Tables 1 and 2, plots of log k1 versus the electrophilicity parameters E of 1a-e have been constructed.
As can be seen in Fig. 5, linear correlations were obtained, which yield the nucleophilicity parameters N and s, as defined by the linear free energy relationship (1). The nucleophilicity parameters N and s for pyrrolidine in various methanol/acetonitrile mixtures at 20 °C are reported in Table 5. The N value of 18.32 experimentally found in pure acetonitrile is consistent with the value of 18.64 reported by Kanzian and co-workers in the same solvent.42 It appears that adding acetonitrile to a methanolic solution of pyrrolidine resulted in only small changes in the nucleophile-specific parameter s. The transfer from CH3CN to CH3OH corresponds to a relatively important decrease in the nucleophilicity of pyrrolidine (Table 5, ΔN = 2.60). This decrease of the nucleophilicity values in the CH3OH-rich solvation medium could be attributed to a stronger solvation of pyrrolidine in this protic solvent, as confirmed below. Literature data indicated a nucleophilicity parameter for piperidine of 17.35 in CH3CN,42 whereas piperidine appears more nucleophilic in pure water (N = 18.13).43 For morpholine, the N value is essentially the same in pure CH3CN and H2O.42 The changes in the nucleophilicity parameter of pyrrolidine observed herein were also examined as a function of the variation in the volume fraction of acetonitrile (%CH3CN). As seen from Fig. 6, N correlates linearly with the vol% of CH3CN up to approximately 60% vol. CH3CN, following eqn (6) with R2 = 0.9972. The validity of eqn (6) is supported by comparing the experimental N value in the mixture of 91% CH3OH/9% CH3CN, as reported by Phan et al.,44 with the value interpolated according to eqn (6), as shown in Table 5. There is an excellent agreement between the experimentally determined and calculated values, with the average absolute error being only 0.12 N units. Theoretical nucleophilicity (1/ω) parameters of pyrrolidine in methanolic mixtures with acetonitrile In the simulation process the number of methanol molecules n varies, and we found that on average one pyrrolidine molecule is surrounded by two methanol molecules in pure methanol. This number decreases to zero when the acetonitrile content of the mixture is increased (Fig. S5†). Table 6 collects the theoretical nucleophilicity 1/ω of pyrrolidine in various methanolic CH3CN solutions, together with the Mayr nucleophilicity (N) parameters measured in this study. A very good correlation was found, R2 = 0.9988, between the theoretical 1/ω and experimental N values (Fig. 7),45-47 which is defined by eqn (7). Chamorro and co-workers have observed a linear correlation between the Mayr nucleophilicity (N) parameters for a series of primary and secondary amines and the theoretical nucleophilicity index (ω−1) obtained at the DFT level.45 The validity of eqn (7) was checked by estimating the nucleophilicity parameters N of pyrrolidine in CH3OH/CH3CN mixtures containing 9 and 91% CH3CN that had been experimentally measured.44 The detailed results are listed in Table 6. These results clearly show that the predicted values of N are in excellent agreement with the experimental data. Modelling the influence of solvation Our simulations reveal that the dependence of the pyrrolidine nucleophilicity on the composition of the methanol/acetonitrile mixture mainly comes from the fact that methanol forms strong hydrogen bonds with the nitrogen lone pair and the NH bond of the pyrrolidine.
Solvated molecules are much less nucleophilic: the bare pyrrolidine nucleophilicity is 18.32, while it is only 15.67 for the bis-methanol pyrrolidine and 15.01 for the pyrrolidine solvated by five methanol molecules. The proportion of pyrrolidine solvated by 0, 1, 2, 3 and 4 methanol molecules is shown in Fig. 8 as a function of the acetonitrile molar fraction. As the amount of methanol in the mixture is decreased, the amount of bare pyrrolidine molecules increases and so does the observed nucleophilicity. In particular, for molar fractions greater than 60%, the bare pyrrolidine is more abundant than the bis-methanol pyrrolidine. As a consequence, the average pyrrolidine nucleophilicity parameter N rises steeply, following the amount of unsolvated pyrrolidine. It is interesting to note that the solvated molecules have similar nucleophilicities, so that we could model the solvation process as a two-state system between free pyrrolidine (Pyr) and methanol-bound pyrrolidine (M-Pyr) (eqn (8)). The equilibrium constant of eqn (8) was used to weight the two states, whose nucleophilicities differ by (N(Pyr) − N(M-Pyr)), in the modelled nucleophilicity Nmod (eqn (9)). Results from Nmod are reported in Table 6. The very good agreement with the experimental values confirmed that the influence of the solvent on nucleophilicity can be modelled by our two-state approach. Conclusions The reactions of 2-methoxy-3-X-5-nitrothiophenes 1a-e with pyrrolidine were studied kinetically by UV-visible spectroscopy in CH3OH, CH3CN and various CH3OH/CH3CN vol/vol mixtures at 20 °C. The nucleophilicity parameters N and s for pyrrolidine in CH3OH/CH3CN mixtures of different compositions, as defined by the Mayr equation log k (20 °C) = s(E + N), have been determined and found to cover a range from 15.72 to 18.32. We have shown that the nucleophilicity parameters N for pyrrolidine are linearly related to the amount of acetonitrile (in % CH3CN volume) for ratios less than 60%. Finally, with the N and s values determined, it becomes possible to make predictions of second-order rate constants for reactions of pyrrolidine with other electrophiles of known E parameters. The nucleophilicity index 1/ω for pyrrolidine in solvent mixtures of acetonitrile and methanol has been determined by DFT resampling of classical MD simulations. The theoretical values agree with the experimental ones, and the experimental dependence of the nucleophilicity parameters of pyrrolidine on the methanol/acetonitrile solvent mixture was confirmed and supported by the simulations. The net dependence of the nucleophilicity parameter of pyrrolidine on the solvent is mainly explained by a gradual methanol desolvation when the amount of acetonitrile is increased. The correlation between the experimental and theoretical values obtained herein indicates that this theoretical approach could be further developed to predict the nucleophilicity parameters of other nucleophiles in solvent mixtures and ultimately provide an automated tool for tuning electrophilic-nucleophilic reactivity as a function of a precise binary solvent mixture. Further extension might concern ternary solvent mixtures. Conflicts of interest There are no conflicts to declare.
2020-08-06T09:04:14.626Z
2020-08-03T00:00:00.000
{ "year": 2020, "sha1": "2a896dd1408fbb141216c9d840f2896bbe3e2e9c", "oa_license": "CCBY", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2020/ra/d0ra06324j", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "baa80d5965b62b5bcf065f722c78ebbf0af6c472", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry" ] }
261125692
pes2o/s2orc
v3-fos-license
High-throughput phenotyping for non-destructive estimation of soybean fresh biomass using a machine learning model and temporal UAV data Background Biomass accumulation as a growth indicator can be significant in achieving high and stable soybean yields. More robust genotypes have a better potential for exploiting available resources such as water or sunlight. Biomass data implemented as a new trait in soybean breeding programs could be beneficial in the selection of varieties that are more competitive against weeds and have better radiation use efficiency. The standard techniques for biomass determination are invasive, inefficient, and restricted to one-time point per plot. Machine learning models (MLMs) based on the multispectral (MS) images were created so as to overcome these issues and provide a non-destructive, fast, and accurate tool for in-season estimation of soybean fresh biomass (FB). The MS photos were taken during two growing seasons of 10 soybean varieties, using six-sensor digital camera mounted on the unmanned aerial vehicle (UAV). For model calibration, canopy cover (CC), plant height (PH), and 31 vegetation index (VI) were extracted from the images and used as predictors in the random forest (RF) and partial least squares regression (PLSR) algorithm. To create a more efficient model, highly correlated VIs were excluded and only the triangular greenness index (TGI) and green chlorophyll index (GCI) remained. Results More precise results with a lower mean absolute error (MAE) were obtained with RF (MAE = 0.17 kg/m2) compared to the PLSR (MAE = 0.20 kg/m2). High accuracy in the prediction of soybean FB was achieved using only four predictors (CC, PH and two VIs). The selected model was additionally tested in a two-year trial on an independent set of soybean genotypes in drought simulation environments. The results showed that soybean grown under drought conditions accumulated less biomass than the control, which was expected due to the limited resources. Conclusion The research proved that soybean FB could be successfully predicted using UAV photos and MLM. The filtration of highly correlated variables reduced the final number of predictors, improving the efficiency of remote biomass estimation. The additional testing conducted in the independent environment proved that model is capable to distinguish different values of soybean FB as a consequence of drought. Assessed variability in FB indicates the robustness and effectiveness of the proposed model, as a novel tool for the non-destructive estimation of soybean FB. Supplementary Information The online version contains supplementary material available at 10.1186/s13007-023-01054-6. 
Background High-throughput phenotyping (HTP) allows that important information about cultivated plants be gathered in a faster and less expensive way than by standard techniques (manual measurements) used so far [1].Great potential of HTP for in-season biomass estimation is not utilized in soybean, one of the most important oil crops in the world with annual global production > 350 million tons [2].Biomass estimation offers comprehensive overview of a plant potential to use available recourses.This is crucial in low-input sustainable farming, where nutrients are mainly limited to natural resources and water supply depends on precipitation.Also, more robust genotypes intercept higher amount of light and have higher photosynthetic rates, which can increase seed yield.Low biomass accumulation in soybean leads to yield decrease due to reduced light interception and low radiation use efficiency [3].In addition to the effect it has on yield, the amount and rate of accumulated biomass are also significant in weed management, which is one of the main tasks of a successful agricultural production, especially in organic farming systems [4].Rapid canopy closure is also an important factor of weed suppression [5].Therefore, biomass accumulation rate, as a measurement of growth, can contribute to a more efficient selection of superior genotypes during the breeding process.Having a tool for soybean biomass estimation at any time during the growing period would provide a new insight into crop development, thereby giving breeders continuous information on biomass accumulation.The determination of many important phenotypic traits within the current breeding programs is restricted due to complexity and lack of adequate tools [6].A typical example of such a trait is plant biomass.Standard techniques for biomass assessment are destructive as well as time and labor-consuming.These difficulties can be overcome by using different remote sensing platforms and technologies. Devices such as satellites or various UAVs use different sensors for collecting information about the Earth's surface.Although satellites are becoming advanced and more precise, they cannot provide sufficient spatial resolution while data quality can be reduced due to clouds or other atmospheric factors [7].On the other hand, the use of UAVs in agricultural research is growing year by year [8].Photos taken by a digital camera mounted on UAV have a better resolution than those taken by satellites, thus ensuring greater data accuracy.The main advantage of using UAVs is in high-throughput i.e. 
a large amount of data collected in a short period of time [9,10].Hyperspectral (HS) imaging has the biggest potential for the assessment of different plant traits because HS cameras collect data from the entire spectrum [11].The price of HS cameras is still too high compared to others such as RGB and multispectral (MS) sensors.These two are the most common camera types used in recording spectral reflectance of plant material in agriculture.Plant spectral reflectance is measured through values of digital numbers (DNs) which are used for the calculation of vegetation indices (VIs) linked to different plant traits [12][13][14][15][16][17].Many VIs were used for biomass prediction in wheat [18], corn [19], and white oat [20].In addition to the spectral reflectance sensed, data related to photogrammetry can also be obtained by processing UAV images through the structure from motion (SfM).Photogrammetry represents a technique that allows measuring the dimensions of the object on the image [21].As a result of SfM processing of the UAV images a digital elevation model (DEM) can be obtained based on the point cloud data [22].Other SfM derivatives such as the digital surface model (DSM) stand for surface elevation where the surface represents the plant canopy.In an agricultural cropping land, along DSM there is also a digital terrain model (DTM) which represents the ground-based elevation of bare soil.Based on the difference between DSM and DTM, it is possible to calculate plant height (PH) [23][24][25].Another useful tool for PH determination is a light-detection and ranging system (LiDAR).This system measures the distance between the sensor and the object based on the time that passes when the laser signal goes from LiDAR to the object and back [26].This method was used for PH assessment in sugarcane [27], wheat [28], or soybean and maize [29].Even do LiDAR provides accurate measurements of plant traits in a non-destructive way there are some restrictions for wider application.Relatively high prices, complexity in data acquisition and data extraction are the main disadvantages of this system compared to the UAV camera sensors [30].As an important indicator of plant growth, canopy cover (CC) can also be extracted as the percentage of plant pixels on an image [31].Obtaining reliable information about PH and CC can be a useful tool in an estimation of accumulated biomass during the growing period. 
Prediction of crop biomass can be based on a simple linear regression with different VIs [32], PH [33], or CC [34].However, combining these individual data (predictors) can result in more accurate predictions.A study on barley shows that more precise results can be achieved by using both VIs and PH as opposed to using only reflectance-based data [35].For plant trait estimation, the power of combining different predictors can be utilized through machine learning models (MLMs) and mathematical algorithms.One of the most popular algorithms based on classification and regression is random forest (RF) [36].The RF was used for the prediction of corn [37] and soybean yields [38], leaf area index (LAI) of alfalfa, Rhodes grass, corn, carrots [39], and soybean [40], as well as for determination of wheat biomass [41].For big data analysis, the most suitable methods are based on deep learning (DL) techniques [42].These techniques use the artificial neural network (ANN), an ML algorithm designed to solve nonlinear relationships.The huge potential of DL models was utilized in many plant species for yield prediction [43], abiotic or biotic stress detection [44], leaf counting, and mutant classification [45].Still, the RF was reported to be a better choice for biomass prediction compared to DL models, ANN and support vector regression (SVR), which was proven in wheat biomass estimation where RF outperformed ANN and SVR [46].Partial least square regression (PLSR), another wellknown MLM, is not as straightforward as RF.With PLSR, the dependent variable (crop trait) and independent variables (predictors) are explained through principal components.Different crop traits such as wheat yield [47] or yield and biomass of potato were successfully estimated using PLSR [48]. The aim of the study was to find an optimal model for the prediction of soybean fresh biomass (FB) using MLMs, VIs, CC and PH obtained by analyzing UAV digital images of different wavelength ranges.Moreover, the research objective was to estimate the robustness of the final model by conducting biomass screening of diverse soybean germplasm grown in different environments. Experimental site The experiments were conducted in 2020 and 2021, at the experimental plots of the Institute of Field and Vegetable Crops in Rimski šančevi, Novi Sad, Serbia.For model calibration, ten soybean varieties with different maturity groups were sown in five replications in 2020 and four replications in 2021.The genotypes were sown on chernozem soil, characterized by homogeneous texture and well-aggregated structure.In total, 90 calibration plots, each 8 m 2 were used for calibration of the biomass prediction model. 
UAV and data acquisition The UAV used in the research was P4M (DJI, Shenzhen, China) equipped with six 1/2.9"CMOS sensors covering specific wavelength ranges.Five sensors were monochromatic B (450 ± 16 nm), G (560 ± 16 nm), R (650 ± 16 nm), Red edge (RE) (730 ± 16 nm), and Near-infrared (NIR) (840 ± 26 nm) and one RGB camera.Each sensor has a resolution of 2.08 megapixels (MP) with a focal length of 5.74 mm.During the flights, the UAV was connected with a real-time kinematic (RTK) system, a global navigation satellite system (GNSS) receiver, which provides centimeter-level precision of photographed objects on the image.The UAV is equipped with an integrated sun sensor which automatically corrects the reflectance based on the sunlight and secures data consistency in different weather conditions.Nine flights were conducted in this study, and the date for each of the flights was recorded in growing degree days (GDDs) calculated after emergence.In 2020, the photos were taken at 274, 413, 650, 745, and 1016 GDDs, and in 2021 at 215, 492, 747, and 1130 GDDs.Every flight was performed on a cloud-free, sunny day, wind speed didn't exceed 10 m/s.The UAV shooting angle was course aligned, and image capture mode was set to equal time intervals while the front and side overlap of the images was 80%.Mission planning was done with DJI GS PRO software (DJI, Shenzhen, China).Flights were done at a 60 m altitude which secured the ground resolution of 3.17 cm/pixel.Subsequently, after each flight, soybean FB was harvested and measured with a specialized Wintersteiger combine.No significant amount of biomass was left on the field. Data processing After collecting the photos of soybean genotypes, a dense cloud, digital elevation model (DEM) and orthomosaic were created using the Agisoft PhotoScan software (version 1.7.2.from 2021) build by Agisoft LLC from St. Petersburg, Russia (http://www.agisoft.com).The PH of the soybean genotypes was calculated using DEM (DSM and DTM), while CC and DNs were obtained from the orthomosaic for each plot.Each channel of the MS image was exported and analyzed using Fiji Is Just ImageJ (FIJI) software (version 1.51.from 2018), the open-source image analysis software [49].First the region of interest (ROI) was created for every plot and then the masking procedure was performed to filter out the soil pixels (Fig. 1).The masking procedure was done in FIJI using the Create Mask function that eliminates soil and ensures that only plant pixels remain for further analysis.Following the masking procedure, the average DN value of each channel was exported together with the CC which was calculated as the percentage of plant pixels filling each ROI. Based on the collected data, 31 different VI were calculated for each plot.The description and formula of each index are given in Additional file 1. 
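As a rough illustration of the per-plot extraction step described above, the sketch below computes canopy cover as the fraction of plant pixels in a ROI mask, plant height as the mean DSM − DTM difference over those pixels, and one vegetation index from the plot-mean channel values. The arrays are hypothetical stand-ins for the exported rasters, and the GCI formula (NIR/G − 1) is the commonly used definition, assumed here rather than quoted from Additional file 1.

```python
import numpy as np

rng = np.random.default_rng(3)
h, w = 120, 80                               # hypothetical ROI size in pixels
plant_mask = rng.random((h, w)) > 0.3        # True where a pixel belongs to the plants
nir = rng.random((h, w))
green = rng.random((h, w)) + 0.1
dsm = rng.random((h, w)) + 80.0              # surface elevation (m), placeholder values
dtm = np.full((h, w), 80.0)                  # terrain elevation (m), placeholder values

# Canopy cover: share of plant pixels within the ROI, in percent.
cc = plant_mask.mean() * 100.0

# Plant height: mean DSM - DTM difference over plant pixels.
ph = (dsm - dtm)[plant_mask].mean()

# Green chlorophyll index from plot-mean channel values (assumed GCI = NIR/G - 1).
gci = nir[plant_mask].mean() / green[plant_mask].mean() - 1.0

print(f"CC = {cc:.1f}%, PH = {ph:.2f} m, GCI = {gci:.2f}")
```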
High collinearity among many VIs was expected as they were obtained by combining the DNs of five spectral channels in different formulas.The relationship between VIs was analyzed so as to simplify the calculation within the MLM algorithm and ensure that collinearity does not disturb prediction quality.The correlation matrix was created in R with the ggcorrmat function from the ggstatsplot package [50].This function creates the matrix plot based on the values of the correlation coefficient and marks nonsignificant relationships (p < 0.05).Highly correlated VIs were excluded using the findCorrelation function with ± 0.8 set as the cutoff value of pair-wise absolute correlation within the Caret package [51].The function compares the mean absolute correlation (MAC) of two highly correlated VIs and eliminates the variable with the largest MAC. In both years, the PH of each plot was determined using the elevation models (DSM and DTM).The difference between DSM and DTM represents PH (Fig. 2).The average value of PH for each plot was used for further analysis. Machine learning models (MLMs) The RF and PLSR were used to predict the soybean FB using CC, PH, and VIs.In the RF algorithm, the number of trees (ntree) was chosen by the lowest value of root mean square error (RMSE) while the number of predictors evaluated at each node (mtry) was selected based on the cross-validation.The RF was applied for the prediction of soybean FB using the train function from the Caret package with mtry = 3 and ntree = 500 set as optimal tuning parameters.A leave group out cross-validation (LGOCV) was implemented in the model where the harvested biomass and predictors from 70% of the randomly selected plots were used as a training set, while the remaining 30% were used as a test set.The LGOCV procedure was repeated 10 times, generating new training and test partitions in each cycle.The model performance was rated based on the average result of 10 predictions obtained through the LGOCV.For FB estimation with PLSR, the Caret package was also used including LGOCV approach.In the PLSR, an optimal number of latent variables was chosen based on the lowest value of RMSE in the estimation of a dependent variable (FB). Prediction results of the models were evaluated through the coefficient of determination (R 2 ), mean absolute error (MAE), and RMSE calculated with the following formulas: where x i represents actual value of the trait for the i-th plot, x -average of all actual values, y i− predicted value of the trait for the i-th plot, ȳ -average of all predicted values and N-total plot number.The better performing MLM was chosen for further analysis. Evaluation of selected MLM on independent set of soybean genotypes The proposed MLM for biomass prediction was additionally validated by performing temporal screening of soybean genotypes grown in different environments. Development of the model for soybean FB estimation As a result of plant growth, the values of CC, PH, and FB for soybean calibration plots increased as the season progressed in both years (Table 1). The results showed that soybean plants were taller in 2020 than in 2021, while almost a maximum of CC was achieved in both years.Still, in 2021 the CC remained high even at 1130 GDDs, while in 2020, it dropped over 10% between the last two measurements.The increase in biomass accumulation was also noticeable in both years. 
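Looping back to the model-evaluation metrics referenced in the Methods (whose formulas are not reproduced above), the sketch below shows the standard definitions presumably intended, with R² taken here as the squared Pearson correlation between actual and predicted values, consistent with the definitions of both means in the text; this choice and the example arrays are assumptions for illustration only.

```python
import numpy as np

def evaluation_metrics(actual, predicted):
    """MAE, RMSE and R2 (squared Pearson correlation) between actual and predicted values."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    mae = np.mean(np.abs(actual - predicted))
    rmse = np.sqrt(np.mean((actual - predicted) ** 2))
    r2 = np.corrcoef(actual, predicted)[0, 1] ** 2
    return mae, rmse, r2

# Arbitrary example values (kg/m^2), for illustration only.
actual = [0.8, 1.4, 2.1, 2.9, 3.5]
predicted = [0.9, 1.3, 2.3, 2.7, 3.6]
print(evaluation_metrics(actual, predicted))
```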
Correlation matrix revealed a strong relationship between many VIs, non-significant correlation coefficients (p < 0.05) were marked with cross (Fig. 3). More than 62% (289/465) of all correlations were higher than the cut-off value set for pair-wise absolute correlation (± 0.8).Highly correlated variables were reduced by leaving only TGI and GCI as unique predictors.The relation between these two VIs was weak (r = ̶ 0.1), while at the same time they showed the lowest MAC values when compared to the other VIs.For example, CIVE and GCI, or TGI and GNDVI, were not correlated (r = 0).Still, CIVE and GNDVI were excluded due to having a higher MAC than TGI and GCI. Biomass prediction models Performance of the MLMs with different sets of predictors was analyzed by comparing the actual and the predicted values of soybean FB (Fig. 4). There was a negligible difference in soybean FB when CC and PH were combined in models with all VIs, as opposed to being combined with TGI and GCI only.Further comparison of the MLMs was based on the results of the cross-validation for soybean FB prediction with a reduced set of predictors. Both models showed good accuracy, as suggested by the high value of R 2 and low RMSE and MAE.Nevertheless, RF provided slightly better results.The difference between the actual and predicted biomass was observed in the results of both models.Discrepancies were present in positive and negative directions.A lower standard deviation (SD) between the actual and the predicted values was obtained with the RF (SD = 0.25 kg/m 2 ) model as compared to the PLSR (SD = 0.27 kg/m 2 ) (Fig. 5). Even though the RF and PLSR have different mathematical algorithms, they used the same variables to predict the soybean FB.The importance of each predictor variable was extracted from the prediction models with the varImp function in the caret package and shown through the relative levels (0-100) (Fig. 6). In both models, the predictor variables ranking was the same, the most important being PH, GCI, CC, and TGI, respectively.In the PLSR, the PH, GCI, and CC had a decisive impact in making the predictions, while the influence of TGI was marginal.The same situation with the TGI was found in the RF, where the CC also had a minor effect on the model performance.The GCI had a lower prediction effect in the RF as compared to the PLSR, while PH maintained its dominant position and was marked as a crucial variable.Correlation between four selected predictors was also calculated (Fig. 7). Significantly high correlation was observed between PH, CC and GCI while no significance was observed between TGI and three other predictor variables. Temporal screening of soybean FB using proposed RF model As a better-performing model, RF was subjected to further evaluation on an independent set of soybean genotypes grown in different environments.Biomass of 206 soybean genotypes was estimated within the ED, LD, EC, and LC trial in 2020 and 2021 (Fig. 8). The results showed that the amount of accumulated organic matter increased throughout the season.Soybean FB was low at 230 GDDs, while the increase was noticeable after 390 GDDs.This pattern was present in all trials for both years.The unfavorable conditions did not affect soybean at the beginning of the growing period as much as it did later, when the negative effect of drought led to a decrease in biomass accumulation for genotypes in ED and LD as compared to the control. 
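Before turning to the Discussion, the correlation-based predictor filtering used above (a caret::findCorrelation-style procedure that drops, from each highly correlated pair, the variable with the larger mean absolute correlation) can be sketched in Python as a simplified analogue; the predictor table and cutoff below are hypothetical.

```python
import numpy as np
import pandas as pd

def drop_highly_correlated(df, cutoff=0.8):
    """Drop, from each pair with |r| > cutoff, the column with the larger mean
    absolute correlation (a simplified analogue of caret::findCorrelation)."""
    corr = df.corr().abs()
    mac = corr.mean()                      # mean absolute correlation per variable
    to_drop = set()
    cols = list(corr.columns)
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):
            a, b = cols[i], cols[j]
            if a in to_drop or b in to_drop:
                continue
            if corr.loc[a, b] > cutoff:
                to_drop.add(a if mac[a] >= mac[b] else b)
    return df.drop(columns=sorted(to_drop)), sorted(to_drop)

# Hypothetical predictor table with two nearly redundant indices.
rng = np.random.default_rng(4)
x = rng.normal(size=200)
df = pd.DataFrame({"VI1": x, "VI2": x + rng.normal(0, 0.05, 200), "TGI": rng.normal(size=200)})
kept, dropped = drop_highly_correlated(df)
print("dropped:", dropped)
```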
Discussion
The initial set of predictors contained PH, CC, and 31 VIs. Many VIs were highly correlated due to their similar origin, which enabled a reduction of their number without losing significant power in explaining variability. Further analysis suggested that only TGI and GCI could be used instead of the complete set of reflectance-based predictors. Although the number of VIs was reduced, it was necessary to check whether this reduction would affect the predictive ability of the RF and PLSR models. The results showed that the reduction of VIs did not have a significant effect on the performance of the MLMs. This indicates that soybean FB can be successfully predicted without using a large set of highly correlated VIs, which makes the entire process more efficient. A smaller error in the prediction of FB was achieved with the RF model, thereby securing its advantage over PLSR as a novel tool for remote estimation of soybean biomass.

Even though the RF model had high accuracy, there were some differences between the predicted and the measured values of FB (RMSE = 0.26 kg/m²). The explanation for these discrepancies may lie in the predictors themselves. The PH stood out as the crucial variable with the highest influence on the model's performance. Some soybean genotypes are prone to lodging, which can disturb the determination of PH in such a way that the PH estimated from the analysis of DTM and DSM is lower than the actual one. As a result of the disturbance caused by lodging, imprecision in the remotely estimated biomass can be expected [52].

Fig. 6 The importance of each predictor variable in (a) random forest (RF) and (b) partial least squares regression (PLSR) model for prediction of soybean fresh biomass (FB). Canopy cover - CC, plant height - PH, triangular greenness index - TGI, and green chlorophyll index - GCI

In the prediction of soybean FB, the effect of the two remaining VIs was not the same. The TGI had a low impact on the model's performance, while in the case of GCI the results were somewhat different. The significance of GCI lies in its composition, as it combines the G channel with the particularly important NIR channel, which has been associated with biomass in previous studies [53]. On the other hand, the TGI is based on plant reflectance of light from the visible part of the spectrum. This part of the spectrum does not penetrate plant tissue as deeply as NIR does [54, 55], which could be the reason why the GCI was far more important for biomass prediction than TGI. This is especially significant in later development stages, when plants achieve high CC and PH with many interlaced leaves. Adding more NIR-based VIs would not improve model accuracy because all of them were highly correlated (r > 0.8) with the GCI already used. Finally, the model's precision can be disturbed by weeds if their presence leads to an increase in CC. Weed leaf tissue can also change the spectral reflectance of the plots, which can hamper precise determination of the selected VIs. In that case, the model could overestimate biomass for that plot. This can be expected with PLSR, where CC and GCI have a great influence. In the proposed RF model, the importance of GCI and especially CC is lower, ensuring more stable predictions of soybean FB regardless of canopy density.
In the barley research, biomass prediction relied on the correlation with PH, resulting in R² = 0.72 [56]. Plant height was also obtained using SfM and the DSM in calculations of the crop surface model (CSM), thus ensuring higher accuracy compared to ground measurements. Biomass estimation based solely on PH simplifies the process, but it can also be very challenging. In the barley study, no other variables could compensate for the shortcomings of a single predictor prone to a lodging error such as PH.

Fig. 7 Correlation matrix for assessing the relationship between four selected predictor variables, plant height (PH), canopy cover (CC), green chlorophyll index (GCI), and triangular greenness index (TGI). Crosses on the plot indicate correlation coefficients that are not significant at p < 0.05

On the other hand, utilization of the VIs, PH, and CC as a combined set of predictors requires more computing, but it provides better results in biomass prediction. For instance, tomato fresh shoot mass was predicted several times during the growing season using the RF algorithm and a set of combined predictors [57]. Six VIs, including the G−R index [58], NDRE and different variations of NDVI, were used alongside other predictors such as plant area, length, width or PH. This approach secured high accuracy in the estimation of tomato biomass, with R² = 0.88. The VIs (especially the G−R index) were very important in making the prediction, but the crucial variable was plant area. Contrary to the study on soybean, where PH was the main predictor, for tomato the PH was not significant. This could be related to the different growth types of soybean and tomato. Unlike soybean, which has a predominantly vertical growth, tomatoes mainly spread their shoots horizontally, causing a smaller PH variation between the plants. The PH was also highly ranked in the prediction of maize biomass using different MLMs [59]. In the research on maize, the initial set of predictors containing different VIs, PH and volumetric parameters was reduced because some variables were highly correlated. This was done to eliminate the possible impact of multicollinearity on the model's predictive ability, in the same way as it was done in the prediction of soybean FB. All of the above suggests that the selection of a proper set of predictor variables, customized to the shoot architecture of a given crop, is crucial for successful biomass estimation. This was given special attention in the proposed RF algorithm for the determination of soybean FB, which therefore resulted in high accuracy (R² = 0.94).
The quality of the proposed model was additionally tested in a two-year trial where FB was predicted for 206 soybean genotypes.The results obtained on the evaluation plots in 2020 and 2021 showed that both early and late genotypes from ED and LD trials accumulated less FB than the control.The reduction in biomass as the consequence of unfavorable conditions was expected based on the previous studies on soybean and corn [60,61].The negative impact of drought on soybean development manifested itself through reduced PH and LAI [62].Moreover, the water deficit changed spectral reflectance of the soybean plants, causing an increase in visible light reflection, while at the same time NIR dropped [63].This means that values of TGI and GCI were also modified, as they directly depend on canopy reflectance.The proposed model for remote estimation of soybean FB recognized changes in predictor variables, which was proven by the variability of predicted results in the testing environments.The difference in the estimated FB was especially noticeable after 390 GDDs for both the early and late genotypes, because soybean is less sensitive to drought in the early development stages while the greatest damage occurs after flowering, i.e. the generative phase [64].Furthermore, according to the results, the late genotypes accumulated more biomass than the early ones, which was anticipated as a result of the longer growing period.All of the above confirms the robustness of the proposed RF model, based on its ability to distinguish different values of soybean FB not just between different environments (EC/ED and LC/ LD), but also within each environment. Conclusions The estimation of soybean FB was tested using reflectance and photogrammetry based predictors in two different MLMs, including RF and PLSR.More precise results were obtained using the RF model with only four predictors.The PH and GCI stood out as the most important variables respectively, while the impact of CC and TGI was minor.The proposed MLM showed that the soybean FB can be accurately estimated (R 2 = 0.94) using a small set of predictors.The reduction in the number of VIs from the initial 31 to just two did not affect model performance.This information can be very useful for the future studies aiming to reduce unnecessary calculations.The robustness of the MLM was demonstrated on divergent soybean germplasm in drought simulation environments, where the predictor variables were affected by the unfavorable growing conditions.Based on these changes, the model adjusted the results of soybean FB prediction between as well as within environments.The results of additional testing proved that the model is able to adapt to different conditions which is important for gathering significant information about biomass accumulation and soybean development.This information could be utilized by practice and science.The farmers may benefit from it by knowing the current status of the crop biomass production and managing the production processes based on the obtained results.On the other hand, scientists from different fields could find this prediction model interesting as a tool for the enhancement of their research.For example, the proposed HTP model can provide a huge amount of data that can be used as new traits in soybean breeding programs.This can result in a more efficient and more precise selection of the best varieties.Still, there is a possibility for additional adjustments of the model.The observed significant correlation between PH, GCI, and CC 
indicates that further improvement of the proposed model could be achieved through enhanced predictor selection. To realize this idea, additional testing of the model (new environments and germplasm) and a literature survey will be continued in the future to collect as much information as possible. The acquired data will be used to assess the possibilities for enhancement of the proposed model for soybean FB prediction.

Fig. 3 Correlation matrix for assessing the relationship between vegetation indices (VIs). Crosses on the plot indicate correlation coefficients that are not significant at p < 0.05

Fig. 5 Box plots of differences between actual and predicted fresh soybean biomass (kg/m²) obtained with random forest (RF) and partial least squares regression (PLSR) with the reduced set of predictors. The error bars show the 95% confidence interval while the line inside the boxes represents the median value

Fig. 8 Temporal change in biomass accumulation for soybean genotypes grown in different environments based on the results of the proposed random forest (RF) model. The line in each box plot stands for the median value. The error bars represent the 95% confidence interval and outliers are represented by dots. Early group grown in drought simulation - ED, late group grown in drought simulation - LD, early control - EC and late control - LC, growing degree days after emergence - GDD (°C)

The genotypes were divided into early (117) and late (89) based on the maturity group. They were sown on 8 m² plots on sandy soil with low fertility and poor water retention to simulate a drought environment. As control groups, an identical set of genotypes was sown on a carbonate chernozem, a soil with favorable conditions, good water retention, and optimal soil fertility (Additional file 2). Trials for biomass screening were labeled as ED (early group grown under drought simulation), LD (late group grown under drought simulation), EC (early control), and LC (late control). For the 206 soybean genotypes within the ED, LD, EC, and LC trials, the necessary predictors were calculated from the UAV images collected at four time points during 2020 and 2021. In both years, the trials were photographed at approximately 230, 390, 706, and 917 GDDs, with a difference of ±1.8-21.6 GDDs. The FB of genotypes in the ED, LD, EC, and LC trials was estimated at each time point.
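As a hypothetical illustration of how the calibrated model would be applied in such a screening, the fitted RF object from the earlier sketch could be used to predict FB for every evaluation plot and summarize it by trial and time point; eval_plots and its columns (trial, gdd) are assumed names, not the authors' code.

```r
# Predict FB for the evaluation plots with the previously fitted RF model.
eval_plots$FB_pred <- predict(rf_fit, newdata = eval_plots[, predictors])

# Median predicted FB per trial (ED, LD, EC, LC) and GDD time point,
# mirroring the box-plot summary shown in Fig. 8.
aggregate(FB_pred ~ trial + gdd, data = eval_plots, FUN = median)
```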
2023-08-26T13:28:54.490Z
2023-08-26T00:00:00.000
{ "year": 2023, "sha1": "fc8da11d9b582ac31f62e4c9a5ca3e840c96cf73", "oa_license": "CCBY", "oa_url": "https://plantmethods.biomedcentral.com/counter/pdf/10.1186/s13007-023-01054-6", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c9ddf87cea7ac56a57664b08abdd1afe14279242", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Computer Science", "Environmental Science", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
52061439
pes2o/s2orc
v3-fos-license
Growth response and serum biochemical parameters of starter broiler chickens fed toasted African yam bean (Sphenostylis stenocarpa) seed meal with enzyme supplementation

This study was conducted using one hundred and fifty day-old Marshal® strain broiler chicks to determine the growth response and serum biochemical parameters of birds fed toasted African yam bean seed meal (AYBSM) with enzyme supplementation. Five diets were formulated such that African yam bean seed meal replaced soybean meal at 0, 5, 10, 15 and 20 % for diets 1, 2, 3, 4, and 5, respectively. Diets 2, 3, 4 and 5 were enzyme supplemented at the rate of 100 g/1000 kg of feed. The birds were randomly assigned to the five dietary treatment groups in a completely randomized design (CRD) experiment. Each treatment had thirty chickens and was replicated three times with ten chickens per replicate. The parameters measured were body weight, feed intake, feed conversion ratio, weight gain, mortality, total protein, albumin, globulin, creatinine, cholesterol, aspartate aminotransferase, alkaline phosphatase and alanine aminotransferase. The experiment lasted for five weeks. Results revealed that the test diet groups performed better than the control (0 %), with values ranging between 929.77 g and 1216.51 g for the test diets while the control had 800.00 g. The 20 % AYBSM diet had the lowest feed conversion ratio (FCR) of 2.03, while the control (0 % AYBSM) gave the highest FCR of 2.97. There were no significant differences (p>0.05) in the serum biochemical parameters analyzed except albumin and total protein, which were significantly (p<0.05) affected. The chickens fed the control diet had the highest albumin level (2.07 g/dl) while chickens placed on 20 % AYBSM gave the lowest value of 1.67 g/dl; a similar trend was observed for total protein. Toasted African yam bean seed meal with enzyme supplementation improved growth performance at the broiler starter phase and had no negative effect on the serum biochemical parameters measured. It is concluded that toasted AYBSM can be used up to 20 % in broiler starter diets.

INTRODUCTION
Under-utilized legumes have tremendous potential for commercial exploitation but remain ignored (Bhag, 1992). They are important sources of dietary proteins for both humans and animals, but the presence of relatively high concentrations of toxins (trypsin inhibitors, phytic acid, saponin, oxalate, etc.) affects their nutritional quality, inhibits a number of enzymes and binds nutrients, making them unavailable (Nowacki, 1980). These effects limit the use of raw African yam bean seeds in monogastric feed, although various processing techniques tend to reduce the anti-nutritional factor content of the seed. African yam bean (Sphenostylis stenocarpa) is one of the edible, underutilized grain legumes widely cultivated in Africa that is used in human and animal nutrition (Eke, 2002). African yam bean seed is rich in protein (19.5 %), carbohydrates (62.6 %), fat (2.5 %), vitamins and minerals (Iwuoha and Eke, 1996). The protein is made up of over 32 % essential amino acids, with lysine and leucine being predominant (Onyenekwe et al., 2000). It is therefore worthwhile to make use of this lesser-known and under-utilized legume in feed preparations, especially in developing countries, for animal consumption.
Research on the use of exogenous enzymes in broiler diets has been ongoing for decades. Most commercial enzyme products currently available have more than one enzyme activity, whereas fewer products have only one substrate specificity. A wide range of endogenous proteases are synthesized and released in the gastrointestinal tract of the bird, and these are generally considered sufficient to optimize feed protein utilization (Nir et al., 1993; Le Huerou-Luron et al., 1993). However, based on protein digestibility values reported in the literature, it appears that valuable amounts of protein pass through the gastrointestinal tract without being completely digested (Lemme et al., 2004). On the other hand, research done with products with only one protease activity allows for easier interpretation; however, literature on this type of study with chickens is scarce.

The use of blood examination as a way of assessing the health status of animals has been documented (Muhammed et al., 2000; Owoyele et al., 2003). This is because it plays a vital role in describing the physiological, nutritional and pathological status of organisms (Muhammed et al., 2000). Such examinations range from giving the level of blood constituents to detecting ailments or disorders. It has been reported that biochemical changes resulting from toxins have effects on haematological parameters (John, 1998; Kamish, 2003). The effect of differently processed underutilized legumes on the haematological parameters of broilers has been evaluated (Muhammed et al., 2000; Owoyele et al., 2003), but there is little or no information on the effect of toasting and enzyme supplementation of AYBSM on the growth response and haematological parameters in broiler production. Therefore, this study was directed toward investigating the growth response and serum biochemical parameters of starter broilers fed graded levels of toasted African yam bean seed meal with enzyme supplementation.

Experimental site
The experiment was carried out at the Poultry Unit, Teaching and Research Farm, Oyo State College of Agriculture and Technology, Igbo-ora, Nigeria (latitude 7°15'N and longitude 3°30'E), with an average annual rainfall of 1278 mm and an average temperature of 27 ºC.

Procurement and processing of test ingredient(s)
African yam bean seeds were procured at Bodija market, Ibadan North local government, Ibadan, Oyo State, Nigeria. The beans were sorted to remove extraneous materials such as stones, dirt and other seeds. The brown AYB seeds were toasted using a frying pan measuring 74.5 cm x 38 cm placed over fire for 3-5 minutes, with stirring at regular intervals to ensure even distribution of heat, until the beans were crispy. Thereafter, the crispy beans were milled using a hammer mill machine, and the product is referred to as African yam bean seed meal.

Feed formulation
Five experimental diets were formulated using maize as the source of energy and soybean meal and toasted African yam bean seed meal (TAYBSM) as the sources of plant protein. The crude protein content of the diets ranged from 22.58 % in diet 1 to 22.55 % in diet 5, while the metabolizable energy ranged from 3002.92 kcal/kg in diet 1 to 3077.97 kcal/kg in diet 5. All treatments contained toasted African yam bean seed meal with enzyme supplementation except the control diet (D1/0 %); diets D2, D3, D4 and D5 contained 5 %, 10 %, 15 %, and 20 %, respectively, as shown in Table I. Protease enzyme was used as the supplement and included at the rate of 100 g/tonne.
Experimental birds and management
A total of 150 day-old Marshal® strain broiler chicks purchased from a reputable hatchery in Nigeria were used for the experiment. The birds were divided into five treatment groups of 30 birds each. The treatments were replicated thrice at the rate of 10 birds per replicate in a completely randomized design (CRD). Each replicate was housed in a floor pen (0.6 m by 0.3 m) with wood shavings as litter material and equipped with feeders and drinkers. Experimental diets were supplied ad libitum. The vaccination and medication schedule applicable to the experimental location was strictly adhered to.

Data collection
Data on daily feed intake were determined by subtracting the leftovers from the feed offered to the birds. The weight changes were calculated as the difference in weight from the previous week. Feed conversion ratio was calculated as the ratio of the feed intake to the weight gain.

At 5 weeks of the feeding trial, blood samples were collected from one bird per replicate between 8 and 10 am through puncture of the wing vein, by means of a sterilized disposable needle and syringe, into bottles free of any anticoagulant. The blood was centrifuged at 1000 r.p.m. for 10 minutes and the serum was separated and analyzed. Serum protein, albumin and globulin were analyzed colorimetrically using diagnostic reagent kits (Renal Diagnusztikal Reagents, Keszlet, Hungary) based on total protein (Wechelbaun, 1964), albumin and globulin (Doumas and Briggs, 1972) and cholesterol (Roschian et al., 1974), respectively. Activities of serum aspartate aminotransferase (AST), alanine aminotransferase (ALT), and alkaline phosphatase (ALP) were determined colorimetrically (Reitman and Frankel, 1957).

Statistical analysis
Data collected were subjected to analysis of variance (ANOVA) using SPSS version 21 (IBM Corp, 2012). The means were separated using Duncan's multiple range test in the same software.

RESULTS AND DISCUSSION
The performance characteristics of the starter broilers are presented in Table II. There were no significant (p>0.05) differences in initial weight and mortality, but significant differences (p<0.05) were observed in final weight, weight gain, feed intake and feed conversion ratio. The highest (p<0.05) final weight (1216.33 g) and weight gain (1216.51 g) were obtained in birds fed 5 % African yam bean seed meal with enzyme supplementation, while the lowest value of 800.77 g was recorded in birds fed the control diet 1 (0 % toasted African yam bean seed meal without enzyme supplementation). The lowest weight gain, with a value of 759.77 g, was obtained in the birds placed on the control diet. The lower weight gain noticed in birds placed on the control diet (0 % AYBSM) compared with the test diets suggests that soybean meal, which is the only plant protein source in the control, had a lower amino acid profile than toasted African yam bean seed meal. African yam bean seed meal has been reported to contain a similar or better essential amino acid profile than soybean (Kine, 1991).
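As a point of reference for the statistical analysis described above, the following is a minimal R sketch of the same kind of analysis (the study itself used SPSS); the data frame perf and its columns (diet, feed_intake, weight_gain) are hypothetical, and Duncan's test is taken from the agricolae package rather than the authors' software.

```r
library(agricolae)

# Hypothetical data frame 'perf': one row per replicate pen with the dietary
# treatment (D1-D5) and the measured responses.
perf$diet <- factor(perf$diet)
perf$fcr  <- perf$feed_intake / perf$weight_gain   # feed conversion ratio

# One-way ANOVA for the completely randomized design (CRD).
fit <- aov(weight_gain ~ diet, data = perf)
summary(fit)

# Duncan's multiple range test to separate treatment means at p < 0.05.
duncan.test(fit, "diet", alpha = 0.05, console = TRUE)
```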
The highest feed conversion ratio (FCR) was obtained in the birds fed the control diet (2.97), while the lowest (2.03) was recorded in the birds placed on 20 % toasted African yam bean seed meal with enzyme supplementation. The lowest value of 2.03 recorded in the birds placed on diet 5 could be a result of the low fibre content of diet 5 (Table I), leading to better conversion of the diet to flesh, while the highest FCR of 2.97 obtained from the control birds could be attributed to the higher fibre content of the control diet as well as the effects of anti-nutritional factors in reducing protein metabolism and the absorption and utilization of minerals. D'Mello (1991) reported that trypsin inhibitor adversely influenced the utilization of protein in rats by increasing the cysteine and methionine requirement. Udedibie and Carlini (1998) are of the view that even minute amounts of residual haemagglutinin in processed jack bean could constitute a problem for birds on an ad libitum feeding system, and that this anti-nutritional factor is resistant to proteolytic digestion and therefore tends to accumulate in the animals by binding to the intestinal wall, thereby reducing the efficiency of feed utilization.

The results of the serum biochemistry of the starter broilers are presented in Table III. There were no significant differences (p>0.05) among the serum biochemical parameters analyzed (globulin, creatinine, cholesterol, aspartate aminotransferase, alkaline phosphatase, alanine aminotransferase) except albumin and total protein, which were significantly (p<0.05) affected. Birds fed the control diet had the highest albumin level (2.07 g/dl) while birds placed on 20 % inclusion of toasted African yam bean seed meal with enzyme supplementation had the lowest value (1.67 g/dl). For total protein, birds fed the control diet had the highest value of 5.25 g/dl, and the lowest value of 3.80 g/dl was recorded in birds placed on the 15 % inclusion level of toasted African yam bean seed meal with enzyme supplementation. The result of this study is in agreement with Lawrence et al. (2012), who reported that albumin and total protein differed when growing rabbits were fed cocoa bean shell supplemented with enzyme. However, the results obtained in this study for all the dietary treatments fall within the normal range for broilers as reported by Mitruka and Rawnsley (1977).

CONCLUSION
It was observed that toasted African yam bean seed meal with enzyme supplementation resulted in a synergistic improvement in the performance of the broilers at the starter phase. This study also confirmed that toasting and enzyme supplementation of African yam bean seed meal had no deleterious effect on the serum biochemistry of starter broilers.

RECOMMENDATION
Based on the outcome of this study, a 20 % inclusion level of toasted African yam bean seed meal with enzyme supplementation can be used to feed starter broilers without adverse effects on performance and serum biochemistry. Higher inclusion levels of toasted African yam bean seed meal with enzyme supplementation should be investigated in broiler starter diets.

bcd = means within the same row with different superscripts differ significantly (p<0.05); FCR = feed conversion ratio.
2018-08-22T00:20:33.581Z
2016-06-11T00:00:00.000
{ "year": 2016, "sha1": "aa84732102a5ff84c10406ad36eb3d19a86f2aef", "oa_license": "CCBYSA", "oa_url": "http://www.uco.es/servicios/ucopress/az/index.php/az/article/download/480/457", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "aa84732102a5ff84c10406ad36eb3d19a86f2aef", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
19135603
pes2o/s2orc
v3-fos-license
A 69-dB SNR 89-μW AGC for Multifrequency Signal Processing Based on Peak-Statistical Algorithm and Judgment Logic

A novel peak-statistical algorithm and judgment logic (PSJ) for the multifrequency signal application of an Autogain Control Loop (AGC) in a hearing aid SoC is proposed in this paper. Under multifrequency signal conditions, it tracks the amplitude changes and compiles statistics on them. Finally, a judgment is made and the circuit gain is controlled precisely. The AGC circuit is implemented in a 0.13 μm 1P8M CMOS mixed-signal technology. Meanwhile, a low-power circuit topology and noise-optimizing techniques are adopted to improve the signal-to-noise ratio (SNR) of the circuit. Under a 1 V voltage supply, the peak SNR achieves 69.2 dB and the total harmonic distortion (THD) is 65.3 dB with 89 μW power consumption.

Introduction
In the twenty-first century, as China gradually becomes an aging society, hearing damage has become a major disease that is prevalent in elderly people. In this group, more than 90% of patients can compensate for and repair their hearing ability by wearing a hearing aid device. A digital hearing aid SoC includes an Autogain Control Loop (AGC), ADC, digital signal processing platform (DSP), power amplifier driver, and EEPROM [1][2][3]. As the most significant part of the SoC, the performance and power consumption of the AGC are decisive for the SoC. Traditional AGCs use an analog peak envelope-detection method, which limits the resolution and dynamic range of gain adjustment. It also consumes extra power and worsens the noise performance [2][3][4]. In more complex situations, the microphone output contains a number of sine waves of different frequencies, and their amplitudes may be located in any interval relative to Vpeak and Vact. If the peak envelope-detection algorithm is still used, the AGC may process only the signal of a certain frequency and cause the others to malfunction. Hence, a new algorithm is needed for more accurate gain control. According to the characteristics of the audio signal, a novel peak-statistical algorithm and judgment logic (PSJ) for the AGC is proposed in this paper. It extracts the amplitude statistics of most of the signal and adjusts the AGC gain precisely, which ensures that most of the signal remains in the best receiving range. The algorithm is realized by a mixed-signal design method. As a logic circuit, the PSJ circuit consumes less power compared with its analog counterpart. Also, by adopting a low-power topology, the power consumption of the circuit is optimized and high SNR and THD performance is achieved.

Basis of Proposed PSJ
Previous research proves that the sound pressure level (SPL) received by human beings ranges from 60 dB SPL to 100 dB SPL, and the safe sound level is about 80 dB SPL to 90 dB SPL [5][6][7][8][9]. So the gain of the AGC in a hearing aid SoC must be adjustable over a wide range of about 40 dB. The proposed AGC structure with PSJ is shown in Figure 1. The programmable gain amplifier (PGA) varies from −6 dB to 30 dB with a 3-dB step. Firstly, the microphone signal is acquired and amplified (or attenuated) by the PGA. Then, the PGA output is compared with the peak threshold (Vpeak) and the active threshold (Vact) through two comparators, and a 2-bit digital code is generated. When the signal is greater than Vact and less than Vpeak, the signal is considered to be in the best reception range of the human ear. After that, the 2-bit digital code is analyzed by the PSJ logic.
Finally, a 13-bit control code is transmitted to the PGA by the PSJ logic, which modifies the gain of the PGA until the signal amplitude is located between Vpeak and Vact.

The motivation of the new algorithm is as follows. Taking a single-frequency input signal as an example, Vpeak and Vact are the two comparators' peak and active amplitude thresholds of the AGC, respectively. Since the AGC's sampling clock frequency is much higher than the sound signal frequency, even if the input (Vin) is greater than Vpeak, logic "11" and logic "00" (or "10", "01") coexist in the comparator output, as shown in Figure 2(a). If simple digital peak-detection logic is still used, it is not able to accurately judge the correct magnitude range of the input signal, which may cause the AGC to operate erroneously. The same condition occurs when Vin is between Vpeak and Vact, as shown in Figure 2(b). Only in the case of Figure 2(c), when Vin is less than Vact, does the comparator output "00", and the AGC can amplify the input. Therefore, in the cases of Figures 2(a) and 2(b), statistics of the comparator output must be taken. Only when the number of matching logic codes at the comparator output is greater than the minimum number required for a judgment can the PSJ logic output a precise control code for PGA gain adjustment. In a more complex multifrequency signal environment, such as in Figure 3, some of the signals may be located between Vpeak and Vact while others are greater than Vpeak. In this case, the situation is much the same as for the single-frequency input, since the comparator output again contains a number of different logic values. Therefore, a statistical analysis of the comparator output is also needed to complete the precise control of the majority of the signals.

The statistical properties of the output signal are derived as follows. When the AGC input is a sine wave of amplitude A, the probability used by the PSJ is calculated as

P(a,b) = \int_{a}^{b} f(V)\,dV    (1)

and, for a given voltage V (|V| < A), the probability density of a sine wave is

f(V) = \frac{1}{\pi\sqrt{A^{2}-V^{2}}}    (2)

Then, the signal probability between a and b is

P(a,b) = \frac{1}{\pi}\left[\sin^{-1}\!\left(\frac{b}{A}\right)-\sin^{-1}\!\left(\frac{a}{A}\right)\right]    (3)

Based on \cos\left(\sin^{-1}(x/A)\right) = \sqrt{A^{2}-x^{2}}/A, the cosine function is applied to both sides of (3); then

\cos\left(\pi P(a,b)\right) = \frac{\sqrt{A^{2}-b^{2}}\,\sqrt{A^{2}-a^{2}}+ab}{A^{2}}    (4)

When the voltage supply is 1 V and the full input swing of the Sigma-Delta modulator is 400 mV, the optimized modulator input is about 250 mV in order to avoid output distortion. So Vpeak is set as VDD/4. In the case of a noisy input signal, the amplitude may be as large as 70-80 mV. To maintain a certain dynamic range, the modulator input amplitude should not be less than 90 mV, which means Vact is 9VDD/100. Assuming VDD (A = VDD) is the power supply voltage, the PGA optimum output is between 9VDD/100 (Vact) and VDD/4 (Vpeak). Then, putting a = 9VDD/100 and b = VDD/4 into (3), we get that cos(πP(a,b)) is 0.98682. So the signal probability P(a,b) is 0.05175 in the range 9VDD/100 to VDD/4. Through simulation, to obtain precise signal information, the output needs to be sampled 600 times per gain-adjustment cycle. On the basis of the probability calculation, at least 31 sampling points must show the corresponding characteristic to judge the amplitude range of the input signal correctly. This means that when more than 31 sampling points output logic "11", the signal amplitude is considered greater than Vpeak; similarly, when more than 31 sampling points output logic "01", the signal amplitude is known to be between Vpeak and Vact. Finally, according to these statistical results, the PSJ logic outputs a 13-bit control code to the PGA for gain adjustment.
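As a quick numerical check of the derivation above, the following R snippet (not part of the original design flow) reproduces the quoted dwell probability and the 31-of-600 judgment threshold, assuming a full-swing sine wave with amplitude A = VDD = 1 V.

```r
# Dwell probability of a sine wave between Vact and Vpeak, using Eqs. (3)-(4).
A <- 1.0              # assumed amplitude, A = VDD = 1 V
a <- 0.09 * A         # Vact  = 9*VDD/100
b <- A / 4            # Vpeak = VDD/4

p_between <- (asin(b / A) - asin(a / A)) / pi
cos_check <- (sqrt(A^2 - b^2) * sqrt(A^2 - a^2) + a * b) / A^2

round(p_between, 5)      # ~0.05175, as quoted in the text
round(cos_check, 5)      # ~0.98682, i.e. cos(pi * p_between)
round(600 * p_between)   # ~31 expected samples per 600-sample cycle
```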
Circuit Design
Under a low voltage supply such as 1 V, OTA design is always a great challenge [10][11][12][13]. In this paper, the OTA circuit is as shown in Figure 4. It is a fully differential Class-AB, Miller-compensated structure, including the main amplifier and a second-stage common-mode feedback (CMFB) amplifier. The main advantage of the two-stage structure is the combination of high gain and wide output swing, and it also has better noise performance. The first-stage amplifier is a five-transistor structure with PMOS input transistors. The load transistors are split into two pairs of transistors, NM1a, NM2a and NM1b, NM2b. NM1a and NM2a are biased at a fixed voltage, while NM1b and NM2b are driven by the CMFB amplifier. This scheme effectively reduces the load capacitance of the CMFB amplifier, which is conducive to improving the OTA phase margin and frequency characteristics. Also, the input PMOS transistors minimize the equivalent input noise due to their low flicker noise, which means that for the same output amplitude level the SNR is improved [6][7][8][9][10]. In the second stage, the Class-AB composition optimizes the output quiescent current while achieving a larger output swing. In this paper, a second-stage CMFB amplifier is adopted for the OTA design. In this OTA, the first and second poles are at the drains of PM2/NM2 and PMC2/NMC2, respectively, and they are also very close together. Since the third pole is located at the drain of PM5/NM5, the zero-setting resistors R1/R2 are important for offsetting the influence of the third pole to ensure a sufficient phase margin. Finally, the designed OTA demonstrates 83-dB DC gain, 29-MHz unity-gain bandwidth, and 61-degree phase margin for a 2-pF load.

For a high-resolution comparator design, a dynamic comparator with a preamplifier is used in this paper [14]. The main comparator is a regenerative structure that consists of a PMOS input pair (M2/M3), a CMOS latch (M4-M9, M13/M14), and an SR latch. NMOS transistor M11 is the reset switch, and M10 and M12, as assistant transistors, are used to reduce the impact of charge injection when M11 is on. The comparator circuit is shown in Figure 5.

Measurement Result
Fabricated in an SMIC 0.13 μm 1P8M mixed-signal CMOS technology, the AGC chip microphotograph is shown in Figure 6; the chip occupies an area of 1.127 mm². With a 1 kHz, 200 mVpp sine-wave input and a 1 MHz sampling clock, the AGC output was tested in the time domain. As Figure 7 shows, the output is adjusted twice to 100 mVpp while Vpeak and Vact are set to 125 mV and 50 mV, respectively. Figure 8 shows the measured AGC output FFT spectrum with a 2 kHz sinusoidal input and 200 mV output. In this condition, the measured peak SNR is 69.2 dB and the peak THD is 65.3 dB. The results indicate that, although it operates under the low supply voltage of 1 V, the AGC achieves a high output dynamic range. Noise performance, an important characteristic of the AGC, was also measured. The AGC closed loop was broken and the PGA gain was set manually. Under a 2 kHz frequency and 4 mVpp input, Figure 9 indicates the relationship between PGA gain and noise floor. The result shows that the minimum and maximum noise floors are −111 dBm and −73 dBm, respectively, which are low enough to maintain the high resolution of the AGC. Figure 10 shows the measured SNR as a function of the input signal at 2 kHz. When the output amplitude is larger than 300 mV, the SNR reduces. The proposed AGC performance is summarized in Table 1, and the performance comparison of our AGC with previous works is shown in Table 2. Our AGC shows the highest THD of 65.3 dB under a 1 V power supply.
However, compared with [9], the improvement in peak THD is limited. The circuit in [9] used a traditional analog envelope detector for gain adjustment, so it can only realize simple peak-amplitude control and cannot be used under high-resolution multifrequency input conditions.

Conclusion
In this paper, a novel PSJ feedback logic for the multifrequency signal application of an AGC is proposed. Under complex audio signal conditions, compared with the traditional analog peak envelope-detection method, it can adjust the gain precisely in 3-dB steps over a total 36-dB dynamic range. The PSJ algorithm is implemented in an AGC in a 0.13 μm 1P8M CMOS mixed-signal process. Under a 1 V power supply, the peak SNR of the AGC achieves 69.2 dB and the total power is 89 μW with a 1.127 mm² core area. The measurement results satisfy the low-power, high-performance requirements of hearing aid SoC applications.
2018-04-03T02:44:27.628Z
2016-12-01T00:00:00.000
{ "year": 2016, "sha1": "697519e780ad3a3a4a1f44e973259e7ab82903eb", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/archive/2016/6708253.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "187456acae3b568b7b2443fc0f5cd623c4c68fd4", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Engineering", "Computer Science" ] }
157694965
pes2o/s2orc
v3-fos-license
Climate humanitarian visa: international migration opportunities as post-disaster humanitarian intervention With global action being outpaced by climate change impacts, communities in climate-vulnerable countries are at increased risk of incurring climate-induced losses and damages. In the last few years, disasters from extreme weather events such as typhoons have increased and have breached records, with typhoon Haiyan being the strongest ever typhoon to make landfall. Such an event solicited global compassion and altruism where Canada and the USA, apart from doling out traditional humanitarian aid, also offered immigration relief opportunities to typhoon Haiyan victims who have familial connections to their residents. Drawing from these immigration relief interventions, this paper uses a sociopolitical approach in proposing a climate humanitarian visa that would be offered to climate change victims on the basis of transnational family networks and skilled labor. Noting that several countries such as in Europe have demographic deficits and labor shortages, such a scheme would benefit both climate change victims and receiving countries. To counter the risk of selective compassion against economically trapped populations, potential receiving countries could provide skills upgrading geared toward addressing their labor shortages through their existing development programs. While migration is only one strategy in a spectrum of responses to climate change impacts, a climate humanitarian visa could provide climate change victims a legal choice for mobility while invoking altruism, hospitality, and compassion from potential receiving countries, whether or not they historically cause climate change. Introduction In the last few years, the frequency and intensity of disasters from extreme weather events have been increasing. Despite the uncertainty of scientific findings on the attribution of changes in cyclone frequency and intensity to anthropogenic climate change or humaninduced forcing, the Intergovernmental Panel on Climate Change (IPCC) states with high confidence that increasing exposure of people and economic assets to weather-and climaterelated disasters has been the major cause of long-term increases in economic losses (IPCC 2012;Knutson et al. 2010;Hoegh-Guldberg et al. 2018). Apart from causing damage to life and property, extreme weather events can displace people and also inflict intangible losses such as post-traumatic stress disorders or loss of sense of belonging ("solastalgia") that follows after a disruption of familiar surroundings (Landoy et al. 2015;Tschakert et al. 2013). Generally, the less developed countries are more affected by extreme events compared to developed countries, and some countries like Haiti, the Philippines, and Pakistan are even repeatedly affected by such catastrophes (Kreft et al. 2015;Eckstein et al. 2018). In this case where sudden-onset events become commonplace, adaptation must not only be reactive (i.e., an immediate response) but proactive (i.e., long-term planned response) as well (Biagini et al. 2014). One common reactive adaptation measure to sudden-onset climate change is migration. People migrate for complex reasons, and one push factor may be environmental threats (Black et al. 2011). 
While it is not a stand-alone solution to climate change impacts, migration has the potential to be a proactive adaptation measure and may be the most effective way to allow people to diversify income and build resilience where environmental change threatens livelihoods (Black et al. 2011;Hillmann et al. 2015). Despite having received increased attention in recent years, migration as an adaptation option has yet to be fully mainstreamed in multilateral climate policy. There is yet to be a legally binding migration treaty that climate change victims can invoke, and current legal and political solution approaches are inadequate and unable to provide suitable, just, and in-time assistance to the most climate-vulnerable groups (WBGU 2018). In this paper, the feasibility of immigration opportunities as humanitarian aid for victims of extreme weather events is explored as an add-on to traditional humanitarian aid doled out by states after disasters. Inspired by the USA and Canada immigration relief measures for typhoon Haiyan victims in the Philippines, this paper uses a sociopolitical approach in constructing an international humanitarian migration model, which could be captured as a "climate visa" for those who were affected and have survived extreme weather events. In order to construct this migration model, a stocktaking of recent multilateral initiatives and mandates involving migration and climate change was conducted, and landmark immigration relief measures such as those of the USA and Canada were reviewed through a sociological lens. The following section highlights relevant and recent climate change and migration multilateral mandates, which are important first steps in mainstreaming migration in climate policy. Section 3 reviews the sociology of immigration relief measures of the USA and Canada for victims of extreme weather events. Sections 4 and 5 introduces the climate visa as a potential humanitarian aid of concerned states for victims of extreme weather events while Section 6 provides an illustration of how to operationalize the climate visa. This paper concludes in Section 7 with a call for ambition and compassion from states, especially those historically responsible for climate change. Climate change and migration mandates The year 2010 was pivotal for the topic of migration in the international climate regime. The Cancun Agreements (Decision 1/CP.16), adopted by the Conference of Parties (COP) in COP 16, contained a decision to establish the Cancun Adaptation Framework and invited parties to enhance adaptation action by initiating several activities, one of which is to undertake "measures to enhance understanding, coordination and cooperation with regard to climate change induced displacement, migration and planned relocation, where appropriate, at the national, regional and international levels" (UNFCCC 2010). This invitation to parties takes into account common but differentiated responsibilities (CBDR) and respective capabilities as well as specific national and regional development priorities, objectives, and circumstances (UNFCCC 2010). The COP decisions after the Cancun Agreements and before the Paris Agreement (Decision 1/CP.21) did not contain any explicit reference to "climate change induced displacement, migration or planned relocation" (UNFCCC 2010). The other COP decisions, however, sought to address loss and damage arising from adverse impacts of climate change, including both extreme events and slow onset events. 
In COP 18, Decision 3/CP.18 acknowledged the need for further work to advance understanding and expertise on loss and damage including climate change impacts on migration, displacement, and human mobility. In COP 19, the COP established the Warsaw International Mechanism (WIM) for loss and damage associated with climate change impacts (Decision 2/CP.19) in order to advance knowledge and understanding about loss and damage. Of note is paragraph 5.c.iii, which gives an impression of action other than dialogue or further study. It suggests enhanced action and support to address loss and damage in order to enable countries to take actions including that "where necessary, facilitate the development and implementation of additional approaches to address loss and damage associated with climate change impacts, including extreme weather events and slow onset events" (UNFCCC 2013). COP 21 was a turning point for the WIM, especially on displacement issues, as the COP requested the WIM to establish a task force to "develop recommendations for integrated approaches to avert, minimize and address displacement related to the adverse impacts of climate change" (UNFCCC 2015). This paved the way for the establishment of the WIM's Task Force on Displacement (TFD). Article 8 paragraph 4 of the Paris Agreement also laid down more concrete suggestions for areas of cooperation and facilitation to enhance understanding, action, and support of parties (UNFCCC 2015). These include early warning systems, emergency preparedness, slow onset events, events that may involve irreversible and permanent loss and damage, comprehensive risk assessment and management, risk insurance facilities, climate risk pooling and other insurance solutions, non-economic losses, and resilience of communities, livelihoods, and ecosystems (UNFCCC 2015). Article 9 of the Paris Agreement, reminiscent of the CBDR principle invoked in the Cancun Agreements, called on developed countries to "provide financial resources to assist developing countries with respect to both mitigation and adaptation in continuation of their existing obligations under the Convention" (UNFCCC 2015). Furthermore, the provision of financial resources should take into account "country-driven strategies, and the needs and priorities and needs of developing country Parties, especially those that are particularly vulnerable to the adverse effects of climate change and have significant capacity constraints" (UNFCCC 2015). Indeed, the poorer countries in the South may be unable to initiate sufficient adaptation programs. In countries whose very existence is threatened by climate change, migration might be the only option for their communities (Biermann and Boas 2010). Within the UNFCCC, an aspiration to globally address climate change impacts, including on migration, displacement, and human mobility, is quite apparent in every COP decision. While this aspiration has not been effectively translated to reality yet, there are global developments that potentially advance action on migration. In December 2018, the Global Compact for Safe, Orderly and Regular Migration (GCM) has been adopted by UN Member States. The Compact, although not legally binding, is still a significant achievement (Newland 2019). The Compact exhibits, for the first time, a comprehensive set of consensual guiding principles for international cooperation on migration. It aims to "address migration movements, such as those that may result from sudden-onset and slow-onset natural disasters" (UNGA 2018). 
Considered as a soft law, the Compact has the same status as the 2030 Agenda for Sustainable Development and the Sustainable Development Goals and is an important step toward recognizing the role of climate change on migration (Newland 2019). The International Organization for Migration (IOM), in its role in the WIM's TFD, has comprehensively summarized processes, policies, and frameworks relevant to human mobility and climate change. While IOM (2018) recognizes a clear increase in number of relevant processes, it notes that there are still significant gaps such as the lack of an international "hard" law with specialized provisions that climate-related migrants and displaced persons could invoke. With global action being outpaced by the impacts of climate change, climate-vulnerable communities are left to survive on their own. If climate change is not reason enough to welcome the international migration of climate change victims, it can be argued for doing so on humanitarian grounds as has been mentioned in the GCM. An example would be the case of a family from the small Pacific Island State of Tuvalu granted New Zealand residency in 2014 after arguing, among other things, that the effects of climate change would have adverse impacts on them if they were forced to return home (McAdam 2015). The New Zealand Immigration and Protection Tribunal (IPT) ultimately permitted them to stay in New Zealand based purely on humanitarian and discretionary grounds, because of their strong family ties within New Zealand (McAdam 2015). A year after, in 2015, New Zealand deported a man (Ioane Teitiota) from the small island developing state Kiribati and refused his claim for recognition as a refugee and/or protected person (McDonald 2015). The IPT deemed that the risk of "arbitrary deprivation of life" of Teitiota and family was not substantial enough and that the Government of Kiribati had taken steps to address climate change; however, it did not exclude the possibility that "environmental degradation could create pathways into the Refugee Convention or protected person jurisdiction" (UN HRC 2020). Teitiota filed a case under the International Covenant on Civil and Political Rights monitored by the UN Human Rights Committee (UN HRC 2020), which upheld the ruling by New Zealand with some dissenting opinions. The UN HRC (2020), however, has established that "environmental degradation can compromise effective enjoyment of the right to life, and that severe environmental degradation can adversely affect an individual's well-being and lead to a violation of the right to life." The case of Teitiota shows how the New Zealand IPT looked at both the circumstance of the individual and the sending country. Observing that the Government of Kiribati has a National Adaptation Programme of Action drafted in 2007, the New Zealand IPT deemed the country as proactive in addressing climate change impacts. However, there are limits to adaptation, and both sudden and slow onset events can trigger cross-border movement of individuals seeking protection from climate change-related impacts (UN HRC 2020). The next section looks at how the USA and Canada responded to sudden onset events by providing immigration relief measures to victims of typhoons in the Philippines. These measures mainly required family ties for immigration application. Landmark immigration relief measures after typhoon Haiyan and typhoon Ketsana Typhoon Haiyan of 2013 is currently the strongest tropical cyclone to make landfall (Athawes 2018). 
Typhoon Haiyan particularly devastated the Philippines, becoming the country's deadliest typhoon on record. The USA and Canada were two countries that extended not only financial humanitarian aid but also immigration relief opportunities to disaster victims of typhoon Haiyan. Canada also extended immigration relief opportunities to Filipino victims of typhoon Ketsana in 2009. Apart from existing strong and friendly bilateral relations with the Philippines, particularly in administering development support, the USA and Canada host a considerable amount of Filipino residents. In the USA, the Philippines is the top three origin country of immigrants (Radford 2019), while it is the top country of birth of immigrants in Canada (Statistics Canada 2016). Indeed, familial affiliation of those affected by climate disasters can reinforce cross-border migration through immigration relief opportunities provided by states (Mosuela and Matias 2015). Whether being a top origin country of immigrants played a role in the USA and Canada opening their borders to kin of Philippine immigrants affected by disasters remains to be seen as other top origin countries such as Mexico, China, and India have yet to post record breaking deadly typhoons such as Haiyan. The Immigration and Refugee Board (IRB) of Canada prioritized the processing of application of family class applicants "under the Immigration and Refugee Protection Act (specifically, spouses, common law and conjugal partners, dependent children, parents, grandparents and orphaned family members)" who were proven to be "directly and significantly affected" by the disasters (IRB 2009). Expedited scheduling and processing as well as having a possibility to appeal on "humanitarian and compassionate grounds" were the benefits of the said immigration measure for the Philippines (IRB 2009). In addition, appeals from Filipino citizens temporarily living in Canada (i.e., visitors, international students, and temporary foreign workers) who were personally affected by the typhoon and who needed to extend their stay were said to be evaluated in a "compassionate and flexible" approach (Government of Canada 2014). In line with these measures, Citizenship and Immigration Canada (CIC) set up exclusive email addresses and a phone line to act on requests from applicants and their families (Government of Canada 2014). The latest publicly available figures were 2100 approved applications by September 2014, while a breakdown of 1097 approved applications in April 2014 showed 245 were temporary resident applications and 852 were for permanent residence (Dempsey 2014;Relief Web 2014). In a very similar response, the US Citizenship and Immigration Services (USCIS) instituted immigration relief measures to Filipino nationals residing in the USA and their kin affected by typhoon Haiyan. However, it has not granted "Temporary Protected Status" (TPS) designation to the Philippines. One of the options accessible is a request for a change or an extension of non-immigrant status (i.e., visitor visas, student, and temporary employment status) for an individual currently in the USA who is out of status. This measure can support Filipino citizens to reenter lawful status and avoid the negative immigration consequences of remaining in the USA beyond an authorized period of stay (i.e., when one's legal status has lapsed, a reinstatement to lawful status is usually not possible). 
Another relief measure that could be requested was expedited processing of immigrant petitions for immediate relatives of US citizens and relatives of lawful permanent residents (LPR) with current priority dates. Immediate relatives refer to a spouse, parent, stepparent, or a child and stepchild under the age of 21 of a US citizen and/or the spouse of a deceased US citizen. Some requirements need to be fulfilled: a step-parent or child will only qualify if the marriage creating the step-parent/child relationship occurred before the child's 18th birthday, and a US citizen must be at least 21 years old to file a petition for a parent. Other immigration relief measures for Filipino nationals affected by the typhoon involved extensions of grants of parole and advance parole, expedited processing of an advance parole application, expedited adjudication of an employment authorization application or off-campus employment authorization for F-1 students experiencing severe economic hardship, and assistance to LPR stranded outside the USA (Official Website of the Department of Homeland Security 2018; Reeves ILG 2013). The case of USA and Canada provides an exemplary case of humanitarian post-disaster interventions through loosening tight immigration and state policies for admission (Mosuela and Matias 2015). They represent how states could or should treat those people affected by a humanitarian crisis and of the protection they should receive when they cannot all be protected within their own countries. These particular states view migration not only as a reactive strategy but also a proactive one. Instead of regarding migration as a last-resort option, these states utilize migration strategy as a way of adapting to climaterelated hazard in the long term. The case of typhoon Haiyan shows a potential stepwise migration strategy in relation to international migration: the first step was a short-term reactive coping mechanism to immediately leave from typhoon ground zero and temporarily migrate to another part of the Philippines, which is a process called survival migration (Black et al. 2011;Cattaneo et al. 2019;Kleemans 2015). The longer-term immigration relief opportunities from USA and Canada constitute the second step of a proactive migration, which may be considered a profitable investment move (Cattaneo et al. 2019). Such a strategy allows for what the WBGU (2018) calls a humanistic alternative to authoritarian or nationalist planned migration as it enables individuals to freely decide on emigration. This also provides a legal migration route and could potentially curb criminal people-smuggling services that offer illegal and often dangerous routes of migration. It is necessary to extend the analysis of the USA and Canada immigration relief opportunities to developing a model that other states can use in extending help after disasters from extreme weather events apart from or in addition to traditional humanitarian aid. With climate change, there is a growing awareness of communal risks such as environmental change brought about by extreme weather events. Economies, cultures, and polities have been integrating on a global scale such that mutual vulnerability is now worldwide (Penz 2000). This, thus, encourages a sense of shared future and a collective responsibility on a transnational scale among countries, referring to sustained and continuous pluri-local transactions crossing state borders (Faist 2012). 
Such a perspective might avoid controversial issues of liability and compensation, which is a sticking point in the international climate change regime. Invoking guilt as a strategy for action might not be productive, given that "people react poorly to being told something is their fault" (Andresen et al. 2011 as quoted by Cameron 2018). Elements of potential immigration relief opportunities In proposing an international humanitarian migration opportunity, countries may take the elements of transnational migrant networks and skilled labor into account as part of humanitarian post-disaster interventions within the context of cosmopolitanism and human rights. Cosmopolitanism treats all humans as part of one moral community, without distinguishing between countrymen and foreigners (Penz 2000). This paper proposes a "climate humanitarian visa" as a more progressive implication of cosmopolitanism wherein states take action not only in a charitable way but also bounded by moral duties in providing humanitarian assistance to the global needy, especially if the countries share common political values, a security agreement, and mutual strategic and economic interests. In times of climate-related hazard, practices of cosmopolitanism surpassing the nation-state model show their relevance such as in the case of the USA and Canada and their respective alliances with the Philippines. Such practices provide fundamental bases for a more just global governance. Drawing on Penz (2010), strong practices of cosmopolitanism such as articulation of ethical values and relations between countries should be expressed globally. Bounded by moral obligations, states play an essential role in upholding moral community and humanity as a whole. One instrument advancing international ethical responsibility is opening states' borders for the protection of the right to free movement, which is a basic right of human beings. Opening states' borders pertains to both exit and entry to enable a certain category of people to move beyond their national territory. Cross-border movement should be an option following a climate disaster, especially when countries lack certain measures and resources to protect their citizens within their borders. Migration is only one among many strategies for adapting to climate change (Cattaneo et al. 2019). It is important to highlight that such an instrument is proposed as an additional humanitarian intervention and not as a stand-alone be-all, end-all adaptation solution to extreme weather events since it is well-recognized that not everyone is able to migrate (Adger et al. 2015;Black et al. 2011). The USA and Canada schemes, for instance, considered only a particular group to be accepted, such as the kin of those legally residing in the USA and Canada. Moreover, only those who survived the typhoon benefitted from the schemes. By default, free movement entitlements are ignored for some climate change victims by an inevitable "selective compassion" entrenched in such humanitarian post-disaster interventions. However, migration may still be the most effective way to allow people to diversify income and build resilience where environmental change threatens livelihoods (Black et al. 2011). It is, therefore, suggested that additional measures such as functional early warning systems and stress-tested emergency evacuation plans be in place for those at risk of being trapped while climate change mitigation measures are concomitantly pursued (Black et al. 2011;WBGU 2018). 
Extending the immigration relief schemes of the USA and Canada to include other individuals without familial networks would increase the potential benefits of international migration. Similar to the contributions of labor migrants, individuals migrating can help a community to remain viable in the long run if money remittances and goods are sent back to help build resilience such as the case of Africa where remittances to home communities even surpassed official development assistance since 2007 (Black et al. 2011). In the year 2018, remittances accounted for almost 10% of the gross domestic product (GDP) of the Philippines, which is the top four remittance recipient in the world (Central Bank of the Philippines 2019; World Bank 2019). In countries like the Philippines where wages from low-status overseas jobs outstrip potential earnings from white-collar work back home, immigration relief opportunities can be seen as an opportunity than as a liability (Constable 2007;Parreñas 2001;Paul 2019). Proposing a climate humanitarian visa as an international humanitarian migration model This paper's proposal for an international humanitarian migration model is based on the assumption that states put their national interests ahead of global interests, even in the disbursement of humanitarian aid. Warner et al. (2015) sees current humanitarian migration legislations to be a key gap across regions in the world because this only responds to emergency situations, with the assumption that recipients of these schemes will go back to their areas of origin once things get back to normal. While temporary visa waivers are common when a natural disaster happens, the immigration relief measures of the USA and Canada instituted after typhoon Haiyan in the Philippines show that some states are also open to welcoming permanent migrants. The targeted beneficiaries of the immigration relief opportunities provided by the USA and Canada after typhoon Haiyan and typhoon Ketsana in the Philippines can be referred to as transnational migrants. These are defined as immigrants who build social fields that link together their country of origin and their country of settlement, simultaneously seeking to remain embedded in the everyday affairs of the homeland community while engaging in activities that define and enhance their position in the country of settlement (Faist 2012). Such mode of migration not only ensures the beneficial nature of immigration relief opportunities to countries of origin (in the case of remittances as cross-border transactions) but also poses as an invaluable resource in adopting humanitarian post-disaster interventions by receiving countries. Receiving countries can tap such transnational networks of current migrants to provide general support to newly arrived network-linked migrants who were granted a "climate humanitarian visa". Drawing on Hugo (2010), mobility is more probable to be considered as a choice in communities with a background of movement and dynamic migration networks. A priori migration may reinforce the adaptive capacity of transnational migrant networks by acquiring new knowledge (e.g., agricultural innovations) from the receiving country that may constitute additional adaptation strategies in the country of origin (Siar 2011). Transnational migrants, through their connections and web of networks, can propel diffusion of new technologies, management, and trade (Ouaked 2002). 
In addition, migration out of areas affected by recurring natural disasters reduces the amount of individuals exposed to the disaster and provides the area with an income stream that can help in rebuilding an area after a disaster has occurred (Cattaneo et al. 2019;de Moor 2011). In the case of typhoon Haiyan in the Philippines, survivors who have cross-border migrant family members were able to get much-needed financial assistance from kin living abroad and were eligible to apply for the immigration relief schemes of the USA and Canada. With several countries such as in Europe having a demographic deficit and labor shortages, economic competitiveness becomes contingent on importation of human capital and skilled labor (Adger et al. 2015;Black et al. 2011;Harper 2012). Extending immigration relief opportunities on a labor basis may help both first-time immigrants and their host country. Skilled labor opens opportunities for migrants to immediately integrate in the workforce and contribute to the host country. In the US immigration relief scheme, eligible applicants can obtain temporary authorization to remain and work in the USA for a set period of time. It may also be prolonged if the conditions in the country of origin do not change (Official Website of the Department of Homeland Security 2013; Seguritan 2014). The International Organization for Migration (IOM), for instance, recommends migration instruments, such as circular or temporary migration between developed and developing countries, as an adaptation response to climate-induced vulnerability (IOM 2009). This allows climate-vulnerable communities to work seasonally or on a temporary basis in countries where their skills are in demand and for both parties (host country and migrants) to be sensitized to such a migration arrangement. Planned circular labor migration between countries could be agreed through bilateral agreements, with host states able to control the incoming movement (Vlassopoulos 2013). This also provides a layer of legal protection for the skilled labor migrants if their cross-border occupation is sanctioned by the government. An example of a skilled labor bilateral agreement is the Triple Win program of the German government, which recruits nurses from Serbia, Bosnia and Herzegovina, and the Philippines to alleviate the nursing shortage in Germany while reducing unemployment in the nurses' countries of origin (GIZ 2019). Kiribati has a relocation strategy, which includes the elements of transnational family migrants and skilled labor (Kiribati Office of Te Beretitenti 2019). As one of the low-lying island states in the Pacific that are at risk of inundation due to sea level rise, Kiribati's only recourse is to resettle its citizens should the worst case occur. By employing the concept of "migration with dignity", the Government of Kiribati is seeing its current overseas citizens (I-Kiribati) as opportunities that can enable the migration of those who would like to migrate now and in the future. They also see this as transnational migration increasing the level of remittances to the country. The second element of skilled labor aims to raise the level of skill qualifications available locally to that of Australia and New Zealand to make the I-Kiribati more attractive as migrants while also improving the level of standards within the country. On a practical level, the Government of Kiribati does not only look after the future of its citizens but also of their potential host country should relocation be necessary. 
Based on the components of transnational family migrants and skilled labor, two routes of international humanitarian migration are proposed as responses to extreme weather events. The first is a family reunion humanitarian route, and the second is a skilled labor humanitarian route. The US and Canada typhoon Haiyan and Ketsana schemes for climate change victims demonstrate how states can support survivors of extreme climate events beyond doling out traditional humanitarian aid. Primarily targeting climate change victims who have familial ties to a priori migrants in their countries, the USA and Canada showed that they are willing to loosen visa restrictions or expedite visa applications to provide a safer residence for climate change victims who are at risk of recurring extreme weather events. Should a climate change victim not have familial ties to migrants in potential receiving countries, she or he can apply on the basis of her or his labor skills. Both low- and high-skilled labor should be considered in humanitarian migration applications. The temporary and/or circular labor migration schemes such as those proposed by the European Union for third countries (European Commission 2007) or the Recognized Seasonal Employer (RSE) scheme implemented by New Zealand for foreigners can be a starting point in developing a program for skilled labor humanitarian migration (Brickenstein and Tabucanon 2013). In the case of Colombian temporary workers in Spain, the beneficiaries had support before and during their stay in Spain (de Moor 2011). They were also provided with training courses, which could potentially upgrade their skills. In order to circumvent the issue of selective compassion, especially in so-called trapped communities or those who cannot afford to migrate, receiving states should explore targeted skills upgrading of the very poor of the most vulnerable communities as one of their humanitarian interventions. This could be in addition to the usual development interventions where money is coursed through the grantee governments to, for example, build schools or subsidize courses. Targeted skills upgrading should offer training opportunities to fill labor shortages and assure the participants of employment after their training. Such a scheme could facilitate social mobility of potentially trapped populations and assist in building their resilience. Operationalizing the proposed climate humanitarian visa While there is no hard law or legally binding agreement that climate victims can invoke to grant them protection rights, there are soft laws such as the UN Guiding Principles on Internal Displacement and initiatives such as the Platform on Disaster Displacement (follow-up to the Nansen Initiative) that are relevant for victims of extreme weather events framed within the context of disasters. If the principle of common but differentiated responsibilities (CBDR) or the polluter pays principle is employed, then countries with historically high emissions should take on the obligation of granting climate humanitarian visas. However, since a hard law that could apply to victims of extreme weather events is yet to be passed, the initial step would be to encourage, not to obligate, states to provide climate humanitarian visas. Such an approach would not only provide a legal route of migration but would also help states build experience in altruism, hospitality, and cosmopolitanism. 
As experiences build up, states would also be able to develop a proof of concept, such as the study by the German Federal Office for Migration and Refugees (2019) showing that almost 35% of refugees who arrived in 2015 were employed by October 2018, with 50% in skilled jobs. This is consistent with the longitudinal (years 1985-2015) macroeconomic study by d'Albis et al. (2018), which shows that inflows of asylum seekers are beneficial to Western European host countries' economic performance or fiscal balance due to an increase in tax revenues (net of social transfers) (Maxmen 2018). Potential receiving states could make use of their development agencies in offering skills upgrading for climate-vulnerable communities for shortage occupations in the receiving states, similar to how the German development agency, GIZ, supports the Triple Win program by assisting with language skills, professional preparation, and cultural integration of the nurses even before their deployment (GIZ 2019). Shortage occupations can range from low- to high-skill requirements and can potentially accommodate the different skill levels of climate-vulnerable communities. New Zealand, for example, lists varied shortage occupations such as baker, pig or cattle farmer, vehicle painter, or university lecturer (New Zealand Immigration 2019). Such interventions, despite being developmental in nature (and not humanitarian), could contribute to a potential receiving state's deliberation on whether to issue climate humanitarian visas should extreme climate events occur in the future. In any case, skills upgrading interventions could be considered part of the potential receiving states' official development assistance and integrated in their respective development agencies' programs. Cross-border migration decisions are often complex and context specific; in several cases where potential migrants have strong place attachment, immobility may even become a choice (Cattaneo et al. 2019; Serdeczny 2017). A climate humanitarian visa should, therefore, enable individuals to decide freely on emigration, similar to the concept of "climate passport" proposed by the German Advisory Council on Global Change (WBGU 2018). The climate passport, drawing from the Nansen passport issued after the First World War from 1922 up to 1938 for stateless refugees, is proposed as a humane climate policy instrument that would open up early, voluntary, and humane migration pathways for the populations of countries that are threatened by the potential loss of territory due to climate change (WBGU 2018). Settler states such as the USA and Canada have demonstrated, through their immigration relief measures, that they could be open to providing a new home for victims of extreme weather events associated with climate change. The US and Canada immigration relief schemes and this paper's proposed international humanitarian model constituting a "climate humanitarian visa" could be seen as important precursors to a "climate passport", especially when the time comes that issues of compensation within the context of climate change become globally accepted. Guided by the principle of cosmopolitanism and encouraging compassion, a climate humanitarian visa can be seen as responsive to the urgency of climate change and mindful of raising ambition while retaining the interests of each sovereign receiving state. 
Conclusion Global action is being outpaced by the impacts of climate change, leaving climate-vulnerable communities at risk of incurring climate-induced losses and damages. In the international climate regime, migration is receiving increased attention. For example, the UNFCCC (through the WIM) established a Task Force on Displacement, which develops recommendations for integrated approaches to avert, minimize, and address displacement related to the adverse impacts of climate change. This paper argues for additional action on climate migration by looking into transnational migration opportunities, on humanitarian grounds, for climate change victims of extreme weather events. An international humanitarian migration model is proposed to supplement traditional in-country humanitarian aid by drawing on the US and Canada immigration relief schemes for the Philippines after typhoon Haiyan in 2013 and typhoon Ketsana in 2009. Transnational migrant family networks and skilled labor are crucial elements of such a model, as these lay down the groundwork for integration and support in the host country. To counter the risk of selective compassion against populations trapped by a lack of resources, while increasing the benefits for host countries, it is recommended that potential receiving countries make use of their existing development programs in offering skills training and upgrading in sending countries, especially for shortage occupations. In some countries, like New Zealand, shortage occupations require a whole spectrum of skills ranging from baking, to farming, to academic lecturing. This paper supports the proposal of the German Advisory Council on Global Change on the issuance of a climate passport to climate change victims and sees the climate humanitarian visa, as demonstrated by the US and Canada's post-disaster immigration relief schemes, as an important precursor to such a landmark climate adaptation tool while climate change mitigation is concomitantly pursued. All countries, especially those with historically and/or currently high emissions, are encouraged to be guided by the principle of cosmopolitanism, be ambitious with their compassionate stances, and consider offering immigration relief measures to countries hit by extreme weather events, in addition to doling out traditional humanitarian aid. Such interventions could potentially increase the economic competitiveness of receiving countries through an increase in tax revenues, even accounting for social transfers. By doing so, receiving countries show responsiveness to the urgent needs of climate change while building experience in altruism, hospitality, and cosmopolitanism.
What Do We Mean by Sustainable Finance? Assessing Existing Frameworks and Policy Risks I observe that the sustainable finance landscape as it stands today is characterised by an overabundance of heterogeneous concepts, definitions, industry and policy standards. I argue that such heterogeneity may hinder the smooth development of the conceptual thinking underpinning sustainable finance and originates specific risks that may harm the credibility of the nascent market. These risks include green and sustainable washing, the rebranding of financial flows without additionality, and the disordered adjustment in the cost of capital spreads between industries. I argue that, to reflect the actual industry and policy context as well as to steer conceptual and applied practice, sustainable finance should today be referred to as "finance for sustainability". To this extent, both its definition and implementing standards should make clear reference to the relevant sustainability dimensions (in particular in line with the Sustainable Development Goals and the Paris Agreement) and to the sectors or activities that positively contribute to these dimensions. Introduction and Summary Defining exhaustively sustainable finance, that is, ensuring clarity on both its definition and implementing standards, is not an easy task. As a matter of fact, today an off-the-shelf, universal and workable definition of sustainable finance does not exist. Neither does it exist for the different families of sustainable finance securities, products or services available in the market. Guidelines have emerged and consolidated in the industry only for some key categories of securities (such as green bonds and social bonds) and are considered effective but non-binding market references [1,2]. Thus far, financial institutions, governments, and international organisations have tended to create definitions according to their underlying motivations [3], resulting inter alia in a proliferation of heterogeneous terminology [4]. Region-wide policy initiatives to streamline and mainstream the sustainable finance market have been launched only recently [5] and are characterised by different perimeters and levels of ambition. Little effort has been made to limit the issue of possible diverging interpretations of what sustainable finance is [6]. A flourishing of ambiguous definitions in what today can be considered the broader sustainable finance landscape has been noticed for a long time. Already two decades ago, environmental, social, and governance (ESG) investing could be described as an "area characterised by at best loose terminology, at worst by a conceptual confusion that would benefit from the rigour of academic analysis" [7]. Indeed, it is incumbent on academics to periodically reflect on the level and origins of conceptual fuzziness in their bodies of literature, as their disciplines evolve [8]. This paper aims at enriching the debate surrounding the issue of defining sustainable finance. To this extent, its contribution to the standing body of literature is twofold. On the one hand, it proposes a "workable" definition of sustainable finance, one that is able to properly reflect the actual industry and policy context and is specific enough to steer conceptual and applied practice. As a matter of fact, many of the existing definitions lack these characteristics and contribute to diluting research and industry efforts in explaining what sustainable finance means. 
This is done by building on the assessment of the key concepts, frameworks, and standards today featuring the sustainable finance industry, and their evolution over time. On the other hand, the paper identifies the main policy and financial risks stemming from defining sustainable finance, in particular in relation to the implementation of key operational standards, such as labels and taxonomies. From this basis, it is also possible to propose axes of action for policy makers in order to streamline the phasing-in of sustainable finance and shield the credibility of the nascent market (and related accountability practices). I first argue that, on the basis of the observation of the current industry and policy context, sustainable finance should be today more coherently referred to as (be a synonym of) "finance for sustainability". In this respect, it should be considered as a self-standing factor in the effort to reach a sustainable society, in particular in line with the Sustainable Development Goals (SDG) and the Paris Agreement. Understanding today sustainable finance as "finance for sustainability" would represent a significant departure from its original meaning, when it could be more often referred (and limited) to the attempt to include sustainability-related considerations into investment decisions. In this vein, a workable definition of sustainable finance should make clear reference to two intertwined issues, both intervening upstream and not pertaining specifically to the field of finance. The first is the identification of the possible sustainability dimensions. These are, but are not limited to, the preservation of the environment and the ecosystems, the conservation of the biodiversity, the fight against climate change, the eradication of poverty and hunger. The second is the assessment of the contribution of each economic sector or activity to the achievement of, or the improvement in, at least one of the relevant sustainability dimensions. This has to be done in order to identify the areas that deserve "sustainable" financing. Considering these two elements is sufficient to frame a workable definition of sustainable finance fitting with the actual industry and policy context. To this extent, sustainable finance could be defined as "finance to support sectors or activities that contribute to the achievement of, or the improvement in, at least one of the relevant sustainability dimensions". This notwithstanding, to depict the complexity of the sustainable finance market as it stands today, a specific element should be given further attention. That is, the compliance of financial securities, products or services with one or the other of the several existing policy and industry frameworks, guidelines and related labels. To recognise the relevance of their specificity, these financial flows and stocks can be specifically referred to as "labelled sustainable finance". Labelled green, social and sustainable financial securities, products or services today represent the core component of the sustainable finance market (even if not the largest) and are particular relevant in steering market demand. Finally, I argue that defining exhaustively sustainable finance should be a primary concern for policy makers. I recognise two main reasons why it has to be considered a critical factor in the process of consolidation of the market. 
First, uncertainties regarding what can be considered fully-fledged sustainable finance securities, products or services may undermine the confidence of the investors and discourage investment. In this respect, it is reasonable to assume that economic agents would consider sustainable finance as an option only if they are sure that their sustainability and values orientations are systematically reflected in the financial securities, products or services they are offered, with "no surprises". This issue can be analysed under the point of view of the emergence in the sustainable financial market of new information asymmetries and opaque information, and hence consequent possible opportunistic behaviours. The risks of greenwashing or sustainable washing or the possibility of mere refinancing of existing debt via labelled sustainable finance securities with no incremental resources directed towards sustainable investments (rebranding without additionality) are relevant examples. Second, on the basis of the specific sustainable finance framework in place, a progressive process of rebalancing of the investment portfolios may occur, in particular via the sell-off by institutional investors of securities of "non-sustainable" sectors and the purchase-and-hold of labelled sustainable finance or unlabelled securities pertaining to "sustainable" industries. This phenomenon may be accelerated by policy measures that may be launched to mainstream sustainable finance, such lower capital requirements for financial intermediaries holding sustainable finance securities, ad-hoc fiscal incentives for sustainable investments, or the inclusion of sustainability considerations in the central banks' monetary policy. Uncertainty on what should be considered sustainable finance can eventually contribute to trigger disordered pricing adjustments and a rapid increase in the differentials of both the cost of debt and equity between "sustainable" and "non-sustainable" industries, when changes in investors' expectations materialise. The rest of the paper is structured as follows. Section 2 offers a narrative of the existing frameworks, definitions, and labelling standards that today feature the wider sustainable finance landscape. Section 3 deals with the construction of a workable definition of sustainable finance able to reflect the actual industry and policy context. In Section 4, the main risks stemming from defining sustainable finance are discussed, as well as their possible policy implications. Finally, Section 5 states concluding remarks and recalls the need for further debate on the topic. The Sustainable Finance Landscape: A Review of Existing Frameworks, Definitions and Labelling Standards The concern of the impact of economic activities on nature and social structures has been discussed for decades [9]. Through time, a number of possibilities to account for the connection between finance and sustainability have been proposed. Among them, it is possible to mention the rise of environmental, social and corporate governance (ESG) criteria in investment decision-making [10], the impact investing and the socially responsible investing (SRI) approaches [11], the concern with climate change and human rights [12], the assessment of the effect of finance in terms of negative externalities [13], or the role of sustainable finance for financial institutions already having a formal dual bottom-line approach and for which financial performance needs to coexist with social goals [14]. 
Moreover, in recent years, the sustainability landscape has been further shaped as following the landmark international agreements on the United Nations (UN) 2030 Agenda adopting the Sustainable Development Goals (SDG) and the Paris Agreement on climate action. In both these initiatives, the sustainability governance schemes and accountability patterns have put on finance an unprecedented attention and its role has been recalled as a key enabling factor for the attainment of the most ambitious sustainability-related objectives [9]. In the sub-sections that follow, I review the main frameworks, definitions, and labelling standards existing in the sustainable finance market. In this respect, I do not aim at providing a hermeneutic analysis of the all the possible definitions featuring the market (for this kind of exercise, focused on ESG literature, see for example [7]). Rather, I want to provide an extensive easy-to-understand description of the sustainable finance landscape and its funding principles as it stands today. This approach allows inter alia to propose two schematic representations that can be used to portray the sustainable finance market. I conduct the review on the basis of three concentric layers of analysis, that is (from the largest to the smallest): (i) the wider policy context, (ii) the industry-originated frameworks, and (iii) the operational and labelling standards. As the layers are concentric, inner layers are coherent with larger ones in terms of scope but present specific and stricter characteristics. Figure 1 gives a representation of the sustainable finance landscape on the basis of these layers. Sustainable Finance and the Wider Policy Context The Sustainable Development Goals (SDG) and the Paris Agreement have landmarked the commitment of the international community towards a more sustainable society and a climate-neutral economy. To reach these ambitious objectives, a new technology framework, enhanced capacity-building, and a change in the consumption patterns were recalled as essential and nested elements, all needed to steer the transition. Nevertheless, to support such a transition, the mobilisation of financial resources has gained extraordinary attention [9]. Concerning the financing of the 17 SDG, several initiatives, mainly led by different UN bodies, have been launched in order to draw the main principles to direct the necessary flow of resources towards the goals and hence align global economic policies and financial systems with the 2030 Agenda. To this end, a total gap of USD 5-7 trillion worldwide a year until 2030 has been estimated [15]. Eventually, the concept of SDG finance has emerged. Enhancing sustainable financing strategies and investments at regional and country levels and seizing the potential of financial innovations, new technologies and digitalization to provide equitable access to finance are the main specific objectives underpinning this concept. In such a context, the Principles for Positive Impact Finance have been issued in 2017 within the Financial Initiative of the United Nations Environment Programme (UNEP-FI). They are guidelines for financiers and investors to increase their impact on the economy, society, and the environment, and aimed at providing basis for a common language across all categories of financial instruments and business activities [15]. Similarly, the Principles for Responsible Banking, focusing on banking operations, have been released in 2019 by the same body. 
These principles have the purpose to provide a general framework for a "sustainable" banking system in line with the SDG and the Paris Agreement, by embedding sustainability considerations at the banks' strategic, portfolio, and transactional levels, and across all business areas [16]. In parallel, the notions of Paris Agreement-aligned investments and, more general, green and climate finance have also emerged, these latter broadly referring to the financial resources necessary to support environmental objectives (green finance) and mitigation and adaptation actions that address climate change (climate finance) [9]. Against this background, concrete policy initiatives have been also launched worldwide in the attempt to mainstream the flow of resources directed towards sustainability-related objectives. 
Probably, the most noteworthy example is represented by the European Union (EU). In the EU policy context, sustainable finance is defined as "finance to support economic growth while reducing pressures on the environment and taking into account social and governance aspects" and it is clearly understood to support the delivery of the "European Green Deal" by trying to channel private investment into the transition to a climate-neutral, climate-resilient, resource-efficient, and just economy [17]. To do that, the strategy of the European Commission builds on a specific Action Plan for sustainable finance [5] and follow-up initiatives. The combination of abovementioned frameworks, concepts, and initiatives can indeed represent a first crucial, even though rather raw and generic, reference in the attempt to draw the (policy-driven) perimeter of action for sustainable finance. 
At this stage, without fixing any specific definition, sustainable finance may be indeed considered to first and foremost embrace the financial stocks and flows mobilised to achieve the SDG (SDG finance). Green finance and climate finance can be also considered specific components of sustainable finance. In this respect, green finance can be referred to as the financial stocks and flows aiming at supporting the achievement of the environment and climate-related SDG, while climate finance can be associated to that component of green finance focusing on climate action in line with the Paris Agreement objectives (in particular, in the form of climate change mitigation and climate change adaptation). However, to these broad categories it may be necessary to add as part of sustainable finance the financial stocks and flows directed to policy objectives that may not be covered by the SDG, but still have sustainability implications. Examples of these latter are the threats to sustainable development such as the weakening of democracy aided by "big technology," or the inferences of the fourth industrial revolution on the global workforce [18]. Figure 2 gives a visual representation of a policy-driven classification of the possible components of sustainable finance. As a matter of fact, such a wide scope, embracing the financing of the SDG, the Paris Agreement, and going even beyond, would result in considering sustainable finance as already a sizable and stable, but not yet completely visible, component of the modern financial system. In this regard, sustainable finance may in particular also encompass government spending programmes (eventually financed by unlabelled debt), when financing sustainability-related objectives. Sustainable Finance and Industry-Originated Frameworks The financial industry has endogenously developed through time a number of frameworks that today should be considered to fully integrate the wider sustainable finance landscape. In some cases, these frameworks have seen the light much before the consolidation of the policy movement towards sustainability that can be observed in the last decade. Likely, the most important example is represented by the inclusion of environmental, social, and governance (ESG) considerations in the investment decisions of financial actors. ESG have roots in not only faith-based investing, but also in the civil rights, anti-war, and environmental movements of the 1960s and 1970s [9,19]. However, in more recent years, the investment risks posed by climate change and poor corporate governance provided a huge catalyst in the growth of ESG investing [19]. In addition, disclosure of ESG information for financial and non-financial companies are increasingly demanded by policy makers in order to create a more transparent market and steer investors' decision-making. Noteworthy examples of ESG disclosure standards are the EU non-financial reporting directive (NFRD) or the voluntary guidelines developed by the climate disclosure project (CDP), the climate disclosure standards board (CDSB), the global reporting initiative (GRI), the principles for responsible investment (PRI), the sustainability accounting standards board (SASB) or the task force on climate-related financial disclosures (TCFD) [20]. 
Spreading instruments aimed to sustain investment decisions, providing information on the firm's position within a sustainability perspective jointly with financial information represents a newness both for people interested in ESG investments and for the entire plethora of stakeholders interested in the overall companies' performance [21,22]. An obvious consequence of these paradigm shifts is the felt need for strong support from institutional investors [23]. Within the carbon accounting literature, authors have identified two end-points on a spectrum of possible bases to deal with sustainability: accounting for un-sustainability and accounting for sustainability improvements [24]. The former aims to the disclosure of un-sustainable practices concerning past and current operations, and at predicting future levels of expected negative externalities (e.g., the level of GHG emissions). The latter informs about the decisions, and related measures, that a company is going to implement for improving its sustainable performance. Among these decisions and measures, the use of sustainable financing instruments probably represents one of the most effective, since it directly realizes the bridge between financial and natural capitals [25,26]. However, the growing interest of the industry in ESG performance may be also linked to the emergence of specific market incentives, related in particular to reputational gains and corporate social responsibility acknowledgement by existing and potential clients. As also observed in the literature, for a company to be recognised as engaged in sustainable activities it can bring concrete benefits in terms of customer satisfaction, customer retention, and market positioning [10,27]. Being today largely nested with ESG considerations, the concept of socially responsible investing (SRI) has also progressively spread in the financial industry. The basis for this contention revolves around the launch in 2006 of the United Nations-facilitated Principles for Responsible Investment (PRI) and the subsequent rise to prominence of this initiative among practitioners [28]. It refers to a voluntary set of investment principles that offer a set of possible actions for incorporating ESG issues into investment practices. More than a half of the total global institutional assets base are currently managed by institutions formally embracing these principles, demonstrating the commitment of financial markets towards ESG criteria within investment decisions [10]. ESG considerations and SRI do not certainly complete the financial industry-originated practices towards sustainability. Among the others, impact finance and the related impact investing should be first mentioned. Impact investments can be defined as "investments made with the intention to generate positive, measurable social and environmental impact alongside a financial return" [29]. This concept, in which the application can indeed span from social businesses to financial actors, in particular, focuses on the formal distinction between (and the co-existence of) financial and non-financial performances, with the aim to widen the final investment scope for market participants [30,31]. To this extent, impact investments can be made in both emerging and developed markets, and target a range of returns from below-market to market rate, depending on investors' strategic goals [29]. 
On the other hand, specifically concerning project management practices, the equator principles (EP) are emerging in recent years as a "financial industry benchmark for determining, assessing and managing environmental and social risk in projects" [32]. The EP may apply to all industry sectors and refer to five main financial products: project finance, project finance advisory services, project-related corporate loans, bridge loans, and project-related refinance and acquisition finance. In practice, the EP could already cover the majority of international project finance debt within both developed and emerging markets [32]. Sustainable Finance and Operational and Labelling Standards When it comes to the observation of sustainable finance with regard to operational and labelling standards, the framework put in place for green bonds is by far the most advanced. This framework, endogenously developed within the financial industry, today benefits from a large acceptance of the green bond principles (GBP), issued by the International Capital Market Association (ICMA) in 2014 and then updated in 2018. The GBP are voluntary process guidelines that recommend transparency and disclosure and promote integrity in the development of the green bond market by clarifying the approach to be followed for the issuance of a green bond [1]. To this extent, green bonds are defined as "any type of bond instrument where the proceeds will be exclusively applied to finance or refinance, in part or in full, new and/or existing eligible green projects and which are aligned with the four core components of the GBP" [1]. The GBP then provide issuers with guidance on the four core components involved in launching a green bond, which are: use of proceeds, process for project evaluation and selection, management of proceeds and reporting. In practice, the framework provided by the GBP recommends a structured process for issuers, investors, banks, underwriters, and placement agents that can be used to appreciate the expected features of any given green bond [1]. The GBP also foresee issuers, in connection with the issuance of a green bond, to appoint at least one external reviewer to confirm the alignment of their bond with the four core components of the GBP (these external reviews can be of four types: second party opinions, verifications, certifications, or green bond scoring/ratings). Even though the GBP are not mandatory, their development has played thus far a significant role in structuring the green bonds market, providing all stakeholders with a tool able to effectively and easily segregate green bonds from other debt securities. In this respect, certification agencies acting as reviewers today make wide reference to the GBP in their assessment activity, in this way prompting a certain degree of homogeneity in the market. In 2019, a total of USD 257.7 billion green bonds was issued worldwide [33], representing a new record and confirming the double-digit growth of the market in recent years. Nevertheless, it should be argued that the green debt market can hardly be considered to be limited to instruments formally in line with the GBP and eventually labelled as green bonds. As a matter of fact, a not negligible part of the unlabelled bonds outstanding could in principle meet the criteria set by the GBP, even though the issuers eventually disregarded the labelling option (e.g., in the case of many municipal bonds issued to finance projects of water pollution prevention). 
The size of this market, which is very difficult to calculate with accuracy, is indeed expected to be at least twice as large as the labelled green bonds market standalone [33]. Following green bonds, labels were then proposed for social bonds [2] and sustainability bonds [34], with operational standards similar to those of green bonds but focused respectively on social and general sustainability goals, and for sustainability-linked bonds [35], which innovate by linking the return for bond holders to the attainment of sustainability objectives measured by specific key performance indicators (but, as they are intended to be used for general purposes, the use of proceeds for sustainability-linked bonds is not a defining element). Furthermore, labelling standards are progressively becoming available for securities other than bonds, as is the case for green loans [36], sustainability-linked loans [37], or the various types of sustainable funds. As a matter of fact, the fortune of these labelling standards will depend on their appeal in terms of steering market demand and on the incidence of administrative costs on the issuer. As the latter may be largely independent of the size of the operation, in many cases they may result in being too high to be attractive for operations of limited size [9]. Beyond labels, taxonomies represent today the essential operational standards in the sustainable finance market. Taxonomies, which are normally developed by multilateral development banks (MDB), financial industry organisations or policy makers, are lists that specify the sectors or the activities which are entitled to receive "sustainable" financing. In this respect, they are also used within the labelling frameworks mentioned above when it comes to analysing the use of proceeds [1,2]. Table 1 gives a general overview of the possible treatment of economic sectors and activities in existing sustainable finance taxonomies (author's elaboration on [1,2,6,38]; "core element" means that the sector or activity is commonly included in the main existing sustainable finance taxonomies, while "possible element" means that it is included in some of the existing taxonomies or is a debated element; the table does not aim to be exhaustive). Policy-wise, the most important initiatives launched to create taxonomies include the EU Taxonomy of sustainable activities, in line with the mentioned Action Plan for sustainable finance [5], and the People's Bank of China's Green Bond Endorsed Project Catalogue. However, as of today, these taxonomies have been principally developed for climate and (partially) environment-related investments, with little coverage of the possible other sustainability dimensions. Furthermore, today they are far from being homogeneous in terms of contents [6]. A Definition of Sustainable Finance It can be argued that the meaning attached to sustainable finance has evolved over time. Looking back at its early days, which can be broadly traced to the rise of the ESG concept, sustainable finance largely meant that the financial system should incorporate sustainability considerations in investment decision-making, in order to better reflect environmental and other sustainability-related risks. With the evolution of societal and policy patterns, the meaning of sustainable finance has progressively consolidated around the need to provide sufficient financial resources to the transition towards a more sustainable society and a climate-neutral economy. 
This (incremental) shift in perspective may also help explain the observed acceleration, in recent years, in the adoption of sustainable finance practices by financial institutions. In this vein, I argue that coherently defining sustainable finance requires today the comprehension of two intertwined elements. The first is the identification of the concrete sustainability dimensions. In practice, this means answering the question: What is sustainability? Even though also in this case an universal answer may not be readily available, it can be easily stated that there is little incertitude about including among the relevant sustainability dimensions the preservation of the environment and the ecosystems, the conservation of the biodiversity, the fight against climate change (in particular in the form of climate change mitigation and climate change adaptation), the eradication of poverty and hunger, the reduction of inequalities. In this respect, it should be agreed that the SDG and the Paris Agreement have arisen in recent years as key (policy-driven) initiatives to forge the perimeter of sustainability, being also able to steer at a larger extent recent financial industry developments. This is true even though it can be argued that these initiatives do not consider some possible sustainability dimensions. The second element to consider in developing a definition of sustainable finance is the contribution of each economic sectors or activity to the achievement of, or the improvement in, one or more of the relevant sustainability dimensions. This has to be done in order to identify the areas that deserve "sustainable" financing. This reasoning can be brought back to answer the question: How can sustainability be reached? If some sectors or activities are unanimously considered as contributing to sustainability (e.g., in the case of renewable energy projects or initiatives for the access to sanitation in developing countries), for others this assessment may still not be straightforward (e.g., in the case of nuclear energy, a low-carbon energy source, or incentives for facilitating the access to higher education in developed countries). I argue that answering these two questions is sufficient to frame a workable definition of sustainable finance, which is able both to reflect the actual industry and policy context and steer conceptual and applied practice. To this extent, sustainable finance can be defined as "finance to support sectors or activities that contribute to the achievement of, or to the improvement in, at least one of the relevant sustainability dimensions". As a matter of fact, the proposed definition of sustainable finance does not aim at redefining "finance", which is a concept factored-in as an input. Indeed, it focuses on the recognition of the role of finance in supporting sustainability. On this basis, sustainable finance could be today also referred to as (and is a synonym of) "finance for sustainability". The proposed definition is workable as it explicitly refers to the need for an analysis upstream of both the relevant sustainability dimensions (What is sustainability?) and the sectors and activities that have a positive impact on such dimensions (How can sustainability be reached?). 
In this respect, it should be underlined that the taxonomies that exist today, even with their heterogeneity of approaches and perimeters, mainly follow this line of reasoning and are indeed useful tools for systematising knowledge on what should be considered eligible for "sustainable" financing. Furthermore, it can be highlighted that the proposed definition of sustainable finance makes no claim as to whether or not finance can by itself be considered a sustainable activity (in this sense again largely reflecting current industry and policy practice).

This apart, when proposing a given definition of sustainable finance, an additional element should be specifically considered in order to face the complexity of the market. It refers to the compliance of financial flows and stocks (via financial securities, products or services) with one or the other of the several policy and industry frameworks, guidelines and related labelling standards that feature in the sustainable finance landscape. This aspect may be linked to the question: How can sustainable finance be easily recognised? In this respect, the methodologies that have been progressively proposed have mainly aimed to attract investors' appetite towards new product categories via new labels. In point of fact, labelled green, social and sustainable financial securities, products or services today represent the core component of the sustainable finance market and the one which is most easily identifiable. In order to highlight the relevance of this specificity, the ensemble of these financial instruments can be referred to as "labelled sustainable finance". As of today, labelled sustainable finance plays a particularly relevant role in steering market demand.

In this general framework, an argument can be made that it is the joint responsibility of policy makers and the scientific community to define which are the relevant "sustainability dimensions" and the "sectors or activities that contribute to the achievement of, or the improvement in, at least one of the relevant sustainability dimensions". Furthermore, it is in the remit of policy makers and the financial industry to define coherent frameworks, guidelines, and labelling standards for sustainable financial instruments. Finally, it should be in the mission of financial institutions to mainstream sustainable finance. Lastly, a degree of freedom should be recognised for sustainable finance. As scientific knowledge progresses and societal sensitivity towards certain issues evolves over time, what today could be considered as "sustainability dimensions" and "sectors and activities that contribute to the achievement of, or the improvement in, at least one of the relevant sustainability dimensions" may also change. Even though the proposed definition of sustainable finance may still hold, it is in the hands of policy makers to manage these possible shifts, the main paradigm remaining constant.

Main Risks Stemming from Defining Sustainable Finance

Coherently defining sustainable finance, by ensuring clarity on its definition as well as on its implementing standards, is not a mere exercise of style. On the contrary, a well-conceived identification of sustainable finance represents a key enabler in the development of the market.
In this section, I aim at identifying the main risks stemming from defining sustainable finance (in particular with reference to labels and taxonomies) and their possible policy and financial implications, as well as at proposing policy actions to reduce the incidence of these risks.

Rebranding without Additionality

A first risk linked to defining sustainable finance concerns labelled sustainable finance and its role within the wider sustainable finance landscape. Labelled sustainable finance has experienced exceptional growth in recent years. In this respect, Figure 3 shows the level of labelled green bond issuance since the inception of the market. However, such growth does not necessarily mean that investment flows towards "sustainable" sectors or activities have increased at the same pace. It is indeed reasonable to argue that the growth rate experienced in the market of labelled sustainable finance securities, products, and services is considerably higher than the growth rate of the overall stream of investments in "sustainable" sectors or activities. As a matter of fact, a certain level of rebranding of financial flows may have featured the market thus far and is likely to remain in the near future. In this respect, labelled debt from refinancing operations, labelled government debt for green or social projects, newly labelled debt of pure players in the renewable energy sector, or green labelled debt issued by financial intermediaries are among the financing operations which are likely to provide little additional resources to the transition to a more sustainable society. Similarly, several labelled "sustainable", "social" or "green" funds may also suffer from similar limitations in their sustainability imprint, for example, when considering that already in 2015 about 50% of the institutional asset base was managed by institutions formally endorsing the Principles for Responsible Investment [10].
As a matter of fact, rebranding is to a certain extent physiological in a market which is still a promising niche and far from becoming mainstream. Nevertheless, two main risks for the nascent market should be highlighted concerning the possible lack of additionality of labelled financial flows. On the one hand, investors may start perceiving that "no real impact" is triggered when buying labelled sustainable securities, products and services, resulting in a progressively lower market appeal. This may be particularly true for retail customers after the completion of the first phase of expansion of the market. On the other hand, policy makers may misjudge the effectiveness of the measures they have put in place to foster a more sustainable society or to mainstream sustainable finance if they over-rely on the growth of labelled sustainable finance.

To ease this problem, an improvement of disclosure requirements and standards may indeed play a decisive role. In particular, corporate non-financial disclosure on sustainability performance may incorporate a dimension linked to effective additionality. In practice, additionality could be disclosed with respect to a specified reference period in the past (e.g., 5 years or the beginning of the most recent strategic plan) or over time (e.g., in the last 10 years). In this vein, an increase in the level of labelled sustainable finance issued may be contextualised within the discourse around additionality at the single-organisation level. On the other hand, the assessment of effective additionality may benefit from statistics and historical series on the level of investments in "sustainable" sectors and activities at a national or regional level. This could eventually be done in line with a defined taxonomy of "sustainable" activities. In this way, it would be possible to provide easily accessible information to policy makers and other stakeholders on the level of additional resources directed over time towards sustainability-related objectives.

Greenwashing and Sustainable Washing

In parallel with the development of sustainable finance and the rise of related certifications and reviews, a discussion is also emerging on whether financial actors can use deceptive strategies to promote their securities, products, and services and to build a sustainability-oriented image. This phenomenon first concerned environmental performances and policies, and is commonly known as "greenwashing". It has recently expanded to the entire sustainability spectrum, referred to as "sustainable washing". Greenwashing and sustainable washing are not specific to sustainable finance, and first appeared in the consumer goods industry. In this respect, the literature has already observed that greenwashing or sustainable washing can take many forms, ranging from changing the name of a product to induce the perception that it comes from a natural environment, to the launch of marketing campaigns by polluting industries in order to foster a sustainability-friendly image [39,40]. The same body of literature has also noticed that, given society's increasing awareness of the potential sustainability impacts of the products purchased, formal or informal labels (such as "green", "eco-friendly" or "sustainable") are becoming increasingly effective in driving market demand.
Consequently, it has been observed that many products have benefited from this form of advertising even though the claims put forward did not reflect the real characteristics of the product [41,42]. The risk of greenwashing and sustainable washing is progressively consolidating in the sustainable finance market. As a matter of fact, the lack of universal definitions and standards amplifies this risk, as it opens the door to several possible interpretations of what sustainability means in the financial markets. For this reason, as the market for sustainability-related certifications and reviews continues to develop, regulation on the communication regarding the sustainability impact of financial securities, products, and services marketed as sustainable should also be expected to become stricter. Again, clarity on the relevant sustainability dimensions and on the sectors and activities that deserve "sustainable" financing is a first step. Therefore, stricter labelling standards and disclosure requirements (at the level of issuance) would further strengthen the reliability of existing market references.

Disordered Adjustment in Cost of Capital Spreads

Sustainable finance can directly impact both the cost of debt and the cost of equity of "non-sustainable" sectors via a demand-driven redirection of financial flows towards "sustainable" industries. On the basis of the specific sustainable finance framework put in place (in particular, concerning its industry standards such as labels and taxonomies), a process of rebalancing of investment portfolios may occur in the market. This may take the form of a progressive sell-off by institutional investors of securities of "non-sustainable" companies and the purchase-and-hold of labelled sustainable finance or unlabelled securities of industries considered to be sustainable. In this respect, some early evidence has already been produced by the scientific literature, in particular concerning differences in yields between green bonds and corresponding conventional bonds [43]. A green bond premium (or greenium) may progressively consolidate in the financial market, underlining the role of labelled sustainable finance in shaping market investors' preferences. This dynamic could be further strengthened in the near future by policy measures aiming at mainstreaming sustainable finance, such as lower capital requirements for financial intermediaries holding sustainable finance securities, ad-hoc fiscal incentives for sustainable investments, or the inclusion of sustainability considerations in central banks' monetary policy [20]. Uncertainty about what should be considered sustainable finance can eventually contribute to triggering disordered pricing adjustments and result in a sharp and significant increase in the differentials of both the cost of debt and the cost of equity between "sustainable" and "non-sustainable" industries, should changes in market expectations materialise.

The possible policy actions to mitigate the risk of disordered pricing adjustments between "sustainable" and "non-sustainable" industries are manifold. These may include performing ex-ante and ex-post impact assessments of the measures aiming at fostering sustainable finance, introducing reliable "transition" labels for securities aimed at encouraging the reconversion of polluting industries, or establishing policy frameworks to manage sustainability-related financial risks both at the systemic and at the company level [20].
This issue may also be further investigated in relation to the emergence of stranded assets. Stranded assets can be defined as "assets that have suffered unanticipated or premature write-downs, devaluations, or conversions to liabilities" [44], and may become particularly relevant in the context of the transition to a low-carbon economy [45,46]. Policy and technology shifts that may follow the identification of sectors and activities that contribute to reaching a "sustainable" society (in these terms also relevant in the context of defining sustainable finance) are expected to be the main triggers of stranded assets [47,48]. In this respect, measures to encourage the low-carbon transition may take several forms, such as ad-hoc fiscal incentives [49], public support schemes for investments [50], or stricter environmental regulatory requirements [51]. Even though the occurrence of stranded assets principally pertains to the possible paths towards sustainability more than to the field of finance, financial flows directed via sustainable finance still have the potential to accelerate the necessary adjustments and ignite financial risks [52,53].

Differences between Jurisdictions in Labels and Operational Standards

Financial markets are nowadays widely globalised, and financial securities are easily tradable worldwide within minutes. On the contrary, the initiatives to steer or even mainstream sustainable finance have thus far had a national or regional scope, without significant efforts made at the international level to harmonise the different approaches. When it comes to labelling standards, there is today the concrete risk that securities carrying a similar label (e.g., green bonds) may not actually meet the same requirements, in particular in terms of the use of proceeds. As a concrete example, the Chinese Green Bond Endorsed Project Catalogue accepts as eligible activities the retrofits of fossil fuel power stations, clean coal and coal efficiency improvements, or rail lines that transport fossil fuels. These activities are normally not present in similar taxonomies developed in Europe. Such heterogeneity may indeed jeopardise the interest of investors and create uncertainty in a globalised financial market. Policy-wise, two main axes of action may be envisaged to limit the incidence of this issue. On the one side, initiatives at the international level may be launched in order to steer the harmonisation of financial labels and other operational standards. This could be done within existing fora such as the Network for Greening the Financial System (NGFS). On the other side, reliable, synthetic, and easily accessible information on the differences between the various standards should be made available to market investors (e.g., at the level of stock exchanges) in order to allow proper decision-making at the moment of buying labelled sustainable finance. As a matter of fact, the need for an effective transnational governance regime for sustainable finance can no longer be postponed. Indeed, it should follow the same trends observed for global wicked issues such as climate change [54][55][56].

Concluding Remarks

The path towards the achievement of a sustainable society and a climate-neutral economy encompasses different disciplines. Effective regulation, technological improvements, scientific research, and changes in consumption patterns have for many years been considered the main engines of the transition.
However, finance has recently arisen as an essential enabling factor, capable of having a concrete impact on the feasibility and the speed of the changeover. In this context, the notion of sustainable finance has emerged to catalyse the financial efforts of policy makers, the financial industry, and civil society in reaching sustainability. This notwithstanding, I have noted that the sustainable finance landscape as it stands today still suffers from a certain degree of conceptual uncertainty. This is mainly due to the overabundance of frameworks, definitions, and labelling standards that have been created over time and have contributed to dispersing conceptual thinking on the matter. This heterogeneity may indeed dilute policy and industry efforts to streamline and mainstream sustainable finance, by triggering specific financial and policy risks. Table 2 summarizes these risks and the consequent possible mitigating policy actions, as discussed throughout the paper.

Table 2. Risks linked to defining sustainable finance, possible negative effects, and possible policy actions to mitigate the risks.

Risk: Rebranding without additionality
Possible negative effects: • Lessening of investors' confidence in the market and consequent sub-optimal resource allocation to "sustainable" sectors • Dilution of policy action in case of over-reliance on labelled sustainable finance to reach sustainability-related policy objectives
Possible policy actions to mitigate the risk: • Disclose additionality at the corporate level • Publish national or regional-level statistics on investments in "sustainable" sectors or activities (eventually on the basis of a taxonomy)

Risk: Greenwashing and sustainable washing
Possible negative effects: • Lessening of investors' confidence in the market and consequent sub-optimal resource allocation to "sustainable" sectors
Possible policy actions to mitigate the risk: • Identify sectors and activities eligible for "sustainable" financing (e.g., via a taxonomy) • Define clear labelling standards • Identify disclosure standards for labelled securities

Risk: Disordered adjustment in cost of capital spreads
Possible negative effects: • Sharp increase in the cost of equity and debt for "non-sustainable" industries • Acceleration in economic and financial losses for "non-sustainable" industries
Possible policy actions to mitigate the risk: • Perform ex-ante and ex-post impact assessments of measures aiming at supporting sustainable finance • Introduce "transition" labels for financial securities aiming at financing the reconversion of polluting industries • Put in place specific policy frameworks to manage sustainability-related financial risks (at the systemic and company level)

Risk: Differences between jurisdictions in labels and operational standards
Possible negative effects: • Lessening of investors' confidence in the market and consequent sub-optimal resource allocation to "sustainable" sectors
Possible policy actions to mitigate the risk: • Promote international initiatives aiming at harmonising financial labels and other operational standards in sustainable finance • Provide investors with reliable, synthetic, and easily accessible information on the differences between jurisdictions in labels and operational standards (eventually at the level of stock exchanges)

This paper represents a concrete step in the direction of fostering a deeper understanding of what sustainable finance means in the actual industry and policy context, and of the related main policy and financial risks as the market develops.
In this respect, I argue that bringing sustainable finance back to the notion of "finance for sustainability" is an effective way to reflect the complexity of the market as it stands today and to steer both theoretical conceptualisation and industry practice. In this vein, a definition of sustainable finance should be framed in a way that clearly refers to the debate on the identification of the relevant sustainability dimensions and of the sectors or activities that effectively contribute to these dimensions. Labels and taxonomies should be created along this pattern with the aim of ensuring clarity for market investors. However, further academic research and discussion seem to be needed concerning both the definition of sustainable finance and the related implementing standards. For this reason, a call for further work has to be made in order to consolidate a way of thinking or to propose alternative avenues of reasoning.
2021-05-11T00:03:33.489Z
2021-01-19T00:00:00.000
{ "year": 2021, "sha1": "db1a2abfed71a33a62d1d6f3958cb8a3a3a54ab7", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2071-1050/13/2/975/pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "2a1eac167b1323b2fa842a2cc38ac928e321069a", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Business" ] }
119314693
pes2o/s2orc
v3-fos-license
The Bolzano mean-value theorem and partial differential equations

We study the existence of solutions to abstract equations of the form $0 = Au + F(u)$, $u\in K\subset E$, where $A$ is an abstract differential operator acting in a Banach space $E$, $K$ is a closed convex set of constraints which is invariant with respect to the resolvents of $A$, and the perturbations are subject to various tangency conditions. Such problems are closely related to the so-called Poincar\'e-Miranda theorem, the multi-dimensional counterpart of the celebrated Bolzano intermediate value theorem. In fact our main results can and should be regarded as infinite-dimensional variants of the Bolzano and Poincar\'e-Miranda theorems. Along with single-valued problems we deal with set-valued ones, yielding the existence of the so-called constrained equilibria of set-valued maps. The abstract results are applied to show the existence of (strong) steady state solutions to some weakly coupled systems of drift reaction-diffusion equations or differential inclusions of this type. In particular we obtain the existence of strong solutions to the Dirichlet, Neumann and periodic boundary value problems for elliptic partial differential inclusions in the presence of state constraints of various types. Certain aspects of the Bernstein theory for boundary value problems for second-order ODEs are studied, too. No assumptions concerning structural coupling (monotonicity, cooperativity) are made.

Introduction

The purpose of this paper is twofold. On the one hand, motivated by some concrete applications (see subsection 2.1), we want to establish appropriate topological tools to study the existence of solutions to systems of N partial differential equations or N-dimensional partial differential inclusions subject to various boundary conditions and under state constraints. The presence of such constraints is justified and explained below. This is closely related to the method of the so-called 'moving rectangles' (see e.g. [50]) and the corresponding techniques used for the study of the long time behavior of evolution systems. We, however, leave aside questions concerning the existence, stability and invariance of solutions of parabolic evolution equations; in this paper we confine ourselves rather to elliptic equations and their solutions, i.e. steady state or stationary solutions of the related evolution problems. Nevertheless the 'evolution' origin of the studied steady state problems is of great importance. On the other hand, the proposed topological methods are closely related to problems of the existence of constrained equilibria or fixed points of abstract single- or set-valued maps, having their origins in the Bolzano mean-value theorem (see subsection 2.2). This celebrated result is perhaps the most important topological device when studying one-dimensional equations of the form $f(x) = 0$. This fact was extensively used and generalized by numerous authors over almost 150 years (see [39] and [12]) and various important results were established. One of the best known statements in this direction is the Poincaré-Miranda theorem, which is a direct N-dimensional version of the Bolzano theorem. We develop infinite-dimensional counterparts of the Poincaré-Miranda theorem, show their relation with different branches of research concerning e.g. viability theory for differential inclusions and, finally, apply them in the context of constrained PDE. The notation used throughout the paper is standard.
In particular x · y is the scalar product of x, y ∈ R N and |x| = √ x · x stands for the norm of x. The use of function spaces (L p , Sobolev etc.), linear (unbounded in general) operators in Banach spaces, C 0 semigroups is standard. In the paper, for the sake of generality, we deal mostly with set-valued maps (the terminology in set-valued analysis taken after [4]: the symbol ⊸ denotes a set-valued map with at least closed values). It is however important to observe that results we propose are, to the best of our knowledge, new in the single-valued case, too. The paper is organized as follows: in Section 2 we discuss origins of problems and motivations of main assumptions; in Section 3 we establish main abstract results, while Section 4 is devoted to applications. Section 3 concludes with subsection 3.2 and a discussion of invariance issues playing an important role in the paper. The motivation 2.1. Drift reaction-diffusion equations. When dealing with an evolving in time multicomponent active continuous substance, whose components interact via certain reaction mechanism, such as e.g. predator-prey, activator-inhibitor, competition, reaction kinetics etc., and they all diffuse with different (in general) diffusive constants and are subject to drift or advection, i.e. a passive transfer caused by, for instance, the moving ambient media, such as gas or fluid, then the adequate model is provided by the so-called systems of drift reaction-diffusion equations (see e.g. [42]). Such systems in general are of the form Our interest is mainly focused on ecological or chemical systems, where u i (x, t) is the concentration at x ∈ Ω and time t ∈ [0, T ] of the i-th reactant, i = 1, ..., N , contained in a bounded stirred up vessel (or reactor) Ω. Clearly the initial state u(·, 0) 0 on Ω and the natural expectation is that u i (x, t) 0 since the concentration cannot be negative. On the other hand there is a threshold value R i > 0 beyond which the i-th component is saturated or the model is not adequate. In a similar manner the implicit threshold value of concentrations may follow from mass conservation: the total mass of reactants, say R, must be constant. Therefore it makes sense to look for solutions u( In general equations of the form (2.1) should be therefore considered under the presence of state constraints: In what follows we admit also discontinuous nonlinearities f (or g). This appears when, for instance, system data are determined by measurements or a subject to phase transition phenomena and is motivated by numerous applications of systems with hysteresis (see e.g. [52], [11]). The typical situation concerns (2.1) with N = M = 1 and is of the form where H is the hysteresis operator -see [54], [34]. In the simplest case H is driven by the Heaviside function and maybe described via the related Nemytskii operator: given a threshold value α > 0, an input function u : For some other instances of the problemsee [8], [16] and numerous examples in [10]. The common way to overcome this obstacle is to replace the discontinuous f , or g in (2.2), by an appropriate set-valued regularization F or G (introduced e.g. by Fillipov or Krasovski -see [27,Sect. 2.7] or [3, p. 101]) and instead of (2.1) consider a problem subject to initial and boundary conditions, where F : is an upper semicontinuous set-valued map with compact convex values. 2.2. Zeros of set-valued maps. 
In the present paper we shall deal with the existence of steady state (stationary solutions) of state constrained autonomous problems related to (2.3) or (2.4). This leads to the second objective of the present paper. Since 1941, when Kakutani showed that every upper semicontinuous set-valued self map ϕ of the closed ball D in R n admitting closed convex values has a fixed point, a lot of attention has been paid to the different aspects of the fixed point theory for set-valued maps (see e.g. [29]). In one direction the development has led to substantial weakening in the assumption that the values of the mapping are subsets of its domain. The idea is well-illustrated by the classical (single-valued) mean value theorem of Bolzano. This important observation has been generalized by Poincaré in 1883 in his famous conjecture proved by Miranda [43]. .., n, denote the k-th face of C. Let f = (f 1 , ..., f n ) : C → R n be continuous and suppose that for all k = 1, ..., n Quite a complicated history of this and other similar results is well-described by Mawhin [39] and [12] (see also [40], [41]). In the spirit of the above we have (see [41], [48]) then f has a zero. In order to understand the nature of assumptions of these results we need to recall the following definition. Let E be a Banach space, K ⊂ E be a closed set and x ∈ K. The contingent (or Bouligand) cone T K (x) is defined by and the Clarke tangent cone is defined by where d K (u) := inf y∈K y − u . T K (x) and C K (x) are closed cones; additionally C K (x) is convex. Observe that if x belongs to the interior of K, then T K (x) = E. For examples and a detailed discussion see [4]. (2) If K = D(0, r) is a closed ball in a Hilbert space, then T K (x) = {v ∈ E | x, v 0} for x in the boundary of K. In the apparently independent stream of research, the best known equilibrium result is the following pioneering result of Browder [13] (with some modification due to Halpern and Bergman [31,32]) being, in the opinion of Aubin and Cellina (see [3, p. 213, Chapter 5.2] and the discussion therein), 'one of the most powerful theorems of nonlinear analysis'. Theorem 2.5. Assume that K ⊂ E is compact convex and ϕ : K ⊸ E is upper semicontinuous with closed convex values. If ϕ satisfies the weak tangency condition with respect to K, i.e. , then ϕ has a fixed point. In view of Remark 2.4 it is evident that Theorem 2.5 provides a far reaching generalization of Theorems 2.2 and 2.3. Remark 2.6. It is easy to see that if K is convex and 0 ∈ K, then T K (x) ∈ x + T K (x) for all x ∈ K. Hence in Theorem 2.3 (and in 2. Similarly in Theorem 2.5 if 0 ∈ K, then (2.8) implies (2.9) and ϕ has fixed points. Two drawbacks of this result has to be pointed out. In order to get a decent tool to study existence of equilibria one needs to get rid of compactness and convexity in Theorem 2.5. The best known result in this direction is due to Deimling -see [22,Th.11.5], [24]. Theorem 2.7. Let K be a closed bounded convex subset of a Banach space E and let an upper semicontinuous map ϕ : K ⊸ E with compact convex values be condensing with respect to the Kuratowski or Hausdorff measure of noncompactness. If ϕ is weakly inward, then ϕ has a fixed point. It is interesting to observe that in Deimling's theorem there is no way to replace inwardness by outwardness condition although it was possible in Theorem 2.5. 
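For orientation, the two cones used above admit standard analytic descriptions, and the Poincaré-Miranda boundary conditions have a commonly used explicit form; the displays below record these textbook versions only as a reminder, with the sign convention being one of the two equivalent choices.

```latex
% Standard descriptions of the Bouligand (contingent) and Clarke tangent cones
% of a closed set K in a Banach space E at a point x in K (cf. [4]):
T_K(x) \;=\; \Big\{ v \in E \ \Big|\ \liminf_{h \to 0^+} \frac{d_K(x + h v)}{h} = 0 \Big\},
\qquad
C_K(x) \;=\; \Big\{ v \in E \ \Big|\ \lim_{\substack{h \to 0^+ \\ K \ni y \to x}} \frac{d_K(y + h v)}{h} = 0 \Big\}.

% One common formulation of the Poincar\'e-Miranda conditions on the cube
% C = [a_1, b_1] \times \cdots \times [a_n, b_n]: if, for every k = 1, \dots, n,
f_k(x) \le 0 \ \text{ whenever } x \in C,\ x_k = a_k,
\qquad
f_k(x) \ge 0 \ \text{ whenever } x \in C,\ x_k = b_k,
% then the continuous map f = (f_1, \dots, f_n) has a zero in C.
```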
In order to discuss a nonconvex version of Theorem 2.5 one needs to understand which property of a set is a suitable substitute for convexity and what should be a suitable counterpart of tangency. This problem was addressed in [7] and discussed in [35]. After [7] we say that a closed K ⊂ E is an L-retract if there is ε > 0, a continuous r : Therefore K is an L-retract whenever K is a neighborhood retract in E with retraction r such that distance of x ∈ U from r(x) ∈ K may be controlled by the distance d K (x). The class of L-retracts is large. Closed convex sets (in this case one can define r on E with L = 1 + ε, where ε > 0 is arbitrary), compact sets being bi-Lipschitz homeomorphic with closed convex sets, the so-called proximate retracts, Lipschitz retracts and epi-Lipschitz sets (in the sense of Rockafellar [46]) are L-retracts. Remark 2.8. (1) If X is a topological space of finite type, i.e., such that the (singular with rational coefficients) cohomology groups H k (X; Q), k 0, groups are finitely generated and vanish above some dimension, then the Euler characteristic χ(X) : (2) If X is a neighborhood retract in E and f : X → X is compact, then f is a Lefschetz map, i.e. the homomorphism H * (f ) is a Leray endomorphism of H * (X, Q) and the generalized Lefschetz number Λ(f ) of f is well-defined -see [25, ]. If f is homotopic to the identity I X on X, then H * (f ) = H * (I X ) is the identity H * (X). This implies that I X is a Lefschetz map; hence H * (X) is of finite type and the Euler characteristic χ(X) is well-defined. Moreover, in this case Λ(f ) is equal to the ordinary Lefschetz number λ(f ) = λ(I X ) = χ(X) (for details concerning these notions see also e.g. [14]). In particular if χ(X) = 0, then f has a fixed point. In view of the above if K is a compact L-retract, then its Euler characteristic χ(K) is welldefined. Note that if K is additionally convex, then χ(K) = 1. After [7] (see also [19]) we have the following result. Theorem 2.9. Let K ⊂ E be a compact L-retract with χ(K) = 0. If ϕ : K ⊸ E is upper semicontinuous with closed convex values and weakly tangent to K in the sense of Clarke, i.e. then ϕ has an equilibrium. Note that in condition (2.10) the Bouligand cone has been replaced by the Clarke cone; there are examples showing that (2.8) is not sufficient (see [35]); however if ϕ = f is a single-valued map, then (2.8) implies (2.10). It is also evident that the weak inwardness in the sense of Clarke cone implies the existence of fixed points. There is no direct generalization of the equilibrium problem from Theorem 2.9 in the noncompact setting, although there were some partial answers have been discussed in [19] and [20], since we have the following example showing a compact tangent map without zeros. Then f is continuous, it has neither zeros nor fixed points and f (x) = 1/ √ 2 for all x ∈ D 1 . Let D be the unit closed ball in E and r : D → D 1 the radial retraction. Define g : D → D by One can see that g is well-defined, continuous and g(x) = −x whenever x ∈ ∂D; an easy argument yields g(x) = 0 for every x ∈ D. Finally, define κ : E → E by Clearly κ is an injective compact linear map. Thus G := κ • g : D → D is compact and For examples, further generalizations and a deeper discussion of issues surveyed above the reader can see [35]. The main aim of the present paper is to show a result in this direction with applications to constrained steady state problems related to (2.3) or (2.4). Existence results 3.1. The setting and results. 
In order to study the existence of steady states of autonomous problem (2.3) we shall take an appropriate appropriate abstract setting and consider the following coincidence problem where: (1) Let us recall the so-called Lions construction, see [21], [2]. Let V be a reflexive Banach space which is dense in a (real) Hilbert space H and suppose that the identity V → H is continuous. Suppose a bilinear continuous form a : V × V → R is such that This defines a linear A : D(A) → H. According to e.g. [49,Prop. 4 The situation described in this example is very typical in various applications. ( It is well know that (see e.g. [10, prop. 2.3]) that Φ, being H-upper semicontinuous with weakly compact values, is upper semicontinuous when E is endowed with 2 On D(A) the graph norm · A is considered: the weak topology. This, in turn, implies, that given sequences (x n ) ⊂ K 0 and y n ∈ Φ(x n ), if x n → x 0 ∈ K 0 , there is a subsequence y n k such that y n k ⇀ y 0 ∈ Φ(x 0 ) (⇀ denotes the weak convergence). Obviously if Φ is upper semicontinuous with closed compact values, then (A 4 ) is satisfied, too. (3) Let h > 0 and hω < 1, then h −1 ∈ ρ(A) and the resolvent It is worth to note that in the situation described in part (1) Let us recall a version of Lemma 17 from [6]; for the reader's convenience we give an independent proof. Proof: Take ε > 0 and u ∈ K 0 . By ( Hence, there is α(u) > 0 such that By the H-upper semicontinuity choose a number γ(u), 0 < γ(u) < ε/4 such that Φ(B 0 (u, 2γ(u)) ∩ K 0 ) ⊂ B(Φ(u), ε/2) and a number 0 < δ(u) < min{γ(u)/C, γ(u)/α(u)}, where C := j . Let {λ s } s∈S be a locally finite locally Lipschitzian partition of unity refining the open cover For any s ∈ S, we define a map f s : . 3 The graph of f is contained in the ε-neighborhood of the graph of is the unit open ball in E, for any x ∈ K0; in particular f is bounded. Here and below we write B(x, r), x ∈ E, to denote a ball in E and B0(x, r), x ∈ E0, to denote a ball in E0. It is clear that f s , s ∈ S, is Lipschitz continuous. Now we define f : K 0 → E by the formula Observe that f is locally Lipschitz because so are all functions λ s , f s for s ∈ S, and the covering Therefore, for any s ∈ S(u), Hence, by convexity of Φ(u s 0 ) + εB, where f comes from Lemma 3.2. Proof: Choose u ∈ K 0 and ε > 0. Taking into account (3.3), (2.7) and the continuity of f there Thus, for such v and h we have Now we are ready to prove Theorem 3.4. In addition to (A 1 ) -(A 5 ) above, let us assume that K is bounded and for all sufficiently small h > 0: Then there is u ∈ K 0 ∩ D(A) such that 0 ∈ Au + Φ(u). Proof: Choose ε > 0 and f = f ε according to Lemma 3.2. Denote by r : is continuous and compact, since K and Φ (and so does f ) are bounded and J h is compact. Then, by the Schauder fixed point theorem, for large n 1 (precisely for n > ω), there is u n in K 0 such that so u n ∈ D(A) and As a result, we have Hence {Au n } n 1 is bounded in E. Fix h > 0 with hω < 1 and note that Since {j(u n ) − hAu n } n 1 is bounded in E, the above equality yields {u n } n 1 is relatively compact in E 0 . Passing to a subsequence if necessary, we can assume that u n → u ε in E 0 and u ε ∈ K 0 . In view of (3.5) and Lemma 3.3 we have Hence Au n → −f (u ε ) in E. The closedness of A yields u ε ∈ D(A) ∩ K 0 and −Au ε = f (u ε ). Arguing as above, we may assume without loss of generality that u ε → u 0 ∈ K 0 as ε → 0. ∈ Gr(Φ) and u ′ ε → u 0 ; in view of Remark 3.1 (2) we gather that, after passing to a subsequence if necessary, v ′ ε ⇀ v 0 ∈ Φ(u 0 ). 
Since v ε ⇀ v 0 , too, and the graph of A, being closed and convex, is also weakly closed, we see that u 0 ∈ D(A) and −Au 0 = v 0 ∈ Φ(u 0 ). Now we are going to establish a counterpart of Theorem 3.4 valid for L-retracts. In this case the choice of E 0 is immaterial since we shall assume that Φ is defined on K. In addition Let us assume that: Thus h is continuous and provides a homotopy joining the identity to the compact map J h . As a consequence if χ(K) = 0, then any compact map g : K → K homotopic to the identity has fixed points. (2) If A is given as in Remark 3.1 (with E = H), then assumption (B 2 ) is satisfied. Moreover, in this case (B 4 ) holds for a convex K if and only if K is semigroup invariant, i.e. S(t)K ⊂ K for any t 0. Indeed if K is resolvent invariant then by the Post-Widder formula (see [26,Corollary 5.5,5.6]) for each x ∈ K and t > 0 Conversely, if K is semigroup invariant, then by [26, Th. 1.10], for any h > 0 with hω < 1 and x ∈ K, (3.7) First we need a result, which may be of interest on its own, similar to Lemma 3.2. Lemma 3.7. Suppose X ⊂ E is closed and that Φ : X ⊸ E is H-upper semicontinuous with convex values. Let a function ξ : X × E → R be such that for each z ∈ E, ξ(·, z) is upper semicontinuous (as a real function) and for each x ∈ X, ξ(x, ·) is convex. If for all x ∈ X, inf z∈Φ(x) ξ(x, z) 0, then for any ε > 0 there exists a locally Lipschitz ε-graph-approximation f = f ε : Proof: For any z ∈ X choose 0 < δ z < ε such that Φ(B(z, δ z ) ∩ X) ⊂ Φ(z) + εB and let an open covering U of X be a star refinement of the covering {B(z, δ z ) ∩ X} z∈X of X. For each x ∈ X choose z x ∈ Φ(x) such that ξ(x, z x ) < ε. Given U ∈ U and x ∈ U let where z s := z xs . Then f is well-defined and locally Lipschitz. Take x ∈ X and let S(x) := {s ∈ S | λ s (x) = 0}. If s ∈ S(x), then x ∈ V s , i.e. ξ(x, z s ) < ε. By convexity of ξ(x, ·) we gather that ξ(x, f (x)) < ε. Since x s ∈ U s and U is a star refinement of {B(z, δ z ) ∩ X} we get that for all s ∈ S(x), x, x s belong to the star of x with respect to U: x, x s ∈ {U ∈U|x∈U } U ⊂ B(z, δ z ) ∩ X for some z ∈ Z. Hence z ∈ B(x, ε) and for s ∈ S(x), z s ∈ Φ(x s ) ⊂ Φ(z) + εB. This together with the convexity of Φ(z) shows that Proof of Theorem 3.6: Let where ∂d K (u) ⊂ E * denotes the generalized Clarke gradient at u ∈ K of the (locally Lipschitz) function d K . It is clear that is the Clarke directional derivative of d K at u in the direction v. Then ξ : K × E → R is upper semicontinuous and, for each u ∈ K, ξ(u, ·) is convex. Observe now that Suppose now to the contrary that there are no solutions to (3.1). We claim that there is 0 < ε < ε 0 such that if u ∈ K ∩ D(A) and f is an ε-graph-approximation of Φ, then If not then there are sequences ε 0 > ε n → 0 + , u n ∈ K and an ε n -approximation f n : K → E of Φ such that Au n − f n (u n ) < (L + 1)ε n , n ∈ N. This implies that the sequence (Au n ) is bounded; hence by the same argument as in the proof of Theorem 3.4, we gather that, passing to a subsequence if necessary, u n → u 0 ∈ K. Since f n (u n ) ∈ Φ(B(u n , ε n )) + ε n B, we find u ′ n ∈ B(u n , ε n ) and v ′ n ∈ Φ(u ′ n ) such that f n (u n ) − v ′ n < ε n . By Remark 3.1 (2) we may assume that v ′ n ⇀ v 0 ∈ Φ(u 0 ). This implies that f n (u n ) ⇀ v 0 and, thus −Au n ⇀ v 0 , too. Hence v 0 = −Au 0 , i.e. 0 ∈ Au 0 + Φ(u 0 ): a contradiction. Now take ε > 0 provided above and, using Lemma 3.7, let f : K → E be an ε-graphapproximation of Φ such that ξ(u, f (u)) < ε for all u ∈ K. 
Take a decreasing sequence h n → 0 + with h 1 < λ. Since f is bounded the map provides a (continuous) homotopy joining the identity on K with g n . In view of Remark 3.5, g n (u n ) = u n for some u n ∈ K ∩ D(A). This means that . Similarly as before we may suppose that u n → u 0 ∈ K; therefore f (u n ) → f (u 0 ). By (3.8) and (3.9), for all n ∈ N, . Passing to lim sup and remembering that ξ(u 0 , f (u 0 )) < ε we get Lε, a contradiction. This completes the proof. Invariance and viability. A central role among assumptions of Theorems 3.4 and 3.6 is played by the resolvent invariance of the set K and the tangency condition. Let us consider conditions (B 1 ) -(B 4 ) and (3.7) and let K be an arbitrary closed subset of E. The Hille-Yosida Theorem implies that in this case A is the generator of a C 0 semigroup {S(t)} t 0 . It is not difficult to show that (B 4 ) and (3.7) imply that This condition implies that is equivalent to the following: the problemu ∈ Au + Φ(u), u(0) = x ∈ M has (3.11) a mild solution u : [0, +∞) → E takie, że u(t) ∈ M dla t 0, because the semigroup {S(t)} t 0 is immediately compact. This and related results are thoroughly discussed in [56] and [10]. We thus see that our conditions imply the invariance of K (sometimes called viability) with respect to the 'heat flow' generated by A, i.e. condition (3.11). Conversely condition (3.12) ∀ u ∈ K 0 ∈ T A K (u) implies the semigroup invariance and, in case of a convex K, resolvent invariance (B 4 ). The point is that, in concrete situations of differential problems, condition (3.12) needs to be verified. In most cases this can be done via an appropriate use of the maximum principles. In the next section we shall encounter examples of such arguments. The problem of invariance of systems of parabolic PDE was studied in numerous papers [1], [38], [50], [6], [10], [57] (and references therein), [58] (the so-called Müller conditions important in various applications). The most general, often necessary and sufficient, abstract results are presented in [56]. The invariance problem of parabolic problem from (3.11) will be studied in the forthcoming paper [36]. In particular we shall study the topological structure of the set of all viable (i.e. 'surviving' in K) solutions and show its relation with the existence of steady states, i.e. solutions to (3.1). Applications We now put a.e. on Ω}; Proof: It is straightforward to show that Φ(u) is weakly compact (we work in a Hilbert space, thus closed convex and bounded sets are convex weakly compact). Below we shall prove a slight generalization of Proposition 6.2 from [10]. It implies immediately that Φ is H-upper semicontinuous. Lemma 4.1. If ψ : Ω × R d → R N is upper semicontinuous with convex compact values and sup y∈ψ(x,u) |y| b(x) + a|u| for all u ∈ R d and a.a. x ∈ Ω, where b ∈ L 2 (Ω) and a > 0, then the Nemytskii operator is H-upper semicontinuous Proof: Suppose it is not the case: there are ε 0 > 0, a sequences u n → u 0 in E and v n ∈ Ψ (u n ) such that Up to a subsequence (u n ) n 1 converges a.e. on Ω to u 0 and there is h ∈ L 2 (Ω, R) such that |u n (x) | h (x) for a.e. x ∈ Ω and every n 0. By assumption for n 0 and a.e. x ∈ Ω. There is η > 0 such that for A ⊂ Ω with Lebesgue measure µ (A) < η For each n 0, the set-valued map H n := ψ (·, u n (·)) : Ω ⊸ R N is measurable and if w : Ω → R N is a measurable selection of H n , then w ∈ E since By the Egorov and Lusin theorems (see [5,Th. 
1] for a multivalued version of the Lusin theorem) there is a compact Ω η ⊂ Ω such that µ (Ω \ Ω η ) < η, u n → u 0 uniformly on Ω η , the restriction u 0 | Ωη : Ω η → R N is continuous and H 0 | Ωη : Ω η ⊸ R N is H-lower semicontinuous. Let δ := ε 0 / 2µ (Ω). We will show that there is n 0 such that if n n 0 and x ∈ Ω η , then Suppose to the contrary that there is a subsequence (n j ) j 1 and a sequence (x j ) j 1 in Ω η such that We can assume that x j → x 0 ∈ Ω η , since Ω η is compact. The continuity of u 0 | Ωη and the uniform convergence u n → u 0 on Ω η imply that u n j (x j ) → u 0 (x 0 ) and thus x j , u n j (x j ) → (x 0 , u 0 (x 0 )) as j → ∞. The upper semicontinuity of ϕ together with the H-lower semicontinuity of H 0 on Ω η show that H n j (x j ) ⊂ H 0 (x j ) + B R N (0, δ) for sufficiently large j, which contradicts (4.6). Let us fix n n 0 . For a.e. x ∈ Ω η we have Observe that the map Ω η ∋ x ⊸ B R N (v n (x) , δ) ∩ H 0 (x) is measurable and has nonempty values for a.e. x ∈ Ω η . By the Kuratowski-Ryll-Nardzewski theorem, there is a measurable selection v : Take an arbitrary selection w : Ω → R N of H 0 , i.e. w (x) ∈ H 0 (x) for a.e. x ∈ Ω. Let χ = χ Ωη be the indicator of Ω η . Notice that χv Recall that µ (Ω \ Ω η ) < η, hence and by (4 Thus, contrary to (4.3), v n ∈ Ψ (t 0 , u 0 ) + B L 2 (Ω,R N ) (0, ε 0 ) for infinitely many n 1. In order to get the weak tangency (A 5 ) fix u ∈ K 0 and define G, H : Ω ⊸ R N , by The map T C (·) : C ⊸ R N is lower semicontinuous (see [4,Th. 4 Proof: In view of [9,Cor. 7.49], C is an intersection of countably many closed half-spaces containing it, i.e. C = n 1 C n , where C n := x ∈ R N | p n · x a n for some p n ∈ R N and a n ∈ R. Thus, it is enough to show that J h (K n ) ⊂ K n , where for every n 1, since then Without loss of generality, we assume that C = x ∈ R N | p · x a for some p ∈ R N and a ∈ R and K = {u ∈ E | p · u(x) a for a.a x ∈ Ω}. Take f ∈ K and put u = J h (f ). By definition u ∈ D(A) and u − hAu = f. In view of Propositions 4.1 and 4.2 we get Similarly as before we put E = L 2 (Ω, R N ), K := {u ∈ E | u(x) ∈ C for a.a. x ∈ Ω}. Thus (B 1 ) is satisfied. Let us define a continuous bilinear form where ∇u · ∇v is the Frobenius product of derivatives ( 8 ) and Observe that for any v ∈ H 1 (Ω, R N ) and ε > 0, where d 0 = inf x∈Ω |d(x)|, in view of the so-called ε-Cauchy inequality. Taking 0 < ε < d 0 /2 we get for some positive constants c, C. Therefore for all v ∈ H 1 (Ω, R N ) 7 Note that G is bounded, i.e. sup y∈G(x,u) |y| < ∞. This implies that we are back in the situation of Remark 3.1 (1), (3) and putting where f corresponds to u as in the definition of D(A), well defines a closed densely defined linear operator satisfying assumption (B 2 ). Moreover A is the generator of a C 0 semigroup {S(t)} t 0 . The smoothness of the boundary ∂Ω and the standard regularity arguments imply that Now, for any u ∈ K, we put Following arguments from the proof of Proposition 4.1 we easily get that Φ has properties (B 3 ) and (3.7). In order to apply Theorem 3.6 we need Proof: To this end we need use the C 0 -semigroup structure. In view of Remark 3.5 (2) we need to show that K is semigroup invariant i.e. S(t)u 0 ∈ K for all t 0 and u 0 ∈ K. It is well known is the unique mild solution to the Cauchy initial value problem (4.10) u ′ = Au, u ∈ E, t > 0 u(0) = u 0 . Using a counterpart of estimate (4.9) valid forā we see that for By the Gronwall inequality we infer that w(t) L 2 = 0 for all t > 0, since w(0) = 0 in L 2 (Ω). 
It other words u(t) ∈ K for all t > 0. −Lu + Γu ∈ G(x, u) u(x) ∈ C a.e. on Q, Assume (D), (Γ) (with Ω = Q) and d ∈ C 1 p (Q) ( 9 ) and (P 1 ) G : Q × C ⊸ R n is upper semicontinuous with compact convex values; (P 2 ) G is weakly tangent to C, i.e. G(x, u) ∩ T C (x) = ∅ for all x ∈ Q and u ∈ C. Let us put ( 10 ): We show, exactly as in section 4.2, that conditions (B 1 ) -(B 4 ) are satisfied. Thus, Theorem 3.6 yields the existence of solutions to (4.4). 9 This symbol stands for the restrictions to Q of functions from C 1 (R M ) which are l-periodic in each direction. 10 By H k p (Q) we denote the Sobolev space of l-periodic functions on the M -dimensional domain Q (k positive integer); see [45,Chapter 5.10] for definitions and properties of H k p (Q). 4.3. Some remarks to the Bernstein theory. In a series of results in [30] authors presented a modern approach to the so-called Bernstein theory for boundary value problems for second order ordinary differential equations (see also [28], [25,II.7.4]; for a numerous research afterwards see e.g. [53] and bibliography therein). For the sake of completeness we formulate a model result [30,Theorem 1.7]. Then the problem To illustrate our approach we will stay on the level of an ordinary differential inclusion and study the Dirichlet problem (the Neumann and periodic problems may be studied analogously) for In order to apply Theorem 3.4 let us put E := L 2 ((0, T ), R N ), K := {u ∈ E | |u(x)| R a.e. on (0, T )}; Within this setting we see that conditions (A 1 ), (A 3 ) and (A 4 ) are satisfied. As concerns (A 2 ) note that for u ∈ D(A) and v ∈ V := H 1 0 ((0, T ), R N ) where a bilinear form a : V × V → R given by Thus for any v ∈ V a(v, v) + c v 2 where · H 1 0 is the 'short' norm in V . Hence we see that A is the generator of the C 0 semigroup of linear operators on E and conditions (A 2 ), (A 7 ) hold true, since the inclusion D(A) → E 0 is compact. Condition (A 6 ) may be shown as (B 4 ) in Proposition 4.3. Therefore we only need It is clear that X ⊂ (0, T ). If t ∈ X, then u(t) · u ′ (t) = 0 since the function (0, T ) ∋ s → |x(s)| 2 takes maximum at t. Therefore there is z ∈ ϕ(t, u(t), u ′ (t)) such that z · u(u) cR 2 = c|u(t)| 2 , i.e. (z − cu(t)) · u(t) 0. Hence and by Remark 2.4 (2) [ϕ(t, u(t), u ′ (t)) − cu(t)] ∩ T C (u(t)) = ∅. If t ∈ X, then T C (u(x)) = R N and so Arguing as in the last part of the proof of Proposition 4.1 we produce a measurable v ∈ Φ(u) such that v(t) ∈ T C (u(t)) for all t ∈ [0, T ]. Then again by [4,Cor. 8.5.2], v ∈ T K (j(u)). The reader will easily formulate analogous results for elliptic PDE or partial differential inclusions. For instance one can get the generalization of the classical concerning the existence of steady states of the heat equation u t − ∆u = g(u) subject to the Dirichlet boundary condition, where a continuous g is such that for some positive K, C one has ug(u) C|u| 2 for |u| K. Now we assume 11 Observe that if N = 1, then this means that u · f (t, u, 0) cM 2 . As in Proposition 4.1 we check that assumption (A 4 ) is also verified. Moreover (A 7 ) holds true since p > N and, thus, the inclusion D(A) → E 0 is compact. We will check that (A 5 ) and (A 6 ) are true. Condition (A 6 ) follows implicitly from [38,Th. 16]. Since we are in a special situation let us show a simple argument. Using (H 1 ) and and the density arguments we see that if v ∈ H 1 0 (Ω) and v 0 a.e. on Ω, then for any i = 1, .., N , Ω ∇v · ∇α i dx 0 and Ω ∇v · ∇β i 0. Remark 4.9. 
The existence of solutions to (4.15) may be established by a direct use of arguments employed in the proof of Theorem 3.4 since in this situation we can use some particular issues present in the problem. Given i = 1, ..., N and u ∈ W 1,p 0 (Ω, R N ) let On can show that π i : E := W 1,p 0 (Ω, R N ) → W 1,p (Ω) is well-defined and continuous. The map π := (π 1 , ..., π N ) : E → E is a retraction of E onto K ∩ E. Note that E 0 ֒→ E and let Ψ(u) := Φ(π(u)), u ∈ E 0 . Taking into account that 0 ∈ ρ(∆) we my consider the composition This composition is a compact (at large: i.e. the range of ξ is relatively compact) upper-semicontinuous map with compact convex values. By the Glicksberg-Fan theorem (the set-valued version of the Schauder fixed point principle) we gather that ξ has a fixed point u ∈ E 0 . Using (H 3 ) and (H 4 ) and the maximum principle one show that u is located in K and, therefore is a solution to (4.15).
2016-11-05T10:50:16.000Z
2016-11-05T00:00:00.000
{ "year": 2018, "sha1": "a21a3ebff65fa65aa35b756c2243d586bfa60574", "oa_license": null, "oa_url": "https://doi.org/10.1016/j.jmaa.2017.01.040", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "a21a3ebff65fa65aa35b756c2243d586bfa60574", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
239475365
pes2o/s2orc
v3-fos-license
Preparation and Characterization of Sludge-Based Magnetic Biochar by Pyrolysis for Methylene Blue Removal

The development of low-cost adsorbents is an urgent need in the field of wastewater treatment. In this study, sludge-based magnetic biochar (SMB) was prepared by pyrolysis of sewage sludge and backwashing iron mud without any chemical agents. The samples were characterized by TGA, XRD, ICP, organic element analysis, SEM, TEM, VSM and BET. Characterization analysis indicated that the magnetic substance in SMB was Fe3O4 and the saturation magnetization was 25.60 emu·g−1; after the adsorption experiment, SMB could be separated from the solution by a magnet. The batch adsorption experiments of methylene blue (MB) showed that the adsorption capacities of SMB at 298 K, 308 K and 318 K were 47.44 mg·g−1, 39.35 mg·g−1, and 25.85 mg·g−1, respectively. After one regeneration with hydrochloric acid, the maximum adsorption capacity of the product reached 296.52 mg·g−1. In addition, the adsorption kinetics were well described by the pseudo-second-order model, and the analysis revealed that intraparticle diffusion was not the only rate-controlling step in the adsorption process. This study gives a reasonable reference for the treatment of sewage sludge and backwashing iron mud. The product could be used as a low-cost adsorbent for MB removal.

Introduction

The pollution of water systems by various pollutants has attracted widespread attention for many years. The widespread use of dyes has made the pollution of water systems more serious; dyes are mainly discharged into wastewater from the dyestuff manufacturing, food, cosmetics, printing and leather manufacturing industries [1]. It is estimated that during manufacturing and processing about 15% of the dyes are lost to the wastewater [2]. In addition, more than 200 tons of dyes are released from the textile industry as wastewater every year [3]. Among them, MB is a cationic dye which is widely used for the coloring of textiles [4]. MB is fairly toxic, contains a large amount of organic compounds, has a high solubility and is difficult to degrade [5]. Many methods have been applied for removing dyes from wastewater. Some of these are flocculation, oxidation, electrolysis, biodegradation, ion exchange, photocatalysis and adsorption [6,7]. Adsorption is one of the most commonly used methods in dye wastewater remediation. Generally, adsorption technology does not leave behind by-products and is exceptionally effective for removing dyes from water [8,9]. Activated carbon is a widely used adsorbent, but due to its high manufacturing and recycling costs, its price is relatively high. Biochar could be used as an alternative adsorbent to activated carbon in some areas of wastewater treatment. Using biochar as an adsorbent for wastewater treatment can greatly reduce the cost of purchasing activated carbon. Thermal conversion provides a way to convert biomass into biochar. Thermal conversion methods mainly include pyrolysis, gasification and hydrothermal carbonization [10]. Among these methods, pyrolysis has a high efficiency, is reasonably cheap, and the produced biochar usually has high application value [11,12]. Therefore, this method is one of the common techniques for converting organic waste into biochar. In this study, magnetic biochar was prepared by one-step pyrolysis. In recent years, sewage sludge production has increased dramatically due to population growth and the improvement of the treatment capacity of sewage treatment plants.
In addition, sludge treatment has become one of the main problems facing wastewater treatment plants, as it requires considerable labor, energy, and money [13]. Traditional treatment methods include landfilling, incineration, paving, dumping at sea, and conversion into building materials, but these methods can easily cause secondary pollution [14]; how to transform sludge waste into useful substances therefore remains a difficult problem. Wastes such as crop residues and forestry residues have been considered for the preparation of biochar [15,16]. Converting sewage sludge into biochar can not only reduce the cost of biochar preparation but also provide a route for sludge disposal. In addition, the separation and recovery of adsorbents after adsorption is still a major challenge [17], and traditional separation usually requires centrifugation and filtration steps. These steps may lead to desorption, which in turn produces secondary contamination [18]. Magnetic composites can be handled by magnetic separation: they are attracted in the presence of a magnetic field and behave as ordinary materials in its absence [19]. Therefore, magnetization of biochar is an effective strategy to meet the challenges of separation and recycling [18]. Transition metals and their oxides are usually introduced into the biochar matrix to impart magnetism [20]. At present, most magnetic sources for biochar are iron-containing chemical agents [21][22][23]. The use of chemical agents increases the cost of preparing magnetic biochar, which is undesirable. Therefore, some researchers have focused on iron-containing wastes. For example, Yi et al. and Chen et al. used steel pickling wastewater as the magnetic source to prepare magnetic biochar [24,25], but such wastes usually originate from industrial production and carry the risk of containing toxic substances. The search for a cheap and non-toxic source of iron therefore remains a research hotspot. In the past twenty years, a large number of iron and manganese removal water plants have been established in Northeast China to treat groundwater with excessive iron and manganese content [26][27][28][29][30][31]. However, as these water plants operate, a large amount of iron mud is produced, and its subsequent treatment is costly [32,33]. This iron mud is cheap, easy to obtain, and contains no toxic substances [34]. This study proposes using this waterworks iron mud as the magnetic source, which avoids the risk of heavy metal pollution and provides a practical route for the final disposal of waterworks sludge. In addition, no chemicals were added in the preparation process, which reduced the cost of preparing the magnetic biochar. Therefore, the main purposes of this study are: (1) to use waterworks backwashing iron mud and sewage sludge to prepare an environmentally friendly and economical magnetic biochar by pyrolysis; (2) to characterize the properties of the obtained magnetic biochar by various methods; and (3) to evaluate the adsorption and regeneration properties of the adsorbent by testing the adsorption of MB on the product. Materials All chemicals used in this work were of analytical grade and were dissolved in deionized (DI) water. The backwashing iron mud was collected from groundwater treatment plants in the city of Harbin, Heilongjiang province of China.
Previous studies have shown that the waterworks sludge mainly contains Fe together with small amounts of Si, Ca, K, and Mn, which account for 89%, 4.3%, 4.0%, 2.4%, and 0.3% by mass, respectively [35]. The sewage sludge used in this study is from a sewage treatment plant in Beijing, and its physicochemical properties are as follows: the moisture content is 99.6%, no other metals are present at high concentration, the pH is 6.7, and the total carbohydrate content is about 2450 mg·L−1. A previous study [36] showed that the average contents of Cr, Cu, Ni, Pb, and Zn in sewage sludge from this plant are about 33.8 mg/L, 96.9 mg/L, 16.5 mg/L, 23.1 mg/L, and 733 mg/L, respectively, which are far below the sludge utilization standards of China, the European Union, and the United States [36]. Synthesis of Magnetic Biochar The synthesis procedure was as follows: the untreated iron mud and sewage sludge were mixed at a mass ratio of 1:5, with the required volume of each sludge calculated from its solid content. The suspension was stirred and sonicated for 10 min with an ultrasonic instrument to ensure thorough mixing. The mixture was then dried in an oven at 80 °C for 6 h. The dried product was milled and sieved. The sieved product was placed in a closed container and pyrolyzed in a muffle furnace at 600 °C for one hour. The sample was then washed several times with DI water and stored in a sealed container for later use (the preparation flow chart is shown in Figure S1). The above steps were repeated with sewage sludge alone to prepare non-magnetic biochar (BC). Characterization Surface crystallinity was analyzed to identify the sample's constituents using X-ray diffraction (XRD) (Bruker D8 Advance, Germany) with Co Kα radiation (λ = 1.79026 Å) over a 2θ range of 10–90°; the operating voltage, current, and scanning speed were 40 kV, 40 mA, and 6°·min−1, respectively. An X-ray photoelectron spectrometer (XPS) (Thermo escalab 250XI, USA) was used to determine the surface elemental composition. The surface characteristics and morphology of the sample were characterized by scanning electron microscopy (SEM) and transmission electron microscopy (TEM). The magnetic properties of the samples were measured using a magnetometer (Quantum Design, USA) with a VersaLab system. The total surface area was measured by N2 sorption on an ASAP 2460 analyzer and calculated using the Brunauer–Emmett–Teller (BET) method. The proportions of the various elements were determined by inductively coupled plasma emission spectrometry (ICP) and organic elemental analysis. In addition, thermogravimetric analysis (TGA) was performed. The point of zero charge (pHpzc) of SMB was determined by adding 0.025 g of sample to 25 mL of 0.1 M KCl solution at different initial pH values (2–12), and the final pH of each solution was measured 24 h later. Finally, a graph was drawn with the initial pH as the abscissa and the final pH as the ordinate. Adsorption Experiments First, the sorption kinetics of MB at a constant concentration (10 mg·L−1) onto SMB were examined to determine the contact time needed to reach adsorption equilibrium. Four hundred milligrams of magnetic biochar (0.5 g·L−1) was added to a polyethylene plastic bottle containing 800 mL of MB solution (initial pH 6.6 ± 0.1). The bottles were then shaken at 175 rpm and 25 °C in a thermostatic orbital shaker.
Adsorption kinetic data were obtained by sampling at different times within 0-48 h, and each sampling volume was 15 mL. At each sampling point, the bottle was withdrawn, and the mixtures were immediately filtered through 0.45 µm pore size nylon membrane filters, and the residual MB concentration in the obtained solution was determined using a visible spectrophotometer at the wavelength of 665 nm. The data obtained were then fitted with various kinetic models. The adsorption isotherm batch experiments were carried out in 200 mL conical flasks. In addition, each group included 50 mL MB solution (concentration between 1 mg·L −1 and 160 mg·L −1 ) and 25 mg adsorbent (initial pH 6.6 ± 0.1). The suspension was shaken in a thermostatic orbital shaker at a speed of 175 rpm for 36 h, this period has been determined by kinetic experiments previously to be sufficient to establish adsorption equilibrium. Then the mixtures were immediately filtered and determined by the same method. The adsorption isotherm experiments were carried out at 25 • C, 35 • C, and 45 • C, respectively. Isotherm data were simulated with various isotherm models. Reproducibility and Reusability In the regeneration experiment, SMB after adsorption was treated with 0.01 mol·L −1 , 0.05 mol·L −1 , 0.1 mol·L −1 , and 0.5 mol·L −1 hydrochloric acid respectively. The regeneration time was set at 9 h [34]. The regenerated product was washed several times, and the adsorption effect of the product on MB was detected respectively. According to the results, the optimal regeneration concentration was selected as 0.5 mol·L −1 . The adsorption kinetics and adsorption isotherm experiments were repeated for the SMB after one regeneration, and the experimental conditions are the same as Section 2.4 (only 25 • C was selected for isotherm experiment). The SMB was regenerated five times and the adsorption effect was determined respectively. Characterization of Adsorbents The adsorption rate is specific to the adsorbent and largely depends on the characteristics of the adsorbent [37]. In this study, the structural characteristics of the adsorbent were studied by detecting TGA, XPS, ICP, Organic element analysis, SEM, TEM, VSM, and BET. TGA analysis showed that both BC and SMB have the first weight loss about 100 • C, which may be caused by the evaporation of water (Shown in Figure 1). Although the sample just prepared was completely dry, it may have absorbed moisture in the air and thus contain a certain amount of water. In addition, the second weight loss occurred at about 450 • C, at which time the sample was thermally degraded. During the whole weight loss process, the mass of BC decreased by 5.79%, while the mass of SMB decreased by 4.65%. In addition, SMB was decomposed at a higher temperature, indicating that the thermal stability of the SMB was slightly higher than the original BC, which may be due to thermally inactive Fe 3 O 4 serving as a diluent [38]. Although two weight losses occurred, the weight loss ratios of the two samples were both small, indicating that the prepared samples have a certain degree of stability. In addition, the weight loss of magnetic biochar prepared by Wang et al. was nearly 40% in TGA analysis [38], while the weight loss of SMB in this study was only 4.56%, indicating that SMB has good stability [39]. The structure and phase purity of SMB were investigated by XRD ( Figure 2a). 
As can be seen, the diffraction peaks marked with round shape in Figure 2a are well indexed to (111), (220), (311), (400), and (440) planes of Fe 3 O 4 [34]. According to these strong and sharp diffraction peaks, it can be inferred that Fe 3 O 4 with good crystallinity had been prepared. In addition, biochar is one of the main components of SMB, but no specific diffraction peaks of carbon crystal structure were found, indicating that the carbon in SMB was amorphous carbon [34]. It is not negligible that there is a strong peak belonging to SiO 2 (square mark in Figure 2a) and some slight peaks belonging to other substances in the XRD pattern. This phenomenon may be caused by the fact that the iron mud and sewage sludge in the raw materials have not been purified and contain impurities. It is well-known that XPS is often used to verify the elemental composition of materials. Therefore, XPS was applied for component analysis, and the full scan spectrum is presented in Figure 2b, it can be seen that the main composition elements of SMB were C, N, O, and Fe. Besides, there were some broad and low intensity peaks belonging to impurities such as Na, Ca, Si, and Al. The mass fractions of impurities in the product were all less than 5% and the concentration of these impurities can be seen in Table 1. At the same time, the strong C peak and high surface concentration (At% = 55.31) indicated that the main component of the sample surface was carbon. Therefore, the binding energy peak of C1s was displayed in the high-resolution spectrum (Figure 2c) to explore the C-containing functional groups present on the surface of SMB. It could see that the C-C/C=C peak at 284.88 ev is the strongest, and there were also C=O and C-O peaks. In addition, the high-resolution spectrum of Fe was shown in Figure 2d. The peaks of Fe 3+ and Fe 2+ were not equivalent, indicating that there was not only Fe 3 O 4 in SMB but also FeOOH. FeOOH was the incompletely reacted part of the raw iron mud. The binding energy peaks of Fe was very weak, and the surface concentration was also low (At% = 4.68), indicating that the core of Fe 3 O 4 may be hidden in the carbon shell. Through ICP detection and organic element analysis, the element composition and proportion of BC and SMB were further analyzed ( Table 1). The results showed that both materials contained lower carbon content, which is very common in sludge-based materials [40]. Elemental analysis showed that the Fe content of the SMB was 3.6-times greater than that of the BC due to the addition of waterworks sludge. The high iron content in SMB demonstrates that Fe was successfully loaded on the biochar. The mass ratio of iron in BC was 3.94%, this may be caused by the introduction of Fe in the coagulation [14] or flocculation [41] process in the sewage treatment process. However, in the experiment, it was found that the BC could not be magnetically separated from the water, indicating that the Fe introduced in the water plant's sludge treatment process alone was not enough to make the material produce good magnetic properties. Besides, due to the dilution effects of the waterworks sludge addition, the contents of non-volatile elements such as Si, K, and Ca in the samples were reduced. Referring to the method used by the International Biochar Initiative (IBI), the carbon storage value (BC +100 ) was used to quantify the stability of biochar. The larger BC +100 is, the more stable biochar is. 
The core of this method is that when H/C mole ratio is less than or equal to 0.4, the residual percentage of organic carbon (BC +100 ) in biochar after 100 years of evolution is expected to be 70%. When H/C mole ratio is between 0.4 and 0.7, and BC +100 is 70%. According to a study by Huang et al., the H/C molar ratio of biochar prepared at 600 • C is between 0.13-0.76 [42]. After calculation, the H/C molar ratio of SMB was 0.42, indicating that the sample has good stability. The H/C molar ratio of BC was 0.83, and the carbon storage value of SMB was greater than that of BC, indicating that the addition of magnetic material improves the stability of BC. The details about the structure and morphology of the resulting composite material were examined in SEM and TEM images (Figure 3). Due to the nature of biochar itself, the surface of the material was rough and porous and had an irregular surface (Figure 3a,b). It could be seen that after Fe 3 O 4 was loaded on the biochar, the surface became more smoothly, and part of the Fe 3 O 4 particles were embedded in the biochar matrix. This indicated that a good mechanical bond was formed between the biochar matrix and the Fe 3 O 4 particles [43]. Considering the pyrolysis process under 600 • C for one hour, there might be a minor sintering effect which activates the binder function of Fe 3 O 4 to the biochar. Thus, the product had a certain degree of stability and would not separate under the impact of water flow. In order to further explore the internal structure and spatial distribution, the TEM was used for further characterization. Combined with XPS analysis, it was speculated that the outer gray parts were carbon shell, which covering the inner iron oxide (black part) (Figure 3f). In addition, SMB particle had irregular morphology. The nanometer measurement software was used to measure the diameter of the particles in the TEM images. The particle size of SMB ranged from 13 nm-1.45µm, indicating that the magnetic biochar produced by this work was a nano/colloidal composite material. Magnetic separation capacity is very important for the separation and recovery of adsorbents. The magnetic properties of SMB were measured by the VSM experiment. From the hysteresis loop of Figure 4, the saturation magnetization (Ms), the magnetic remanence (Mr), and the coercivity (Hc) values were 25.60 emu·g −1 , 6.10 emu·g −1 , and 205.20 Oe, respectively. The saturation magnetization was significantly lower than the ferromagnetism of Fe 3 O 4 (90 emu·g −1 ) [44]. The reason for this phenomenon may be the following two aspects: (1) XRD and TEM results showed that the structure of carbon shell coated on iron oxide core possibly was formed. In addition, after measurement, the thicknesses of the carbon shells were between 16 nm-200 nm (Figure 3f). The existence of the carbon layer reduced the magnetic properties of SMB; (2) The reaction material sewage sludge and iron mud were not purified, resulting in impurities in the SMB, and the presence of these impurities had an impact on the magnetic properties of SMB. Besides, the presence of impurities may also be the reason for the presence of Mr and Hc. According to the study by Ma et al., magnetic materials can be used for magnetic separation when the magnetization reaches 16.30 emu·g −1 [45]. Despite the magnetic properties of SMB was only 25.60 emu·g −1 , it could still be easily separated from the solution within 1 min with an ordinary magnet (Figure 4b,c). 
According to the IUPAC classification, the N2 adsorption–desorption isotherms of BC and SMB both belong to type IV isotherms with type H3 hysteresis loops (Figure 5a,c). In addition, the average pore size of the samples can be obtained from the BJH (Barrett, Joyner, and Halenda) pore size distribution charts (Figure 5b,d). The specific surface areas and average pore diameters are marked in Figure 5. The BET surface areas of BC and SMB were 70.64 m2·g−1 and 20.19 m2·g−1, respectively, and the average pore diameters were 9.72 nm and 13.01 nm, respectively, indicating that BC and SMB are both mesoporous materials. Kinetics Modeling Adsorption is a time-dependent process [41]. For the adsorption of MB on SMB, the process could be divided into a fast adsorption stage and a slow adsorption stage (Figure 6a). In the fast adsorption stage, removal reached 85.74% of the total removal efficiency; the adsorption rate then decreased rapidly until adsorption equilibrium was reached, with the whole adsorption process lasting 2160 min. The change in adsorption rate may be due to the fact that all adsorption sites were initially empty and the concentration of MB was high. The high concentration of MB produced a strong driving force, which caused MB molecules to approach the surface of the adsorbent, so the adsorption rate was very fast in the initial stage. As the adsorption progressed, the adsorption sites on the adsorbent became occupied and the dye concentration decreased, so the adsorption rate also decreased [46,47]. In order to evaluate the adsorption mechanism of the MB dye, the pseudo-first-order (PFO) model, q_t = q_e(1 − exp(−k_1·t)) (Equation (1)), and the pseudo-second-order (PSO) model, q_t = k_2·q_e^2·t/(1 + k_2·q_e·t) (Equation (2)), were used to fit the adsorption process, where q_t (mg/g) and q_e (mg/g) are the adsorption capacities at time t and at equilibrium, respectively, k_1 is the first-order rate constant, and k_2 is the second-order rate constant. The results are listed in Table 2 and show that both the PFO and PSO models describe the adsorption kinetic data well (r2 > 0.90). However, compared with the PFO model (r2 = 0.9454), the PSO model (r2 = 0.9784) was more suitable for describing the adsorption kinetics of MB on SMB. This suggests that the adsorption rate of MB on SMB was predominantly determined by chemical adsorption [48], which involves valence forces through the sharing or exchange of electrons between SMB and MB [49,50]. Dye adsorption from aqueous solution usually includes the following steps: diffusion through the solution to the outer surface of the adsorbent (film diffusion); adsorption on the outer surface of the adsorbent; diffusion from the outer surface into the interior of the adsorbent (intra-particle diffusion); and adsorption at the active centers of the inner surface of the adsorbent. The rate-controlling step is usually film diffusion or intra-particle diffusion [51]. In order to further explore the controlling step of the adsorption process, the intra-particle diffusion model, q_t = k_3·t^(1/2) + C (Equation (3)), was also used for fitting, where k_3 is the intra-particle diffusion rate constant and C (mg·g−1) reflects the thickness of the boundary layer. It can be seen from Figure 7b that the fitted curve did not pass through the origin, which means that intra-particle diffusion was not the rate-determining step of the adsorption of MB on SMB [52].
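For readers who wish to reproduce this type of kinetic analysis, the sketch below shows how q_t values derived from the residual concentrations can be fitted to the PFO, PSO, and intra-particle diffusion models of Equations (1)-(3) using Python with NumPy and SciPy. It is a minimal illustration only: the times, concentrations, adsorbent dose, and solution volume are hypothetical placeholders rather than the measured data of this study, and the small correction for the volume removed at each sampling point is neglected for simplicity.

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical example data (NOT the measured values of this study):
# residual MB concentrations C_t (mg/L) read at 665 nm at times t (min).
t = np.array([10, 30, 60, 120, 240, 480, 960, 2160], dtype=float)   # min
C_t = np.array([6.1, 4.9, 4.0, 3.3, 2.8, 2.5, 2.3, 2.2])            # mg/L

C0 = 10.0   # initial MB concentration, mg/L
V = 0.8     # solution volume, L
m = 0.4     # adsorbent mass, g

# Adsorption capacity at each sampling time: q_t = (C0 - C_t) * V / m  (mg/g)
q_t = (C0 - C_t) * V / m

# Nonlinear forms of the three kinetic models (Equations (1)-(3))
def pfo(t, qe, k1):   # pseudo-first-order
    return qe * (1.0 - np.exp(-k1 * t))

def pso(t, qe, k2):   # pseudo-second-order
    return (k2 * qe**2 * t) / (1.0 + k2 * qe * t)

def ipd(t, k3, C):    # intra-particle diffusion
    return k3 * np.sqrt(t) + C

for name, model, p0 in [("PFO", pfo, (q_t.max(), 1e-3)),
                        ("PSO", pso, (q_t.max(), 1e-4)),
                        ("IPD", ipd, (0.3, 1.0))]:
    popt, _ = curve_fit(model, t, q_t, p0=p0, maxfev=10000)
    residuals = q_t - model(t, *popt)
    r2 = 1.0 - np.sum(residuals**2) / np.sum((q_t - q_t.mean())**2)
    print(f"{name}: parameters = {np.round(popt, 4)}, r^2 = {r2:.4f}")

A non-zero fitted intercept C in the intra-particle diffusion model corresponds to a fitted line that does not pass through the origin, which is the criterion used above to conclude that intra-particle diffusion is not the sole rate-controlling step.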
Adsorption is a multi-step process involving adsorption on the outer surface and diffusion into the interior [53], and the entire MB adsorption process can be divided into three stages. MB molecules were first adsorbed onto the surface of SMB under molecular diffusion and film diffusion control; driven by intraparticle diffusion, they then entered the internal pore structure of SMB and finally reached the adsorption centers, gradually achieving adsorption equilibrium. In addition, as shown in Figure 6b, in the initial stage of adsorption the slope is the largest and the intercept is the smallest, indicating that adsorption in this stage occurred very quickly. At this time, many adsorption sites remained available on the adsorbent, and the high MB concentration produced a strong driving force, so the adsorption in this stage was completed quickly. In the second stage, the adsorption rate was slower. In the last stage, the slope was the smallest and the intercept was the largest; at this stage, the adsorption sites were gradually fully occupied by MB molecules and the reaction reached the final equilibrium stage [34]. The adsorption of TC on magnetic biochar studied by Lin et al. followed a similar process [41]. Isotherms Modeling The adsorption isotherm defines the relationship between the amount of adsorbate adsorbed per unit mass of adsorbent and the adsorbate concentration in the solution phase at constant temperature and at equilibrium [37]. As shown in Figure 7a–c, the sorption isotherm of MB onto SMB was L-shaped. The equilibrium adsorption capacity of MB on SMB increased with increasing initial MB concentration; similar findings have been reported in previous studies [34,50]. In order to find the most suitable adsorption model, the data were fitted to the Langmuir isotherm model, q_e = q_m·K_L·C_e/(1 + K_L·C_e) (Equation (4)), and the Freundlich isotherm model, q_e = K_F·C_e^(1/n) (Equation (5)). The Langmuir isotherm can also be expressed through the separation factor R_L = 1/(1 + K_L·C_0) (Equation (6)) [54]. Here q_e (mg·g−1) is the amount of MB adsorbed per unit mass of the adsorbent at equilibrium, q_m (mg·g−1) is the theoretical maximum adsorption capacity, C_e (mg·L−1) is the MB concentration in solution at equilibrium, K_L and K_F are the Langmuir and Freundlich constants, respectively, 1/n is the Freundlich exponent, and C_0 is the initial concentration. The parameters of these adsorption isotherms are presented in Table 2. Both the Langmuir and Freundlich models were applied to simulate the sorption isotherms (Figure 7a–c), and both reproduced the isotherm data well, with correlation coefficients (r2) above 0.90 (Table 2). The Langmuir model (0.9634 < r2 < 0.9749) provided a better description than the Freundlich model. The Langmuir model assumes monolayer adsorption onto a limited number of identical sites on a homogeneous surface, suggesting that chemical adsorption might play an important role in the adsorption of MB on SMB. At 298 K, the separation factor was R_L = 0.173 and the adsorption intensity was n = 0.208; n and R_L are measures of adsorption favorability. For this experiment, n < 1 and 0 < R_L < 1, indicating that the adsorption of MB on SMB is a chemical adsorption process and that SMB is capable of adsorbing MB [55][56][57]. The Freundlich adsorption isotherm is an empirical model that is suitable for heterogeneous mass transfer systems and multilayer adsorption [58].
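As a complementary sketch, the Langmuir and Freundlich models of Equations (4)-(6) can be fitted to equilibrium data (C_e, q_e) in the same way. The values below are hypothetical placeholders rather than the data behind Table 2; they only illustrate how q_m, K_L, K_F, 1/n, and the separation factor R_L would be extracted from such measurements.

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical equilibrium data (NOT the measured values of this study):
Ce = np.array([0.4, 1.5, 4.0, 12.0, 30.0, 60.0, 110.0])    # mg/L at equilibrium
qe = np.array([5.0, 12.0, 20.0, 30.0, 38.0, 42.0, 44.0])    # mg/g

def langmuir(Ce, qm, KL):      # Equation (4)
    return qm * KL * Ce / (1.0 + KL * Ce)

def freundlich(Ce, KF, n):     # Equation (5)
    return KF * Ce**(1.0 / n)

(qm, KL), _ = curve_fit(langmuir, Ce, qe, p0=(qe.max(), 0.1))
(KF, n), _ = curve_fit(freundlich, Ce, qe, p0=(1.0, 2.0))

C0 = 160.0                     # highest initial concentration tested, mg/L
RL = 1.0 / (1.0 + KL * C0)     # Equation (6), separation factor

print(f"Langmuir:   qm = {qm:.2f} mg/g, KL = {KL:.4f} L/mg, RL = {RL:.3f}")
print(f"Freundlich: KF = {KF:.2f}, 1/n = {1.0/n:.3f}")

Values of R_L between 0 and 1 indicate favorable adsorption, which is the interpretation applied to the fitted constants in the paragraph above.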
In this study, the Freundlich model also gave a good fit (0.9316 < r2 < 0.9666), suggesting that the adsorption of MB on SMB may also be closely related to physical interactions. The Freundlich parameter 1/n represents the adsorption characteristics; the values of 1/n in this study were all less than 1, indicating that the adsorption is favorable. The good fits of the two typical models (r2 generally greater than 0.95) indicate that both chemical and physical adsorption may have played a role in the adsorption process, with the adsorption of MB onto SMB mainly controlled by Langmuir-type surface adsorption. As shown in Table 2, the adsorption capacities of SMB at 25 °C, 35 °C, and 45 °C were 44.47 mg·g−1, 39.35 mg·g−1, and 25.85 mg·g−1, respectively (the maximum adsorption capacities were calculated with the Langmuir model). The adsorption of MB on SMB decreased with increasing temperature, indicating that the reaction was exothermic and that low temperature was conducive to adsorption. Effect of Initial Solution pH The pH of the solution is one of the most critical factors affecting the adsorption of pollutants in water [47]. The optimal pH for adsorbing pollutants varies with the type of pollutant and the properties of the adsorbent, and a pH that is too high or too low will impair adsorption. Therefore, the effect of the initial solution pH on the removal of MB was studied. As shown in Figure 8a, as the pH increased from 2 to 12, the removal of MB by SMB first decreased and then increased: removal was poorest at pH 4 and best at about pH 12. The MB removal efficiency changed relatively little over the pH range 4–8, rising only from 33.78% to 47.29%, but jumped sharply in the range 8–12, rising from 47.29% to 97.97%. The adsorption experiments at different pH values demonstrated that the effect of pH on MB adsorption is not negligible, which may be related to the properties of the MB molecule and the isoelectric point of the adsorbent; the research of Nhamo et al. also confirmed this [50]. According to the measurement, the isoelectric point of SMB was about 6.43 (Figure 8b). When pH < pHpzc, the surface of SMB was positively charged. As the pH increased from 3.93 to 6.43, the repulsive force between SMB and MB (a cationic dye) slowly decreased and the adsorption capacity slowly increased. When pH > pHpzc, the surface of the adsorbent was negatively charged and electrostatic attraction occurred between MB molecules and SMB. Therefore, when pH > 6.43, the removal efficiency of MB was significantly improved. With a further increase in pH, the electrostatic attraction became stronger and the ability of SMB to remove MB improved further. According to the experimental results, alkaline conditions are the most conducive to adsorption; other researchers have reported similar findings [59]. It is worth noting that the minimum adsorption in this experiment did not occur at the lowest pH: at pH 1.84, the adsorption efficiency was 45.65%, which was greater than that at pH 3.93.
This may be because the BET of SMB was significantly increased after strong acid treatment (described in detail in the regeneration experiment), which was conducive to the adsorption, it made the adsorption capacity a little increase. Reproducibility and Reusability In order to investigate the possibility of regeneration of used adsorbent, after the adsorption was completed, the desorption study was carried out using hydrochloric acid of different molar concentrations as the desorbent. Adsorption capacity of SMB after one regeneration of hydrochloric acid at different concentrations was shown in Figure 9f. The optimal regeneration concentration of 0.5 mol·L −1 was selected to regenerate the adsorbent. In addition, the adsorption after regeneration is shown in Figure 9a. It could be noticed that the adsorption efficiency had been improved after regeneration (shown in Figure 9a). In the first three regenerations, the adsorption efficiency was almost 100%, and after the fifth regeneration, the adsorption efficiency still reached 61.73%, which was significantly higher than the adsorption rate of SMB without regeneration (38.14%). It is suspected that SMB may be modified by hydrochloric acid treatment, so the adsorption isotherm experiment was carried out on the SMB after one regeneration to investigate its adsorption capacity. The results of the isotherm model are shown in Figure 9b, and the theoretical maximum adsorption capacity fitted by the Langmuir model reached 296.52 mg·g −1 . In addition, SEM ( Figure 9c) and BET (Figure 9d,e) were used for characterization. It is worth mentioning that the adsorption capacity of SMB after one regeneration to MB was significantly higher than that of sludge-based biochar previously reported. Previous studies have shown that exposure of biochar to an acidic solution can remove mineral elements, organic matter, and carbonate on the surface of biochar, and increase the number of micropore, which increases the roughness of the surface of biochar [60,61]. Comparing the SEM images of SMB ( Figure 3d) and SMB after one regeneration (Figure 9c), it could be noticed that the surface roughness of SMB after one regeneration increased and more micropore were clearly visible. This was also confirmed in the BET results (Figure 9d,e). Compared with SMB, the surface area of SMB after one regeneration had increased from 20.19 m 2 ·g −1 to 278.23 m 2 ·g −1 , the total pore volume had increased from 0.0819 cm 3 ·g −1 to 0.1126 cm 3 ·g −1 , and the average pore diameter had decreased from 13.01 nm to 6.77 nm. The reason for this result may be that the modification process removed or dissolved the ash in the SMB, opened blocked pores of SMB, thus exposed more micropore inside the SMB [62]. Wang et al. obtained similar results by modifying rice husk biochar with nitric acid [63]. The increased surface area provides more attachment points for MB, which also explains why the adsorption effect of SMB after one regeneration has been greatly improved. Conclusions In this study, pyrolysis method was used to prepare magnetic biochar using backwashing iron mud from waterworks and sewage sludge, no chemicals were added during the whole reaction process. 
The results of this study are as follows: (1) This magnetic biochar could be used to remove MB molecules in the solution with the maximum adsorption capacity as high as 47.44 mg·g −1 , and after 0.5 mol·L −1 hydrochloric acid modification, the adsorption capacity of the magnetic biochar for MB could be increased to 296.52 mg·g −1 ; (2) pH had a significant influence on the adsorption effect, SMB had the worst adsorption capacity when pH = 4, and the best adsorption capacity when pH = 12; (3) The magnetic material Fe 3 O 4 in the product was well combined with biochar, and the saturation magnetization was 25.60 emu·g −1 ; (4) The adsorption process was more consistent with the Langmuir adsorption isotherm model and the pseudo-second-order model could better describe the adsorption kinetics of the adsorbent; (5) Even after five regenerations, the removal efficiency of MB could still reach 61.73%. In brief, this study provides a method for the preparation of a low-cost and effective adsorbent for the treatment of dyeing wastewater. Conflicts of Interest: The authors declare no conflict of interest.
Carbon-based electrically conductive materials for bone repair and regeneration Electrically conductive polymers and carbon-based materials are emerging as promising biomaterials for bone tissue engineering solutions. Carbon-based conductive materials may be more suitable alternatives due to their ability to adsorb proteins, act as load-bearing materials, and accelerate bone regeneration and maturation through exogenous electrical stimulation. Furthermore, incorporating carbon-based conductive materials into bone tissue engineering scaffolds better mimics the natural structural and electrically conductive properties of the native bone. This review discusses the in vitro and in vivo performances of one-dimensional and two-dimensional carbon-based conductive materials and their applications as three-dimensional scaffolds for bone tissue engineering. Cellular processing mechanisms of carbon-based conductive materials are summarized to better understand the cellular uptake, degradation, and excretion of these conductive materials if they were to be delivered to the human body to treat bone defects. Both in vitro and in vivo models are discussed to provide insight into the role played by the carbon-based electrically conductive bone scaffold, which may lead to clinical translation. Introduction By 2025 there will be over 3 million cases of bone fractures in the United States that require clinical intervention, creating an increased medical system cost of $25 billion per year. 1 In Canada, the healthcare system already faces an overall yearly cost of $2.3 billion for the treatment of osteoporosis and osteoporosis-related fractures, 2,3 and as the aging segment of the population increases, an ever-increasing financial burden will be imposed on the healthcare system. Repair of bone fractures and reconstruction of critical-size bone defects that exceed the natural healing ability of the human body thus represent a significant challenge. 4 Bone defects are caused either by external factors or by deformation of existing bone, resulting in structural deterioration. 5 Current intervention strategies to treat bone defects involve the replacement of the damaged region with donor bone from autograft, allograft, or xenogeneic sources. However, the use of donor bone sources carries significant risks, including donor-site morbidity, hemorrhaging, and an elevated risk of disease transmission. 6 Of greater concern is their limited availability, with autografts already in short supply and therefore unable to meet the increasing demand of our aging population. For these reasons, synthetic bone graft substitutes have received significant attention. Synthetic materials were first prepared as bone graft substitutes designed to match the physical properties of natural bone while eliciting a minimal adverse response from the host. 7 Sustained research in this field has developed biomaterials that can create a favorable interface between the implanted material and the host tissue, promoting positive responses in the surrounding tissues within the body.
8 One of the most studied bone biomaterials is bioactive glass (BG), originally the 45S5 BG invented by Hench, 9 which has gained great attention due to its ability to bond to bone through the formation of hydroxyapatite layers on its surface within a physiological environment. However, processing synthetic bone graft substitutes into porous, complex scaffolds for load-bearing implantation sites can be challenging due to their brittle and stiff nature. In addition, patients treated with scaffolds made entirely of BG displayed limited anatomical and functional recovery, demonstrating the need for an alternative intervention solution. 10 The new solution would integrate natural or synthetic polymers to create a hybrid tissue-engineered bone that would ideally degrade at a rate similar to the formation of new tissue so as to maintain the integrity of the repaired region of bone, which can physiologically and mechanically adapt to the natural environment and local load within the body. 11 Although several materials such as organic polymers, inorganic phosphates, and organic–inorganic hybrids have been extensively studied for bone repair and regeneration, 12–15 new-generation biomaterials that provide additional functionality for bone scaffolds, such as conductivity, 16–18 fluorescence, and drug delivery, 19 are active areas of research being applied to bone tissue engineering solutions. The native bone possesses endogenous conductive properties, 20,21 and the incorporation of a conductive element into a bone biomaterial could better mimic the bone's natural electrical conductivity, providing significant advantages at a physiological level. 22 Carbon-based conductive materials have specifically been incorporated into polymer-based bone biomaterials as a reinforcement element and also as a component that can deliver electrical cues through the application of electrical stimulation for the maturation of osteoblasts and the promotion of the repair and regeneration of bone defects (Fig. 1). Fig. 1 Application of carbon-based conductive materials for bone repair and regeneration. Electrically conductive bone scaffolds are developed through the incorporation of carbon-based conductive materials into natural/synthetic polymers, creating a tissue-engineered bone that can be implanted into regions of bone defects. The aim of the implanted electrically conductive tissue-engineered bone is to degrade at a rate similar to the formation of new tissue to maintain the integrity of the repaired region while acting as a load-bearing element until the scaffold is completely remodeled and matches the original tissue's mechanical strength. Electrical stimulation can be delivered to the implanted conductive bone scaffold to promote cell proliferation, migration, and maturation of bone. The incorporation of electrically conductive materials that enhance tissue restoration is a promising field within bioengineering and can be applied across various contexts. This is demonstrated by recent articles that have reviewed the use of electrically conductive materials, including conductive polymers, 23 to regenerate cardiac, 24 muscle, 25 and nerve. 26
However, a focused review of electrically conductive materials that can influence bone formation and their potential for clinical translation is lacking. Therefore, this review will discuss the importance of incorporating carbon-based conductive materials into bone scaffolds. The preparation strategies for different types of carbon-based conductive materials, including zero-dimensional buckminsterfullerene (C60), one-dimensional carbon nanotubes (CNTs), and two-dimensional graphene-based sheets, will be presented. More specifically, we discuss the different applications of CNTs and graphene-based materials as they continue to be investigated for bone repair and regeneration. Lastly, the physiological responses of CNTs and graphene-based materials will be discussed with regard to their potential as bone substitutes. Overall, this review aims to provide a better understanding of how carbon-based conductive materials, specifically one- and two-dimensional carbon-based materials, could become promising candidates for bone tissue engineering solutions. Electrical conductivity of native bone The electrical conductivity of bone was discovered in the 1950s when Fukada and Yasuda observed that applying mechanical stress to bone in different directions generated electrical signals within the bone, producing an endogenous electric field that supported osteogenic cell proliferation. 20,21,27 Since then, it has been suggested that stress on the crystalline components of bone causes current to flow and triggers healing, and that electrical signals similar to those generated by mechanical stress can enhance fracture healing. 28,29 The endogenous electric field is generated by an applied mechanical load on the bone which creates strain gradients; these strain gradients produce pressure gradients, which in turn allow interstitial fluid to flow through small channels known as canaliculi within the bone structure. Fluid flows from areas of compression to areas of tension within the stressed bone, and as a result electrical potentials are generated (Fig. 2). 30 Electronegative potentials develop upon compression, producing bone formation, whereas electropositive potentials are produced when a bone is under tension, causing bone resorption. 30,31 Therefore, administration of exogenous electrical stimulation at the site of a bone defect can be applied as a way to mimic the electrical potentials normally generated in bone upon application of mechanical loads. Since the discovery of the electrically conductive properties of bone, various methods have been investigated clinically to deliver electrical stimulation in an attempt to aid bone healing, ranging from the treatment of non-unions, 32–34 bone fractures, 35,36 delayed unions, 37–39 and osteotomies, 40,41 to bone grafts with electrical stimulation 42–44 and the treatment of osteonecrosis. 45,46 Exogenous stimulation of bone healing can be delivered electrically through three main methods. The first approach is an invasive direct electrical current technique to stimulate bone, whereby one or multiple cathodes are implanted at the site of injury and an anode is implanted on soft tissue to permit current flow.
32,47,48 However, this technique carries a significant risk of infection and tissue reaction due to a lack of biocompatibility of the electrodes. Therefore, the two alternative noninvasive techniques, namely capacitive coupling and inductive coupling, have received significantly more attention for promoting bone healing. Capacitive coupling uses two electrodes that are placed onto the skin on either side of the bone defect. The electrodes generate an alternating electric field which is delivered to the damaged site. 49,50 However, the need for a high-voltage power source in this method is a major limitation, since the energy dissipated from the electric field decreases quickly. The third technique is inductive coupling. 52,53 In this method, one or two current-carrying coils are placed onto the skin, through which pulsed or sinusoidal electromagnetic fields are delivered that subsequently induce an electrical field in the damaged area. 54 Amongst these three techniques, inductive coupling best mimics the natural strain-generated potentials found in bone for its repair. 55,56 However, the principle behind all three techniques has made possible the development of medical devices that have received FDA approval 57,58 to stimulate bone electrically for its healing. 59,60 Although electrical activity at bone defects promotes accelerated healing, there are some drawbacks, as mentioned above, related to the use of electrodes or the amount of energy required to promote bone regeneration. However, incorporating a conductive material directly into a polymer scaffold could potentially eliminate the requirement for electrodes and could allow the effects of electrical stimulation on bone regeneration to be explored. Electrically conductive materials are thus emerging for bone tissue engineering due to the natural conductive properties of bone. Although conducting polymers such as polyaniline (PANI), polypyrrole (PPY), and polythiophene and their derivatives have also been used for bone tissue engineering applications, 16,61 carbon nanotubes (CNTs), graphene, and reduced graphene oxide (rGO) have received the greatest attention for application in bone tissue engineering solutions, as they possess different geometrical and morphological structures that can alter their physiological responses and thus enhance their potential to function and treat bone defects. Types of carbon-based conductive materials and their preparation strategies Carbon-based conductive materials are divided into zero-dimensional buckminsterfullerene (C60), one-dimensional carbon nanotubes (CNTs), and two-dimensional graphene sheets. Buckminsterfullerene (C60), also known as fullerene or buckyball, consists of hollow spheres typically composed of 60 carbon atoms formed from a layer of stacked sp2 hybridized carbon sheets arranged in hexagonal rings. 62 Buckyballs can exist in other forms and structures, such as ellipsoids or buckytubes, the latter also known as carbon nanotubes (CNTs). CNTs are made from single atoms of sp2 hybridized hexagonal carbon. A single atomic layer of a graphitic sheet can be rolled up into a single hollow cylinder, creating a structure commonly referred to as a single-walled CNT (SWCNT) with a typical diameter between 0.5 and 1.5 nm. 63 Alternatively, between 2 and 50 graphitic sheets can be rolled up into a coaxial tube with an outer diameter between 2 and 100 nm, forming multiwalled CNTs (MWCNTs).
63 A major feature of CNTs is that they possess unique mechanical, chemical, and electrical properties that result from their tubular shape and sp2 hybridized C–C bonds. 64 Lastly, two-dimensional graphene is composed of a two-dimensional monolayer sheet of carbon in which the carbon atoms are sp2 hybridized, containing σ bonds that create a lattice structure and conjugated π orbitals that form a delocalized electron network providing excellent conductive properties. 65,66 Carbon-based conductive materials can be produced predominantly based on a technique in which a gaseous carbon feedstock reacts in the presence of catalysts to form different shapes of carbon allotropes. 62,67,70,71 However, MWCNTs were first produced using arc discharge fullerene reactors, 72,73 and this approach was later applied to the synthesis of SWCNTs. 74 Currently, the most affordable and scalable technique to produce CNTs is chemical vapor deposition (CVD), during which a gaseous carbon precursor is thermally decomposed in the presence of metal catalysts and subsequently deposited inside nanostructured tubing. 75,76 The scalability of this approach ensures the supply of carbon-based materials and remains an attractive avenue for clinical-scale production. However, it is not the only method through which CNTs can be produced. CNTs and graphene can also be synthesized using either bottom-up or top-down approaches. 77 Bottom-up techniques include epitaxial growth, 78 pyrolysis, 79,80 and CVD, 81 and operate based on the principle of depositing gaseous precursors, typically derived from graphite, onto a substrate. Top-down approaches, in contrast, involve breaking down graphitic layers until graphene is obtained, and common techniques use exfoliation and reduction processes. 77,82,83 Another set of popular graphene-based materials includes graphene oxide (GO) and its reduced form, reduced graphene oxide (rGO). 85,86 The resulting GO has many oxygen-containing functional groups bound to sp3 carbons, thus containing both sp2 and sp3 hybridizations. 87 This change in hybridization reduces the electrical conductivity of GO by disrupting the conjugated structure, in turn blocking conductive connecting pathways between the sp2 domains. 87 GO can subsequently be reduced to rGO via many different processes, including thermal annealing, electrochemical reduction, or chemical reduction. The reduction process removes oxygen-containing functional groups, resulting in a higher conductivity than GO but a lower conductivity than pristine graphene due to the remaining oxygen groups. 88 The different preparation strategies for zero-dimensional fullerene, one-dimensional CNTs, and two-dimensional graphene sheets have allowed the development of various carbon-based conductive materials that can be incorporated into different biomaterial systems to fabricate bone tissue engineering scaffolds. Their applications in bone repair and regeneration bring significant advantages, leading to the design of novel biomaterials that overcome some of the drawbacks of current bone repair materials (Table 1). Physicochemical properties of carbon-based conductive materials An interesting feature of carbon-based conductive materials is their ability to strongly adsorb most organic compounds. 113 Incorporating carbon-based conductive materials into polymers or ceramics is thus beneficial in bone tissue engineering, as carbon-based biomaterials have highly delocalized π-bonds on their surfaces and can adsorb proteins.
114The addition of carbon-based conductive materials is an important factor in fabricating a tissue-engineered bone since the grafted scaffold View Article Online Table 1 Types of carbon-based conductive materials and their role in bone tissue engineering Type of carbonbased conductive material Biomaterial system Role in bone tissue engineering Fullerene Polyhydroxylated fullerene (fullerol) 89 Antioxidative capacity promotes osteogenic differentiation and mineralization 89 Polyethylene glycol (PEG)-functionalized C 60 fullerene derivative 90 Good biocompatibility and enhanced osteoblast proliferation 90 Aligned fullerene C 60 nanowhiskers 91 Good osteoblast adherence, aligned oriented cell growth, and low toxicity 91 GelMA-fullerol microspheres and bone marrow-derived mesenchymal stem cell (BMSC)-laden GelMA-fullerol microspheres 92 Antioxidant activity is able to quench intra-and extracellular reactive oxygen species (ROS), promotion of osteogenic stem cell differentiation in vitro and bone healing in rat calvarial defects via modulating the ROS microenvironment 92 CNTs CNT-hydroxyapatite (HA) based nanocomposites 93 Good biocompatibility 93 SWCNTs Functionalization of 3D-printed poly(propylene fumarate) (PPF) scaffolds with single-stranded deoxyribonucleic acid (ssDNA) bound CNTs 94 Improved cell adhesion, proliferation, and differentiation of preosteoblast cells enabling modulation of cell behavior through electrical stimulation 94 MWCNTs SWCNT gel scaffolds with nanofibrous architecture via the pairing of heparin functionalized nucleobases 95 Targeted drug delivery, increased mechanical properties, and improved osteogenic properties through the application of electrical stimulation 95 MWCNT compacts 96 Induction of osteogenic gene expression, increased protein adsorption and mineralization, and the influence of ectopic bone formation 96 MWCNT-COOH reinforced borosilicate BG scaffolds 97 Enhanced mechanical properties, bioactive behavior promoting hydroxyapatite formation, good cell viability, and osteogenic initiation 97 Chitosan-hydroxyapatite MWCNT nanocomposite films 98 Biocompatible, electrically conductive, and good mechanical properties 98 Polycaprolactone (PCL)/MWCNT scaffolds 99 Promotion of thick bone tissue formation in vivo, increased angiogenesis and mineralization of bone through electrical stimulation in vivo, and activation of osteoclastogenesis through electrical stimulation for bone remodeling 99 MWCNT reinforced polyvinyl alcohol/Biphasic calcium phosphate (PVA/BCP) scaffolds 100 Increased mechanical properties, high interconnectivity, and good biocompatibility 100 Bionic mineralized MWCNT scaffolds 101 Improved mechanical properties, enhanced cell growth in vitro and in vivo, increased osteogenic differentiation and promotion of bone defect repair in vivo 101 Graphene Hyaluronic acid-chitosan with simvastatin 102 Biocompatible and bioactive 3D scaffold with improved osteogenic properties 102 GO rGO coated collagen scaffolds 103 Enhanced mechanical properties, good biocompatibility, and proliferation of human bone marrow-derived mesenchymal stem cells (hBMSCs), and increased bone formation after implantation into cranial bone defects in an animal model 103 rGO Graphene hydrogel membrane 104 Guided bone tissue regeneration in a rat calvarial model, diffusion of proteins and nutrients, and promotion of early osteogenesis and mineralization to induce mature bone formation in vivo 104 Gelatin methacrylate, acryloyl-b-cyclodextrin, and b-cyclodextrin-functionalized rGO 
nanocomposite hydrogel patch 105 Improved mechanical strength, increased conductivity, good biocompatibility, promotion of cell proliferation and osteogenic differentiation, and enhanced in vivo bone defect repair in a rat skull model 105 Vascularized GO-collagen chamber model 106 Improved bone regeneration in vivo, osteoinductive properties and anti-fibrosis effects in an animal model, and improved angiogenic, mineralization and osteogenic differentiation of BMSCs 106 GO-modified silk fibroin/nanohydroxyapatite scaffold loaded with urine-derived stem cells (SCs) 107 Immunomodulation and promotion of bone regeneration in vivo, and enhanced mechanical properties 107 Polylactic acid (PLA)/GO nanocomposite 3D scaffolds 108 Enhanced mechanical properties, good biocompatibility and promotion of cell proliferation and mineralization 108 Graphene/hydroxyapatite nanoparticle composite hydrogels 109 Mechanically strong, electrically conductive, and biocompatible 109 Collagen-rGO coated scaffolds 110 Improved mechanical properties and enhanced osteogenic capability 110 rGO-coated titanium substrates 111 Promotion of the osteogenic differentiation of hMSCs and increased calcium phosphate deposition and osteogenic potential 111 3D-printed b-tricalcium phosphate (TCP)-based scaffolds filled with a freeze-dried gelatin/rGO-magnesium-arginine matrix 112 can interact with the host tissue through the protein absorption properties of the materials permitting osteogenic progenitor cells to adhere, implant, and begin to lay down their extracellular matrix (ECM). 8The establishment of a new ECM is the critical first step in scaffold remodeling and bone tissue regeneration.Adsorption of proteins onto CNTs and graphene surfaces is mediated by several different variables, including the geometry of carbon-based conductive materials and the formation of non-covalent interactions. 114,115The non-covalent interactions or physical adsorption of proteins with carbonbased conductive materials involves the presence of different binding selectivity which is governed by the formation of hydrophobic interactions, p-p stacking interactions (van der Waals forces and dispersion forces), electrostatic interactions, and H-p bonds. 114,116,117ydrophobic interactions occur due to the great affinity of the hydrophobic regions of cell-binding proteins with the hydrophobic carbon lattices present in the conductive material. 118Protein adsorption on CNTs and graphene surfaces strongly depends on the electron density and geometry of protein molecules. 119In the case of p-p stacking interactions, binding interactions occur when the aromatic side chains of amino acids are oriented parallel with the plane of carbonbased conductive materials at different charge states. 
115,120 Peptides possess different aromatic side chains resulting in different polarizability properties, which in turn influence the strength of binding with carbon-based conductive materials. In general, the higher the polarizability of the protein aromatic side chains, the greater the binding strength. 121–123 Another non-covalent interaction is electrostatic binding, which forms in the presence of different charges between cellular proteins and carbon-based conductive materials. Surface charges vary among carbon-based conductive materials depending on the type of product synthesized and the variation in preparation procedures. As an example, GO is a material with a surface rich in negatively charged oxygenated functional groups. The strong negative charge generated by these groups facilitates GO binding with proteins that have either negatively or positively charged surfaces, resulting in electrostatic interactions with various degrees of stability. 114 The importance of material surface structure and functionalization was demonstrated in a study by Chong et al., 124 in which the strength of protein interactions with various carbon-based conductive materials was assessed. 124 The study revealed that GO and rGO have a greater ability to adsorb proteins than SWCNTs because it is easier for proteins to bind onto the planar surfaces of graphene than onto the curved surfaces of CNTs. 124 Incorporation of a carbon-based conductive material into a bone tissue engineering scaffold could therefore be beneficial for bone defect treatments, since the ability to adsorb proteins allows osteoblasts to attach to the bone scaffold. This step appears to be crucial for the remodeling and regeneration of bone tissue, as demonstrated in the following studies. Taale et al. 125 developed bioactive carbon-based hybrid 3D scaffolds composed of either CNT-bioactive glass nanoparticles (BGN) or CNT-hydroxyapatite (HA) nanoparticles to assess their protein adsorption capacity using bovine serum albumin (BSA) as a model protein. 125 This study revealed that CNT-BGN scaffolds had a higher protein adsorption ability than CNT-HA scaffolds, plausibly due to an electrostatic interaction between the highly polar BSA and the BGN surface following the etching/sintering process used to remove a sacrificial ZnO template. 125 Similarly, Fu et al. 126 incorporated GO into poly(L-lactic-co-glycolic acid) (PLGA)/HA nanofiber scaffolds and observed that the addition of the carbon-based conductive material significantly increased protein adsorption (Fig. 3A). 126 PLGA/GO/HA nanofibrous matrices achieved the highest protein adsorption rate, nearly 1.46 and 1.25 times that of PLGA/HA and PLGA/GO nanofibrous matrices, respectively, since the addition of GO and HA improved the surface properties, resulting in higher specific surface areas. 126 Using materials that increase protein adsorption can therefore promote cell adhesion and proliferation of preosteoblasts, enhancing bone maturation and mineral deposition. 126 Du et al. 127 compared the osteogenic ability of MWCNTs and nanohydroxyapatite (nHA), the main inorganic component of bone, and showed that MWCNTs are more effective materials for the promotion of bone formation. 127 The results showed that MWCNT compacts possessed a higher ability to adsorb fetal bovine serum (FBS) proteins than nHA (Fig. 3B).
High protein adsorption ability had a positive effect in further in vitro studies, revealing that human adipose-derived mesenchymal stem cells cultured on MWCNT compacts possessed a higher cell attachment strength and proliferation than on nHA specimens. 127 In addition, MWCNTs could induce osteogenic differentiation better than nHA, since an increased protein concentration modulates the conformation of the adsorbed proteins, driving the differentiation of cells toward an osteoblastic lineage through the activation of Notch signaling pathways. 127 Translation of the in vitro results was further investigated in a rabbit model, where both MWCNT and nHA compacts were implanted in dorsal musculatures. The results showed that MWCNT compacts were able to induce ectopic bone formation while nHA did not (Fig. 3C), as a result of the increased ability of MWCNT compacts to adsorb proteins and drive the formation of new bone tissue. 127

Electrical properties of carbon-based conductive materials

Human cortical and cancellous bone have electrical conductivities of 0.02 S m−1 and 0.07 S m−1, respectively. 128 CNTs and graphene are two of the most attractive materials being used in scaffolds for bone repair and regeneration applications 97-99,103,105,107,111 as they possess very high electrical conductivities (on the order of 10^6-10^7 S m−1 for pure CNTs). A single-layered graphene sheet is arranged in a honeycomb grid of carbon atoms, each possessing four electrons in the outer shell; three of them are used for covalent bonds while the remaining electron is highly mobile, promoting electrical conduction. 130 Therefore, the electrical conductivity of bone scaffold composites is more appropriately explained in terms of the percolation threshold. The percolation threshold is related to the addition of a critical volume fraction of conductive filler within a hybrid material that results in the transition from an insulating state to a significant change in the overall electrical conductivity, caused by the formation of a continuous network of conductive particles within the insulating matrix. 131 The percolation behavior is described by an empirical model known as the scaling law, expressed as σ(φ) = σ₀(φ − φc)^t, where φ is the volume fraction of conductive filler, φc is the percolation threshold concentration, σ₀ is a scaling constant, and t is the critical exponent. 132 Below the critical volume fraction of the percolation threshold, the composite remains electrically insulating since the conductive particles are dispersed into small clusters. Above the critical volume fraction, however, the material no longer behaves as an insulator, and its conductivity increases by many orders of magnitude. 131 The aspect ratio of the conductive fillers determines the percolation threshold value. Graphene and CNTs have length-to-diameter aspect ratios of 0.01 and 100, respectively, which differ due to their different geometrical structures. 129 Therefore, hybrid bone biomaterials containing a low amount of carbon-based conductive material can show a significantly increased overall electrical conductivity, requiring only a small amount of filler to reach the percolation threshold. 129 The advantage of incorporating a carbon-based conductive material directly onto a polymer bone scaffold is that it could, in principle, eliminate the requirement for the electrodes that are normally used for the treatment of bone defects, since the scaffolds themselves possess electrically conductive properties (Fig. 4).
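To make the scaling law discussed above concrete, the short sketch below evaluates σ(φ) = σ₀(φ − φc)^t for a hypothetical carbon-filled polymer. The parameter values (σ₀, φc, t and the matrix conductivity) are illustrative assumptions chosen only to show the shape of the transition; they are not taken from any of the studies cited in this review.

```python
# Illustrative sketch of the percolation scaling law sigma(phi) = sigma0 * (phi - phi_c)**t.
# All parameter values below are assumptions for demonstration only.

def composite_conductivity(phi, sigma0=1.0e4, phi_c=0.001, t=2.0, sigma_matrix=1.0e-12):
    """Estimate the conductivity (S/m) of a polymer filled with a conductive
    carbon-based filler at volume fraction phi.

    Below the percolation threshold phi_c the composite stays essentially as
    insulating as the matrix; above it, the conductivity follows the empirical
    power law discussed in the text.
    """
    if phi <= phi_c:
        return sigma_matrix                   # isolated clusters: still an insulator
    return sigma0 * (phi - phi_c) ** t        # continuous conductive network

if __name__ == "__main__":
    # Sweep the filler volume fraction and print the predicted conductivity.
    for phi in [0.0, 0.0005, 0.001, 0.002, 0.005, 0.01, 0.05]:
        print(f"phi = {phi:.4f}  ->  sigma ~ {composite_conductivity(phi):.3e} S/m")
```

With these assumed inputs, the predicted conductivity jumps by many orders of magnitude once φ exceeds φc, which is the insulator-to-conductor transition described in the text; the higher the aspect ratio of the filler, the lower the loading at which this jump occurs.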
Cells are responsive to exogenous electric fields, and the application of different field strengths and current densities has been shown to promote key signaling pathways that accelerate osteogenesis and angiogenesis, the main processes for bone regeneration and remodeling. 43,133 In vitro studies have shown that the application of electrical stimulation through direct, capacitive and inductive coupling induces key molecular pathways involved in osteogenesis at different cellular locations, specifically through the calcium/calmodulin pathway, with only minor differences in the resulting cellular responses (Fig. 5A). 43,134 Direct and capacitively coupled stimulation exert their effects at the cell membrane, increasing the intracellular Ca2+ concentration and prostaglandin E2 synthesis through calcium translocation via voltage-gated calcium channels. 135 Moreover, inductively coupled stimulation through electromagnetic fields achieves its effects in the cytoplasm, where intracellular calcium is released from reservoirs such as the endoplasmic reticulum. 136 Application of these exogenous stimulations results in cellular responses that increase the calcium concentration, thus promoting activated calmodulin levels that drive osteoblast cell proliferation, as well as an increased expression of vascular endothelial growth factor (VEGF) and transforming growth factor (TGF)-β1. 134,137,138

Fig. 3 Protein adsorption ability of bone tissue engineering scaffolds containing GO and MWCNTs promotes bone formation. Different carbon-based conductive materials have been incorporated into bone scaffolds to test their protein adsorption efficiencies, which can subsequently drive bone differentiation and maturation. (A) Protein adsorption efficiencies of nanofibrous matrices composed of PLGA, GO, and HA were assessed after 24 h using BSA as a model protein. Materials containing GO displayed the highest level of protein adsorption. 126 (B) Protein adsorption in compacts composed of either MWCNTs or nHA in an FBS protein model at increasing time points. Compacts containing MWCNTs displayed a higher ability to adsorb proteins than those composed of nHA. 127 (C) In vivo, compacts containing MWCNTs implanted into the rabbit dorsal muscle pouch displayed higher levels of new bone and collagen formation, as evidenced by hematoxylin and eosin (H&E) and type I collagen staining, than the nHA compacts 127 (used with permission).

The addition of carbon-based conductive materials into bone tissue engineering scaffolds improves the bone regeneration rate achieved through electrical stimulation. 95,108,139-143 As demonstrated in a study by Liu et al., 144 an injectable conductive hydrogel, BP-CNTpega-gel, composed of CNT-poly(ethylene glycol)-acrylate (CNTpega) with black phosphorus (BP) was developed to support bone regeneration. 144 The inclusion of CNTs as the conductive component in the hydrogel gave BP-CNTpega-gel electrically conductive properties. The highest conductivity value reported was 0.008 S m−1 at a CNTpega concentration of 16 mg ml−1 and a BP nanosheet concentration of 0.8 mg ml−1. 144 Furthermore, in vitro studies showed that hydrogels with CNTpega effectively respond to exogenous electrical stimulation, resulting in increased cell proliferation and ALP activity, as well as an upregulation of osteogenic genes and bone mineralization markers (Fig. 5B-E). 144
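As a quick, purely illustrative comparison of the conductivity values quoted in this section, the snippet below relates the reported BP-CNTpega hydrogel conductivity to the native conductivities of cortical and cancellous bone; the numbers are taken directly from the text above and no additional data are assumed.

```python
# Compare reported conductivities (values quoted in the text, in S/m).
conductivities = {
    "cortical bone": 0.02,
    "cancellous bone": 0.07,
    "BP-CNTpega hydrogel (16 mg/ml CNTpega, 0.8 mg/ml BP)": 0.008,
}

reference = conductivities["cortical bone"]
for name, sigma in conductivities.items():
    print(f"{name}: {sigma:g} S/m ({sigma / reference:.2f}x cortical bone)")
```

Even at this modest filler loading the hydrogel sits within roughly a factor of three of cortical bone, consistent with the low percolation thresholds of high-aspect-ratio carbon fillers discussed earlier.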
In another study, 3D conductive scaffolds composed of polycaprolactone (PCL) and MWCNTs (0.75 wt% and 3 wt%) were developed by e Silva et al. 99 to treat large calvarial bone defects in rats. 99 Conductive scaffolds were produced through extrusion-based additive manufacturing and cut to fit the bone defect in animal skull models (Fig. 5F). 99 The authors applied non-invasive electrical stimulation to the grafted region for 5 min at 10 mA intensity twice a week for a 60 or 120 day period. Prolonged stimulation at this frequency was considered appropriate for future clinical trials in potential long-term treatment patients. 99 Histomorphometry results from the study showed thicker tissue formation in the treatment groups that contained scaffolds than in the untreated groups. Furthermore, the groups treated with PCL scaffolds containing 3 wt% MWCNTs that additionally underwent electrical stimulation showed elevated connective and denser bone tissue formation (Fig. 5G). 99 Therefore, incorporating MWCNTs into PCL scaffolds and applying electrical stimulation significantly promoted angiogenesis and mineralized bone tissue formation. 99 Similar findings have also been reported for 3D printed PCL/graphene scaffolds in a rat calvarial bone defect model. 145 New bone tissue formation was most effective with scaffolds containing graphene and electrical stimulation in vivo, leading to organized tissue deposition and bone remodeling. 145 Collectively, these studies demonstrate the translational potential of clinical approaches that combine conductive materials in bone scaffolds with electrical stimulation for accelerated repair and regeneration of bone defects.

Fig. 4 Carbon-based conductive materials incorporated into bone tissue engineering scaffolds possess electrically conductive properties, eliminating electrodes in bone healing treatments. Carbon-based conductive materials have been incorporated into bone scaffolds and tested for their electrical properties. Electrical conductivities of CNTs and CNTs with water-soluble single-stranded deoxyribonucleic acid (ssDNA) (ssDNA@CNT complex) were tested both in solution at a concentration of 0.5 mg ml−1 (A) and in the solid state in the form of pellets (B). 94

Mechanical properties of carbon-based conductive materials

The mechanical properties of carbon-based conductive materials can greatly influence their application and function in different types of biomaterials. CNTs and pristine graphene have different mechanical properties (Table 2) due to their varied morphological and geometrical structures, but both possess excellent mechanical strength, approximately 100 times greater than that of steel, with an extremely low density (1.3-2.0 g cm−3) compared to metals or ceramics (>2.0 g cm−3). 146 The high mechanical and tensile strength of CNTs sets them apart from other carbon-based materials. Both CNTs and pristine graphene possess mechanical strengths greater than those of GO and rGO. GO and rGO monolayers possess Young's moduli of 207.6 ± 23.4 GPa and 250 ± 150 GPa, respectively. 147,148 The lower strength of GO and rGO is predominantly due to the chemical processes used in their production, which decrease the stability originating from the sp2 bonds that form the hexagonal lattice. 149
Mechanical strength is considered one of the most crucial properties in the preparation of a scaffold for bone tissue engineering applications. It is imperative that an implanted scaffold is initially strong enough to withstand the load that the bone tissue would have carried, with this load-bearing role gradually decreasing as the scaffold is remodeled by new tissue that eventually takes over the load. Therefore, it is important to know the mechanical properties of different types of bone (Table 2) in order to establish what concentration of the carbon-based conductive material is necessary to design a bone substitute that can support the natural mechanical strength of the defective bone region prior to its regeneration. Carbon-based conductive materials can be used by themselves, but given their overall higher mechanical properties, stress shielding would likely cause bone resorption. Instead, they are incorporated into polymers as secondary structural reinforcing agents to increase the mechanical properties of two- and three-dimensional polymeric bone scaffolds. 162,163 In a study by Lu et al., 104 a multilayered graphene hydrogel (MGH) membrane was developed to investigate whether the biomaterial possessed the ability to guide bone tissue regeneration in a rat calvarial model. 104 Within the regenerating region, diffusion of proteins and nutrients took place through the selective permeability of the MGH membranes, promoting early osteogenesis and mineralization, which resulted in the formation of a mature bone structure surrounded by external and internal cortical bone after eight weeks of implantation. 104 Although MGH membranes are very flexible, they maintain a mechanical strength similar to that of a rat braincase, with a tensile modulus of 69 ± 5 MPa. 104
The mechanical strength of a biomaterial can be modulated by altering the proportion of carbon-based material in the final composition. This principle was demonstrated in a study by Belaid et al., 108 which investigated biocompatible polylactic acid (PLA)-based scaffolds produced by 3D printing and the effect of incorporating different concentrations of GO (0.1, 0.2, and 0.3 wt%) as a reinforcement element for bone healing applications. 108 Pure PLA scaffolds presented a Young's modulus of 2 GPa, but this was significantly increased to 2.6 GPa upon the addition of 0.3 wt% GO (Fig. 6A). 108 In addition, materials containing 0.3 wt% GO presented the highest tensile strength, with a value of 39 MPa, whereas pure PLA presented the lowest tensile strength of 34 MPa (Fig. 6B). 108 However, GO concentrations of 0.1 and 0.2 wt% showed a decreased tensile strength compared to 0.3 wt% GO, since a lower filler concentration induced flaws at a local scale, resulting in a weaker material. 108 Therefore, at higher GO concentrations, the filler is intrinsically stronger than PLA, resulting in a stronger material. 108 In another study, 95 a CNT gel scaffold containing functionalized nucleobase pairing was developed for targeted drug delivery and in vitro osteogenesis. 95 The conductive gel scaffold was prepared by functionalizing heparin (HP) with adenine (HP-A) and thymine (HP-T), which were subsequently grafted to aminated CNTs, forming CNT-HP-A and CNT-HP-T precursors. 95 The mixture of these precursors resulted in a nucleobase-paired CNT gel network. Dynamic time sweep rheological tests were performed in order to investigate the gel network evolution during the gelation process at 37 °C. The results showed that the CNT-HP-A/CNT-HP-T mixture formed a dynamic network within less than 5 min, as shown by the crossover of the storage modulus (G′) and loss modulus (G″) (Fig. 6C). 95 In addition, after 14 min a structurally stable network was formed for both the CNT and control (HP-A/HP-T) gels, reaching a G′ value of around 100 Pa, which could translate into gels capable of maintaining their 3D shape for therapeutic drug loading purposes. 95 The structural integrity of the scaffold gels was also evaluated through compressive tests, and results were compared before and after loading with bone morphogenetic protein 2 (BMP-2) as a potential osteogenic drug loading model. 95 The compressive modulus of scaffold gels containing CNTs was significantly higher (256 kPa) than that of HP-A/HP-T gels (83 kPa). 95 Loading of the gels with 50 ng ml−1 BMP-2 did not affect their overall compressive strength, and CNT-containing scaffolds presented a final modulus value of 264 kPa (Fig. 6D). 95 Furthermore, Bahrami et al. 103 prepared rGO-coated collagen (Col-rGO) scaffolds by chemical crosslinking and freeze-drying methods to assess their mechanical strength for implantation into rabbit cranial bone defects. 103 Compressive tests were performed on collagen and Col-rGO scaffolds to evaluate the elastic modulus of the scaffolds. 103 Col-rGO scaffolds showed an elastic modulus of 325 ± 18 kPa, whereas pure collagen scaffolds presented a modulus of 115 ± 16 kPa, which is not sufficient for rabbit cranial bone structural support. 103 The addition of the rGO coating on collagen scaffolds not only increased the mechanical strength of the material but also enhanced cell viability and proliferation, which translated into increased in vivo bone formation after 12 weeks of implantation into rabbit cranial bone defects. 103
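The stiffening reported by Belaid et al. for low GO loadings can be rationalized with standard micromechanical models. The sketch below uses the Halpin-Tsai equations, which are not the analysis performed in the cited study; the assumed effective GO modulus, platelet aspect ratio, and densities are rough, hypothetical inputs chosen only to illustrate why a fraction of a weight percent of a stiff, high-aspect-ratio filler can measurably raise the modulus of PLA.

```python
# Minimal Halpin-Tsai sketch for a platelet-reinforced polymer.
# Assumed values (NOT from the cited study): filler modulus, aspect ratio, densities.

def weight_to_volume_fraction(w_f, rho_f=1.8, rho_m=1.25):
    """Convert filler weight fraction to volume fraction (densities in g/cm^3)."""
    return (w_f / rho_f) / (w_f / rho_f + (1.0 - w_f) / rho_m)

def halpin_tsai_modulus(E_m, E_f, v_f, aspect_ratio=500.0):
    """Estimate the composite Young's modulus (same units as E_m and E_f)."""
    zeta = 2.0 * aspect_ratio                      # shape factor for thin platelets
    eta = (E_f / E_m - 1.0) / (E_f / E_m + zeta)
    return E_m * (1.0 + zeta * eta * v_f) / (1.0 - eta * v_f)

if __name__ == "__main__":
    E_pla = 2.0        # GPa, modulus reported for pure PLA in the text
    E_go = 200.0       # GPa, assumed effective GO modulus (illustrative)
    for wt_percent in (0.1, 0.2, 0.3):
        v_f = weight_to_volume_fraction(wt_percent / 100.0)
        E_c = halpin_tsai_modulus(E_pla, E_go, v_f)
        print(f"{wt_percent:.1f} wt% GO -> V_f = {v_f:.4f}, E ~ {E_c:.2f} GPa")
```

With these assumptions the model predicts a stiffening of roughly 10-20% at 0.3 wt% GO, of the same order as the increase from 2 to 2.6 GPa reported above; the point is only that very low loadings of a high-modulus platelet filler can produce a disproportionate gain in stiffness.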
Cellular processing mechanisms of carbon-based conductive materials

Ideally, bone scaffolds should degrade at a rate similar to the formation of new tissue to maintain the integrity of the repaired region of bone, which can then physiologically and mechanically adapt to the natural environment and local load within the body. 8,11 Although carbon-based conductive materials can be degraded through enzymatic oxidation using horseradish peroxidase, or hydrolytically through lipases, 164,165 the complete degradation and fate of CNTs and graphene-based materials in the body are still relatively unknown. However, in vitro and in vivo studies have shown that a variety of cell types, such as macrophages, 166-168 endothelial cells, 169 pulmonary epithelia, 170,171 intestinal epithelia 172 and neuronal cells, 173 can degrade and take up carbon-based conductive materials. Therefore, understanding how carbon-based conductive materials in bone scaffolds are processed and degraded by the specialized cell types they will interact with will be important for establishing safety in clinical translation. 174-176 However, cellular internalization is still reliant on either passive or active transport pathways present at the cell membrane. Passive diffusion transport is a non-energy-dependent process in which carbon-based conductive materials land on the surface of cell membranes and penetrate the phospholipid bilayer, resulting in subsequent transport into the cytoplasm. 176,177 On the other hand, the active pathways are energy-dependent processes and mainly occur through endocytic mechanisms that control the internalization of foreign objects from the cell membrane into cytoplasmic organelles called lysosomes, which can break down the extracellular material. 176,178 Although temperature and metabolic inhibitors potentially influence endocytosis, no single factor has been identified to date that governs the method or rate at which carbon-based conductive materials are internalized into cells. 176,179 Lacerda et al. 180 investigated the uptake mechanism of functionalized MWCNTs (f-MWCNTs) in the presence of cell uptake inhibitors at different temperatures and concluded that there was no single mechanism responsible for the transportation of CNTs into cells, since 30-50% of f-MWCNTs were internalized into the cells through an energy-independent pathway, while the remaining f-MWCNTs entered the cells through endocytosis. 180 The diameter of the conductive nanomaterial appears to be a critical factor in determining degradation post-internalization. Several studies have reported that small agglomerates of carbon-based conductive materials are more easily degraded in macrophages through lysosomal and endosomal activity, 175,181,182 in contrast to larger agglomerates of carbon-based conductive materials that are expelled from the cell through exocytosis. 183,184 Once internalized, materials on the nanoscale can migrate to other subcellular organelles. 185,186 This internalization of materials into cells and subsequently through the subcellular compartments was first demonstrated by exposure of human monocyte-derived macrophages to SWCNTs ranging from 0.6-3.5 nm in diameter. 185,186 In these studies, SWCNTs were observed to be localized solely within lysosomes two days after exposure; however, after four days, they were observed to have crossed the nuclear membrane, as nanoparticles smaller than 40 nm can enter through the nuclear pore complex. In addition to the diameter, the configuration of carbon-based materials can also influence how they are processed cellularly.
Work by Mu et al. 187 showed that single MWCNT-COOH and MWCNT-NH2 (20-30 nm diameter and ~1000 nm average length) were transported into human embryonic kidney epithelial cells (HEK293) through direct passive diffusion, whereas bundled MWCNTs entered cells through endocytosis. 187 The bundled MWCNTs were also subsequently processed and could release single MWCNTs capable of endosomal escape and release into the cytoplasm. Furthermore, those released single MWCNTs of shorter length were also capable of achieving nuclear translocation. 187 Internalization, degradation, and externalization of carbon-based conductive materials need to be further investigated in osteoblasts and in vivo bone defects in order to understand their impact on bone tissue engineering applications and targeted bone drug delivery. In addition, migration of carbon-based conductive materials from bone scaffolds could occur as the scaffold is being remodeled, and it is imperative to investigate whether their translocation causes any toxic or adverse effects. The processing techniques used to synthesize carbon-based conductive materials influence the physicochemical properties of the material, causing different potential toxicological interactions. 188 Of key concern within the field is the potential of a carbon-based conductive material to generate reactive oxygen species (ROS), which can cause subcellular damage to organelles and processes, ultimately resulting in cell toxicity. 189 Catalytic metal impurities left over from the material processing are suggested to be one of the main reasons why ROS are formed. 190 The metal catalysts used during synthesis can remain attached to carbon-based conductive materials and can subsequently influence intracellular calcium concentrations, activate transcription factors, and modulate cytokine production via the generation of free radicals, creating ROS and thus inducing acute toxicity. 191 However, carbon-based conductive materials can undergo various treatments to achieve higher purification and reduce the metal particles that induce ROS formation. Treatments include selective chemical oxidation and dissolution of metallic impurities, or physical purifications that separate impurities according to their physical sizes and aspect ratios. 192 In addition to impurities, the sizes of carbon-based conductive materials can influence their immunological effects in cells. Yoon et al. 193 showed that smaller graphene nanoflakes (30.9 ± 5.4 nm) have higher uptake, affecting cell membrane function and thus inducing apoptosis, compared to larger graphene nanoflakes (80.9 ± 5.5 nm), which were shown to be less toxic given that they mostly aggregated on the cell membrane. 193 The length and diameter of CNTs have also been shown to impact toxicity: shorter CNTs (sub-1 μm) can easily penetrate cell membranes and internalize, thereby accumulating in cell lysosomes, 194 whereas longer CNTs (>8 μm in length and <1.25 μm in diameter) are not engulfed by cell membranes and degraded, causing acute inflammation and increasing the production of ROS and cytokines, thus exerting more significant biological effects. 195 However, Zhang et al. 196 showed that larger CNTs were taken up by macrophages and that the rope-like structures of CNTs had similar properties to spherical nanoparticles, where cytotoxicity increased upon a higher internalized concentration of CNTs, causing cell death at levels above 20 pg per cell. 196
Another factor that influences cytotoxicity is the dose of carbon-based conductive materials delivered to cells. The effects of pristine GO concentrations on the viability of bone mesenchymal stem cells (BMSCs) were investigated, which showed that high concentrations (10 mg ml−1) of GO inhibited the proliferation of BMSCs, while low concentrations (0.1 mg ml−1) enhanced cell proliferation. 197 Similar behavior was observed in biphasic calcium phosphate (BCP) coated with different concentrations of rGO; osteoblast viability was maintained above 80% at concentrations below 62.5 mg ml−1, but significantly decreased at concentrations above 100 mg ml−1. 198 Ultimately, the aim of designing tissue engineering scaffolds containing carbon-based conductive materials is to utilize them for clinical translation. Therefore, understanding the cellular processing mechanisms that one- and two-dimensional carbon-based conductive materials undergo, and the factors that influence their performance, is important. Several in vivo studies have examined the fate of carbon-based conductive materials after administration. 200-202 These studies also provided evidence that the excretion of carbon-based conductive materials from the body is dependent on size and shape. Although CNTs of dimensions over 2000 nm in length and over 30 nm in diameter, 201 and GO sheets of over 5 nm thickness, 202 accumulated in the liver and spleen, they showed very little toxicity in vivo and eventually cleared from the body. 201,202 Since carbon-based conductive materials incorporated into bone tissue engineering scaffolds are primarily intended for implants, they are less likely to enter the bloodstream and translocate to other organs. 203,204 Usui et al. 205 investigated the effects of pure MWCNTs, with an average diameter of 80 nm and a length from 10 to 20 μm, in mouse skull and tibial defects to assess their compatibility and influence on bone healing. 205 The results showed that MWCNTs caused a reduced local inflammatory reaction, possessed high bone tissue compatibility and were able to integrate into new bone tissue formation. 205 Although in vitro and in vivo research has shown that developing bone scaffolds with CNTs or graphene-based materials positively influences cell proliferation, mineralization and bone regeneration with minimal toxicological effects and inflammation, 96,106,203,206 further investigations are still required to better understand their impact on the human body and whether they can be degraded, or migrate and be excreted through the kidneys and bile ducts, before their future use in clinical translation for bone tissue engineering solutions.
Conclusions

This review highlights the advantages of incorporating carbon-based conductive materials into a tissue engineering bone scaffold to create a more suitable alternative to current treatments. Incorporating a carbon-based conductive material directly into an implantable bone scaffold increases protein adsorption, allowing interaction with cell-binding proteins so that osteogenic cells can attach and remodel the bone; it also facilitates the delivery of electrical stimulation to accelerate cell growth and osteogenic maturation, while conferring additional mechanical strength. Although there are still concerns associated with the cellular uptake and degradation of carbon-based conductive materials in the human body, most in vitro and in vivo studies suggest that the benefits of using an electrically conductive carbon-based component in a bone scaffold may outweigh the disadvantages. However, further research on the toxicological effects, material migration and excretion of carbon-based conductive materials in bone cells and bone defects within the human body is required. The development of an electrically conductive bone scaffold is proposed as a promising approach for bone tissue engineering: a biomaterial capable of supporting cellular bioactivity, withstanding load, and enhancing bone formation and maturation through the application of electrical stimulation, thereby overcoming the limitations of the current treatments of bone defects and extending their applications toward clinical translation.

Fig. 2 Generation of electrical potentials through mechanotransduction in bone. Endogenous electrical potentials are generated in bone through the application of mechanical strain, during which interstitial fluid flows through the canaliculi from areas of compression, generating electronegative potentials, to areas of tension, producing electropositive potentials. Bone formation is induced upon compression. Adapted from Duncan and Turner 30 (used with permission).
Fig. 5 Incorporation of carbon-based conductive materials into bone tissue engineering scaffolds promotes osteogenesis and in vivo bone formation. (A) Cellular response to different electrical stimulation techniques promotes osteogenesis through the activation of the calcium/calmodulin pathway. 134 (B) Immunofluorescence staining images of MC3T3-E1 preosteoblasts show a cellular response upon the application of electrical stimulation after 7 days post-seeding. Cells were largely elongated and stretched in cell shape. 144 (C) Cell proliferation under electrical stimulation was evaluated using hydrogels at 1, 4, 7, and 14 days post-seeding. On days 1 and 4, CNTpega-gels and BP-CNTpega-gels showed significantly higher cell numbers than BP-gels and oligo(poly(ethylene glycol) fumarate) (OPF) as the control. 144 BP-gels, CNTpega-gels and BP-CNTpega-gels showed significantly higher cell numbers than OPF on day 7. However, BP-CNTpega-gels had the highest cell density. 144 (D) Intracellular ALP activity was assessed in cells grown on hydrogels with or without electrical stimulation after 14 days of culture. Cells grown on BP-CNTpega-gels possessed the highest ALP activity. 144 After treatment by electrical stimulation, ALP activities increased for all hydrogels; however, the highest increase in ALP activity was shown in cells grown on BP-CNTpega-gels. 144 (E) Determination of the osteocalcin (OCN) content in cell culture media after 21 days of culture with and without electrical stimulation. OCN was significantly higher in cells grown on BP-CNTpega-gels in the presence of exogenous stimulation, indicating potential mineralization enhancement. 144 (F) Image of the bone defects in a calvarial animal model as well as the implanted PCL/MWCNT scaffold. Subsequent bone tissue formation with and without the scaffold was obtained on day 60 post-implantation. 99 (G) Cross-sections of bone tissue regeneration at the bone defects for the untreated, PCL and PCL/MWCNT (0.75 wt% and 3 wt%) groups after 60 days and 120 days post-operation. 99 PCL/MWCNT 3 wt% subjected to electrical stimulation formed the most connective and bone tissue on days 60 and 120 99 (used with permission).

Table 2 Comparison between the mechanical properties of carbon nanotubes, pristine graphene monolayer, and cortical and cancellous bone.
2022-05-15T15:14:12.718Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "f1299ca60ea361155fbb2518dba00a453eb0d5d9", "oa_license": "CCBYNC", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2022/ma/d2ma00001f", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "fb3f93af006ecdb8e17ad0775747953aadf42fbd", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [] }
252287964
pes2o/s2orc
v3-fos-license
Metabolic targets in cardiac aging and rejuvenation

Cardiac aging is accompanied by progressive loss of cellular function, leading to impaired heart function and heart failure. There is an urgent need for efficient strategies to combat this age-related cardiac dysfunction. A growing body of evidence suggests that age-related cardiac diseases are tightly linked to metabolic imbalance. This review summarizes recent findings concerning metabolic changes during cardiac aging and highlights the therapeutic approaches that target metabolic pathways in cardiac aging.

INTRODUCTION

Due to medical progress and lifestyle changes, life expectancy has been significantly prolonged worldwide. However, multiple diseases, such as cardiovascular diseases (CVD) and metabolic disorders, tend to occur at older ages. For example, the incidence of CVDs, including hypertension, coronary heart disease (CHD), and heart failure (HF), is two-fold higher in the population over 80 years old compared with those who are 40 years of age [1]. The incidence of myocardial infarction (MI) increases sevenfold in the elderly population aged 70 years compared to those aged 40 years [2]. Aging is well recognized as one of the critical risk factors for heart disease. Nearly two-thirds of those suffering from cardiovascular disease are elderly patients [3]. As the aging population worldwide is growing at a remarkable rate, anti-aging strategies to improve cardiovascular health and lifespan are urgently required. Therefore, understanding the mechanisms of cardiac aging is vital for the development of therapies for CVDs in the elderly population. In the aging process, cardiac function degenerates gradually and may eventually progress to HF. Significant structural alteration of the left ventricle and an increase in fibrosis are observed in aged hearts. Moreover, diastolic dysfunction and systolic dysfunction are prevalent in aged hearts. Another consequence is the decline of cardiac reserve in aged hearts, which contributes to HF with preserved ejection fraction (HFpEF), the most common type of HF in the aged population [4,5]. Although the physical changes of the aging heart are well characterized, the intrinsic features and pathways driving the age-associated decline of heart function are not fully understood. Intrinsic features, such as mitochondrial dysfunction, inflammation, and reactive oxygen species (ROS), are considered significant drivers of cardiac aging [Figure 1]. In the aging process, there is a metabolic decline and disruption of nutrient uptake by body tissues. Almost all the hallmarks of aging are affected by cellular metabolic disorders. Mitochondrial metabolism and metabolic pathways have been shown to play important roles in cardiac aging. The aged heart exhibits impaired metabolic flexibility, a reduced ability to oxidize fatty acids, and an enhanced dependence on glucose metabolism [6]. Therefore, deciphering the molecular mechanisms underlying cardiac metabolic dysfunction could reveal potential interventional targets to attenuate cardiac degeneration caused by aging. This review focuses on the current understanding of metabolic changes and their effects on myocardial aging and discusses the metabolic signaling pathways and metabolites involved in the myocardial aging process, which may provide a roadmap for cardiac rejuvenation and novel therapies for preventing aging-related heart diseases.
METABOLIC CHANGES IN CARDIAC AGING

During aging, cellular homeostasis and body function are progressively dysregulated, which can be characterized by several cellular and molecular hallmarks of aging [7]. These hallmarks include genomic instability, telomere attrition, epigenetic alteration, loss of proteostasis, deregulated nutrient sensing, mitochondrial dysfunction, cellular senescence, stem cell exhaustion, and altered intercellular communication [7]. Metabolic pathways and regulators have been shown to affect all the hallmarks of aging in the heart [Figure 1].

Mitochondrial dysfunction in cardiac aging

Mitochondria play a central role in nutrient utilization and energy production. The energy required by the heart is primarily derived from fatty acid oxidation and subsequent ATP production within the mitochondria. The myocardium is highly susceptible to mitochondrial dysfunction due to its heavy dependence on mitochondrial oxidative metabolism [8]. Mitochondrial dysfunction can be induced by several pathways related to senescence, such as DNA damage and telomere attrition [9,10]. Mitochondria in aged cardiomyocytes often present an abnormal structure and an increased ROS level. This increased oxidative stress causes a gradual accumulation of mitochondrial damage and electron transport chain (ETC) dysfunction. These changes are associated with diastolic cardiac dysfunction and left ventricular hypertrophy [11]. Therefore, changes in mitochondrial function are considered major contributing factors to cardiac senescence [7]. Mitochondria are the major source and target of ROS in cardiomyocytes [12]. Enlarged mitochondria and higher ROS production have been observed in cardiac aging [13]. In particular, the ETC contributes to ROS generation. Oxidative stress induced by high levels of ROS could cause DNA damage and induce mitochondria in the aged heart to undergo permeability transition, accelerate cytochrome C release, and subsequently initiate programmed cell death [14] [Figure 2]. Moreover, the increased oxidative stress on mitochondria may disrupt cellular homeostasis and create a proinflammatory environment that accelerates aging in mice [15]. Mechanistically, the damaged cardiomyocytes exhibit the senescence-associated secretory phenotype (SASP). SASP factors released by senescent cardiomyocytes include CCN family member 1 (CCN1), interleukins (IL1α, IL1β, and IL6), tumor necrosis factor-alpha (TNFα), etc. [16]. Enhanced SASP further promotes the formation of a proinflammatory microenvironment and cardiac aging [Figure 1]. Moreover, increased ROS could trigger reduced expression and pathological redistribution of connexin-43 (Cx43) [17,18] [Figure 1]. Cx43 is the main component of the gap junction in the heart and mediates cellular communication and conduction. The altered distribution of Cx43 may lead to lethal cardiac arrhythmias [19]. The increased ROS level could also cause telomere attrition and inhibit telomerase activity [Figure 1]. Interestingly, the administration of the powerful antioxidant epigallocatechin gallate (EGCG) could reverse the decreased telomere length observed in heart/muscle-specific manganese superoxide dismutase-deficient mice [20]. Aging also impairs mitochondrial oxidative phosphorylation (OXPHOS), the main energy source for cardiac tissue. With age, there is a decline in the activity of complexes III and IV, contributing to the decrease in respiration [6].
Moreover, the uptake of fatty acids is increased, while fatty acid oxidation (FAO) is decreased in aged hearts. The heart generates approximately 70% of its energy from FAO and 30% from carbohydrate metabolism under physiological conditions. β-oxidation of fatty acids primarily occurs in the matrix of mitochondria and produces nicotinamide adenine dinucleotide, reduced form (NADH). Glycolysis produces pyruvate, a metabolic intermediate, in the cytoplasm, which then enters the mitochondria [21]. Pyruvate is subsequently metabolized by pyruvate dehydrogenase (PDH) to generate NADH and acetyl-CoA in the mitochondrial matrix. In the aging heart, there is an increased capacity for glucose oxidation by mitochondria, mediated by enhanced PDH complex activity [22]. Therefore, the fuel preference between fatty acids and glucose shifts in the aging heart, increasing glucose oxidation at the expense of FAO. During aging and disease, the metabolic alteration in the myocardium is closely associated with impaired cardiac function [23,24]. In failing hearts, the myocardial energy substrate switches from fatty acids to glucose for ATP production [25,26]. Together, mitochondrial dysfunction and metabolic remodeling contribute to the senescence of cardiomyocytes and cardiac aging.

NAD+ metabolism in cardiac aging

Nicotinamide adenine dinucleotide, oxidized form (NAD+), is an essential metabolite in cardiac energy and reduction-oxidation (redox) homeostasis. The heart has a high level of NAD+, and precise control of this metabolite is critical for the cardiac bioenergetic process. In mammals, NAD+ is synthesized by two different routes: the deamidated and amidated routes. The deamidated route uses the amino acid tryptophan (Trp) to synthesize NAD+ de novo [Figure 3]. In the deamidated route, nicotinic acid mononucleotide (NaMN) is converted to nicotinic acid dinucleotide (NaAD) by nicotinamide mononucleotide adenyltransferase (NMNAT), and NaAD is then converted to NAD+ by NAD synthase (NADS) [Figure 3]. In the amidated route, nicotinamide riboside kinase (NRK) converts nicotinamide riboside (NR) to nicotinamide mononucleotide (NMN), while nicotinamide phosphoribosyltransferase (NAMPT) activity converts nicotinamide (NAM) to NMN [Figure 3]. The heart lacks the enzymes necessary for the de novo biosynthesis of NAD+. Instead, cardiac cells salvage NAD+ from NAM and NR through the amidated route (salvage pathway) [27] [Figure 3]. Nearly all cardiac NAD+ is generated through the salvage pathway [27]. Therefore, the salvage pathway is dominant in the heart for providing the NAD+ required to meet the high metabolic demands. NAD+ cannot cross the plasma membrane by passive transport because of its size and positive charge. Therefore, cardiac cells import NAD+ precursors for NAD+ synthesis. Among NAD+ precursors, NAM is the smallest and can cross the plasma membrane by passive transport [28] [Figure 4]. Ecto-5′-nucleotidase (CD73) is responsible for the hydrolysis of extracellular NAD+ to NMN and AMP, and then of NMN to NR. NR is transported into cells through solute carrier family 29 members 1/2/4 (SLC29A1/2/4) [29]. Recent studies suggest that the cation/chloride cotransporter solute carrier family 12 member 8 (SLC12A8) is a specific NMN transporter [30] [Figure 4]. Moreover, SLC25A51 was recently discovered as a mammalian mitochondrial NAD+ transporter [31].
Further studies are required to clarify the transport mechanisms of NAD+ and its precursors in the cardiovascular system. Intracellular NAD+ concentrations decline with age in multiple organs, including the heart [32]. The decline in NAD+ concentration might result from a decrease in NAD+ biosynthesis and increased NAD+ degradation. Indeed, age-related downregulation of NAMPT has been observed in mice and humans [33], which may affect systemic NAD+ levels. For NAD+ degradation, growing evidence suggests that the NAD+ glycohydrolase CD38 contributes to age-related NAD+ decline in mammals [34,35]. CD38 hydrolyzes NAD+ to NAM and ADP-ribose (ADPR), or nicotinamide mononucleotide (NMN) to NAM. Interestingly, a CD38 inhibitor reverses age-related NAD+ degradation and improves cardiac function in aged mice [36]. CD38 also exhibits an activity that degrades circulating NMN in vivo [34]. Therefore, coadministration of NAD+ precursors and CD38 antagonists might be more efficient than NAD+ precursors alone for cardiac anti-aging therapy. CD38 is predominantly expressed in immune cells [37], and the proinflammatory cytokines secreted by senescent cells have been shown to elevate CD38 levels and promote the age-associated decline of NAD+ and NMN [35,38]. In addition to CD38, poly(ADP-ribose) polymerases (PARPs) consume NAD+ to repair age-related DNA damage in aging tissues [39]. Cardiomyocytes are exposed to accumulating metabolic and oxidative damage, which eventually causes DNA damage and PARP activation, thereby reducing the NAD+ concentration in the aging heart [40]. NAD+ levels have important biological functions in aging. During aging, the declined cellular NAD+ level can affect DNA repair, epigenetic regulation, autophagy, and redox balance [41] [Figure 1]. Because NAD+ is a cofactor for various enzymes, loss of NAD+ impacts many cellular processes. For example, NAD+ is required for the activity of epigenetic regulators such as the histone deacetylase SIRT1, and a decline in its level causes changes in histone acetylation, which subsequently influences chromatin organization and gene expression [41] [Figure 1]. NAD+ is also required for DNA repair via PARPs during aging, and the decline of NAD+ could cause DNA damage accumulation [41]. Autophagy is regulated by NAD+ levels via sirtuins (mostly SIRT1). The decline of NAD+ levels reduces overall autophagy [41] [Figure 1]. Moreover, NAD+ is an important coenzyme in redox reactions. The NAD+/NADH redox balance is required for metabolic homeostasis. Recent evidence suggests that the redox-cycling quinone β-lapachone, an exogenous co-substrate of NAD(P)H:quinone acceptor oxidoreductase 1 (NQO1), could regenerate NAD+ from NADH [42]. However, β-lapachone administration induces an imbalance of the redox cycle and oxidative stress in some solid tumors and should be administered with caution [43].

Core metabolic regulators in cardiac aging

Sirtuins were initially identified as deacetylases that remove acetyl-lysine modifications [44]. However, recent studies suggest that they can also remove acyl-lysine modifications such as malonyl-lysine [45], succinyl-lysine [45], and glutaryl-lysine [46]. In mammals, the seven sirtuins (SIRT1-7) have different subcellular locations. SIRT1, SIRT6, and SIRT7 are in the nucleus, while SIRT2 resides primarily in the cytoplasm. SIRT3, SIRT4, and SIRT5 are localized in mitochondria [Figure 4]. SIRT3 is a lysine deacetylase involved in lipid metabolism and oxidative stress [47].
SIRT4 influences amino acid metabolism and the tricarboxylic acid cycle [48]. SIRT5 is a lysine demalonylase, desuccinylase [45], and deglutarylase [46], and it is involved in several metabolic pathways. NAD+ is a major activator of sirtuins; the dependence of sirtuins on NAD+ links sirtuin function to energy metabolism [49]. The effects of SIRT1 on aging and lifespan have been well recognized [50]. Growing evidence links NAD+ supplementation to SIRT1 activation and shows that SIRT1 protects against cardiac aging [51]. The protective effects of SIRT1 in the heart include inhibition of cardiomyocyte apoptosis, reduced inflammation and oxidative stress, and maintenance of energy metabolism [52]. SIRT1 inhibits nuclear factor-kappaB (NF-κB) signaling by deacetylating the p65 subunit of the NF-κB complex, thus repressing NF-κB-induced inflammatory responses in aging [53]. SIRT1 stimulates oxidative energy production via the simultaneous activation of AMP-activated protein kinase (AMPK) [54], peroxisome proliferator-activated receptor-alpha (PPARα) [55], and peroxisome proliferator-activated receptor-gamma co-activator-1 alpha (PGC-1α) [56]. The inhibition of SIRT1 disrupts oxidative energy metabolism in aging-related diseases. Aging also induces a pathological phenotype in the hearts of SIRT5-knockout mice [57]. The shortening fraction and ejection fraction of aged SIRT5-knockout mice were significantly decreased compared to the levels of similarly aged wild-type control mice. Interestingly, succinylation and subsequent inhibition of the mitochondrial trifunctional protein α-subunit contribute to this phenotype in the hearts of aging SIRT5-knockout mice. The mammalian target of rapamycin (mTOR) is an evolutionarily conserved and atypical serine/threonine kinase, and the mTOR signaling pathway plays an important role in regulating cell metabolism. mTOR constitutes the catalytic subunit of two distinct complexes known as mTOR complex 1 (mTORC1) and mTORC2 [58]. Since mTORC1 activity is aberrantly elevated in some aged cells, this complex has been the focus of investigation over the last decades [59]. Inhibition of the mTOR pathway by rapamycin treatment, genetic inactivation of mTORC1, or calorie restriction has been shown to extend lifespan [60]. Calorie restriction reduces nutrient intake and shifts mTORC1 signaling in a catabolic direction [60]. Indeed, calorie restriction failed to confer additional longevity benefits in the context of mTORC1 inhibition, suggesting that calorie restriction counteracts aging through the mTORC1 pathway [61]. Mechanistically, mTOR integrates energy and nutrient availability to regulate the synthesis of cellular components [60]. Under amino acid replete conditions, Rag-GTPases serve as nutrient-sensing machinery to stimulate mTORC1 kinase activity [62]. In contrast, calorie restriction can deplete cellular stores of ATP and trigger the AMPK complex, which inhibits mTORC1 [63] [Figure 1]. mTORC1 also suppresses autophagy to prevent the premature breakdown of newly synthesized cellular components [Figure 1]. This inhibition of autophagy allows damaged proteins to accumulate in the cell and accelerates the aging process [Figure 1]. Indeed, mTOR inhibition by nutrient restriction or rapamycin treatment could restore the declined autophagic capacity in aging hearts [64]. However, rapamycin also has the disadvantage of side effects, such as anemia and acute nephrotoxicity [65], and needs to be used cautiously.
The functions of metabolic regulators within non-myocytes in cardiac aging

Myocardial tissues consist of cardiomyocytes and non-myocytes, including endothelial cells, fibroblasts, and immune cells [16]. Metabolic changes under pathological conditions could affect the communication between cardiomyocytes and non-myocytes. For example, metabolic dysfunction in cardiomyocytes could induce the activation of fibroblasts [66]. Moreover, SIRT2 overexpression in cardiomyocytes activated the AMPK pathway and reduced aging-associated fibrosis [66]. Metabolic regulators could modulate the function of fibroblasts in cardiac aging. For example, adiponectin activates AMPK signaling and induces collagen remodeling in cardiac fibroblasts [67]. AMPK activation also increases the content of fibroblasts in the infarcted area [68]. Metabolic factors are also critical for the activation of immune cells. For example, AMPK promotes macrophage fatty acid oxidative metabolism and induces inflammatory macrophage activation in cardiac aging [69]. SIRT1 also regulates the function of macrophages and participates in cardiac aging [70]. Moreover, fatty acid metabolism could modulate T cell activity in cardiac aging [71]. During aging, the metabolism of endothelial cells also changes [72], and endothelial metabolism plays a critical role in cardiac aging. For example, Liver kinase B1 (LKB1) is an important regulator of energy homeostasis through activation of the AMPK pathway [73]. Endothelial cell-specific LKB1 deletion causes endothelial dysfunction and induces cardiomyocyte hypertrophy [74]. Endothelial progenitor cells (EPCs) are circulating progenitor cell populations with angiogenic potential at sites of ischemia, hypoxia, or injury [75]. During aging, the function of EPCs declines [76]. Interestingly, the restoration of either intracellular NAD+ levels or SIRT1 expression could improve the function of aged EPCs [76,77] [Figure 1]. Therefore, boosting NAD+ levels in EPCs may serve as a possible way to stimulate angiogenesis in aged hearts.

The impact of sex differences on cardiac metabolism in the context of aging

It is well known that CVD mortality rates are lower in women than men [78]. Although CVD mortality rates increase with age in both genders, female HF patients still have significantly better survival rates than male patients in the aged population [79]. Growing evidence suggests that regulatory pathways in aged female and male hearts are different [80,81]. Estrogen is an obvious regulator of this gender difference. The observation that cardiovascular dysfunction increases when estrogen production ceases further supports this notion [82]. However, the gender difference in cardiac aging is complex and cannot be simply attributed to estrogen alone [83]. Interestingly, the cardioprotective effect of estrogen may be mediated by its regulation of metabolic regulators. For example, the SIRT1 and SIRT3 expression levels are lower in elderly female hearts (50-68 years old) than in young female hearts (17-40 years old) [81]. In contrast, no age-associated changes in SIRT1 and SIRT3 expression are observed in male hearts [81]. In addition, the anti-oxidative enzyme superoxide dismutase 2 (SOD2) was downregulated in aged female hearts, whereas it was upregulated in aged male hearts [81].
Mechanistically, estrogen could upregulate SOD2 expression [84], and the age-associated changes in the estrogen level in female blood (down) and male blood (up) may be involved in the gender difference in the change of SOD2 expression [81]. The cardioprotective effect of estrogen could also function through metabolic pathways such as the AMPK pathway [83]. Estrogen activates AMPK by phosphorylation in the myocardium [85], which subsequently promotes glucose transport and free fatty acid metabolism. Therefore, higher levels of circulating estrogens in females may contribute to a stronger capacity for AMPK activation compared with their male counterparts [83]. This may provide a more robust protective effect when the energy requirements of female patients with age-associated HF increase.

Metabolic protection against cardiac aging

The main aim of cardiac anti-aging therapy is to find an effective medicine to reverse the features of aged hearts. Several molecules that prevent known cardiac aging features via modulating metabolic regulators have been described [Table 1]. For example, alginate oligosaccharide (AOS) has been shown to be an effective agent in alleviating cardiac aging [86]. This agent could improve mitochondrial biogenesis and maintain mitochondrial integrity. In addition, mitochondrially targeted vitamin E (MitoVitE) and mitochondrially targeted coenzyme Q (MitoQ) can target mitochondrial dysfunction [87,88]. Cellular senescence, another important feature in cardiac aging, can be reversed by SIRT6 [89]. This effect was achieved by deacetylation of the key metabolic regulator PCSK9 (proprotein convertase subtilisin/kexin type 9), which modulates the plasma LDL cholesterol level. Although resting heart function is not significantly altered, diastolic and systolic dysfunction exists in aged hearts [90]. A recent study showed that treatment with the mitochondrially targeted peptide SS-31 (elamipretide) can reverse diastolic dysfunction in a rodent model [91]. SS-31 has been shown to reduce mitochondrial ROS and protein oxidation in aged hearts by targeting cardiolipin (CL), indicating alleviation of mitochondrial oxidative stress as a potential mechanism [91]. Similarly, overexpression of the antioxidant enzyme catalase can improve systolic and diastolic function in the hearts of old mice, and this phenotype is partially mediated by alleviation of mitochondrial oxidative stress [91,92]. β-hydroxybutyrate (βOHB) treatment could attenuate NLRP3 inflammasome formation and antagonize proinflammatory cytokine-triggered mitochondrial dysfunction in aged mice [93]. This protective effect of βOHB is achieved via activation of citrate synthase (CS) and inhibition of fatty acid uptake. Acetylcarnitine treatment could mitigate age-induced metabolic imbalance via improving cardiac OXPHOS levels [94] [Table 1].

Metabolites and dietary supplements for pharmacological interventions

NAD+ repletion can delay several hallmarks of aging and suppress the deterioration of age-related diseases [95]. This suggests a significant potential for the treatment of cardiac diseases in the elderly population with supplementation of NAD+ precursors. Indeed, the dietary intake of NAM could reduce cardiac hypertrophy and diastolic dysfunction in aged mice [96] [Figure 5]. However, challenges still exist. Several preclinical studies have confirmed that both nicotinic acid (NA) and NAM treatment can cause side effects such as painful flushing sensations [97-99].
The effect of NMN treatment was tested on old mouse hearts [100], and supplementing this metabolite could restore mitochondrial and heart function [Table 2]. However, some potential side effects of NMN have also been proposed, especially concomitant with high-dose administration, such as hepatic pressure and cancer growth [101]. Additionally, NR may be a more suitable NAD+ precursor, since it was not found to be associated with flushing or other severe side effects [102]. Oral administration of NR has been shown to increase NAD+ levels in humans [103]. Moreover, NR can prevent the deterioration of cardiac function and adverse remodeling in a mouse model of dilated cardiomyopathy [104]. However, NR is unstable in the blood circulation due to degradation to NAM, thus reducing its availability in the heart after oral supplementation [105]. In addition, the therapeutic value of NR still has certain limitations regarding its production methods, including low yield and the use of expensive or hazardous reagents [106]. In summary, to adapt NR or NMN treatment for therapeutic use against cardiac aging, their oral availability and therapeutic dosage need to be determined. A recent study identified uridine, a pyrimidine nucleoside, as a metabolite that can rejuvenate aged human stem cells and promote the regeneration of various tissues, including the heart [107]. Interestingly, uridine has been shown to have an anti-inflammatory effect via modulating inhibitor of kappa B kinase α/β (IKKα/β) and nuclear factor-kappaB (NF-κB) signaling [108]. This anti-inflammatory effect of uridine may provide a more favorable environment for aging cardiomyocytes. Oleate, an unsaturated fatty acid, could increase the levels of anti-aging metabolites such as NAD+ in vivo [109]. Polyamines such as spermine (SP) are essential for cell growth, and their levels decline with age. Interestingly, SP treatment could reverse and inhibit age-related myocardial morphology alterations and apoptosis [110]. SP treatment upregulates the expression of pyruvate kinase M1/2 (PKM), enolase 3 (ENO3), and phosphoglycerate mutase 2 (PGAM2), thereby enhancing cardiac lipid metabolism. SP treatment also downregulates glutathione S-transferase alpha 3 (GSTA3) and dehydroascorbic acid production, inhibiting glutathione metabolism and protecting against cardiac aging [110] [Figure 5]. The polyamine spermidine (SPD) also has cardioprotective effects [110,111]. SPD treatment could reduce cardiac hypertrophy and preserve diastolic function in old mice [111]. This protective effect was achieved by improving the global arginine bioavailability ratio (GABR), which favors the production of nitric oxide (NO) and subsequently decreases systemic blood pressure [111]. SPD treatment can also increase titin phosphorylation and improve the mechanical properties of cardiomyocytes [111]. In summary, these cardioprotective metabolites could serve as potential clinical therapeutics that target cardiac aging.

Table 1 (excerpt). Inflammation: β-hydroxybutyrate, acting through activation of citrate synthase (CS) and inhibition of fatty acid uptake [93]. Oxidative stress: the antioxidant enzyme catalase, which alleviates mitochondrial oxidative stress [92]. Metabolic imbalance: acetylcarnitine, which improves aging-induced decreases in OXPHOS and complex III and complex IV activity [6].

Dietary supplements offer a convenient resource for restoring cardiac youthfulness in the aging population [Table 2]. Among them, several naturally occurring molecules targeting longevity pathways could improve mitochondrial physiology.
For example, resveratrol has been proven to enhance mitochondrial biogenesis in aging mice [112]. Mechanistically, resveratrol activates the cyclic adenosine monophosphate (cAMP)/exchange protein directly activated by cAMP 1 (Epac1)/AMPK pathway, which subsequently increases the NAD+ level and the activity of SIRT1 [113]. SRT1720, another compound activating SIRT1, has health and lifespan benefits in adult mice [114]. Thymoquinone and curcumin effectively suppressed the aging-associated oxidative alterations in mouse hearts [115]. Curcumin could also improve cardiac angiogenesis and promote heart performance in senescent rats [116]. Curcumin can activate AMPK signaling, thereby promoting autophagy and alleviating cardiac apoptosis [117]. Interestingly, a combination of different anti-aging agents may achieve better cardiac rejuvenation. Two different mitochondrially targeted drugs, SS-31 and NMN, were tested on old mouse hearts [100]. Combining them resulted in a synergistic effect on old hearts that best recapitulated the young state. Moreover, a synergistic effect of leucine-resveratrol combinations on glucose homeostasis and insulin sensitivity was observed in patients with prediabetes [118,119]. These cardiac anti-aging strategies are gaining popularity, and optimizing the drug combination or targeting will undoubtedly facilitate the development of anti-aging therapies. Moreover, further mechanistic studies are needed for drug safety and efficacy assessment of cardiac anti-aging strategies.

CONCLUSIONS
CVDs associated with aging are the leading global healthcare burden in the 21st century. Research focusing on metabolic dysfunction in the aging process might identify novel specific agents. Interestingly, the driving factors of cardiac aging influence each other; thus, strategies targeting multiple driving factors may have a synergistic effect. This review examines metabolic components involved in cardiac aging and how they influence the main aging features. The modulation of these components and correlative pathways could improve human cardiac health and prevent major age-related CVDs. Maintaining healthy mitochondria and metabolic regulation is essential to long-term cardiac health. This review also summarizes different approaches to reversing metabolic changes in cardiac aging. It is challenging to focus only on specific cardiac pathologies because of the multi-organ involvement in age-associated CVDs. Therefore, new agents targeting communication between multiple organs could pave the way to understanding the complex nature of CVDs in the aged population. In conclusion, a thorough understanding of the role of metabolic regulation in human cardiac aging will be needed to combat age-related CVDs.

Authors' contributions
Participated in research design: Xiao J, Vulugundam G
Performed data analysis: Zhang X, Hu M, Lu Y
Wrote or contributed to the writing of the manuscript: Liu C, Gokulnath P

Availability of data and materials
Not applicable.
2022-09-16T15:24:00.650Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "09e89ab0d97fae79aa8c47642db5638d9135051c", "oa_license": "CCBY", "oa_url": "https://cardiovascularaging.com/article/download/5155", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "1e7e481b07dc5308cab4998f5e476e18dc378a6d", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [] }
14630032
pes2o/s2orc
v3-fos-license
Mechanisms regulating resistance to inhibitors of topoisomerase II Inhibitors of topoisomerase II (topo II) are clinically effective in the management of hematological malignancies and solid tumors. The efficacy of anti-tumor drugs targeting topo II is often limited by resistance and studies with in vitro cell culture models have provided several insights on potential mechanisms. Multidrug transporters that are involved in the efflux and consequently reduced cytotoxicity of diverse anti-tumor agents suggest that they play an important role in resistance to clinically active drugs. However, in clinical trials, modulating the multidrug-resistant phenotype with agents that inhibit the efflux pump has not had an impact. Since reduced drug accumulation per se is insufficient to explain tumor cell resistance to topo II inhibitors several studies have focused on characterizing mechanisms that impact on DNA damage mediated by drugs that target the enzyme. Mammalian topo IIα and topo IIβ isozymes exhibit similar catalytic, but different biologic, activities. Whereas topo IIα is associated with cell division, topo IIβ is involved in differentiation. In addition to site specific mutations that can affect drug-induced topo II-mediated DNA damage, post-translation modification of topo II primarily by phosphorylation can potentially affect enzyme-mediated DNA damage and the downstream cytotoxic response of drugs targeting topo II. Signaling pathways that can affect phosphorylation and changes in intracellular calcium levels/calcium dependent signaling that can regulate site-specific phosphorylation of topoisomerase have an impact on downstream cytotoxic effects of topo II inhibitors. Overall, tumor cell resistance to inhibitors of topo II is a complex process that is orchestrated not only by cellular pharmacokinetics but more importantly by enzymatic alterations that govern the intrinsic drug sensitivity. INTRODUCTION The emergence of drug-resistant tumor cells continues to be a major problem confronting advances in cancer chemotherapy. Resistance to the various classes of anti-tumor agents (Curt et al., 1984) has been suggested to involve reduced drug accumulation and/or retention, conformational changes and/or over production of the target enzyme, and reduced activation and/or increased catabolism of drug. Doxorubicin (DOX) is a clinically effective anti-tumor agent against a spectrum of neoplastic diseases (Carter, 1975;Myers and Chabner, 1990). Although DOX is an inhibitor of topoisomerase II (topo II), multifactorial mechanisms are involved in the cytotoxic response (Siegfried et al., 1985;Louie et al., 1986;Bhushan et al., 1989;Doroshow et al., 1990). Pioneering studies of Kessel et al. (1968) and Biedler and Riehm (1980) established that reduced drug accumulation in tumor cells is a major mechanism involved in resistance to clinically important anti-tumor agents, e.g., anthracyclines and vinca-alkaloids. The overexpression of P-glycoprotein (PGP) in resistant cells, which mediates energy dependent drug efflux across a concentration gradient and is responsible for reduced drug accumulation, was originally described by Ling and Thompson (1974). The cross-resistance to anti-tumor drugs of diverse structure and/or mechanism of action (Endicott and Ling, 1989;Chin et al., 1993) mediated by PGP is now termed multidrug resistance (MDR). 
MDR in the absence of overexpression of PGP has been demonstrated to be due to the MDR related protein (MRP), which like PGP also belongs to the ATP-binding cassette (ABC) superfamily of membrane proteins (Center, 1993;Cole et al., 1994). MODULATION OF DRUG RESISTANCE BY CHEMOSENSITIZERS Reduced drug accumulation to anti-tumor drugs of diverse structure and mechanism of action has led to the identification of agents that can potentially sensitize tumor cells with the MDR phenotype. The original reports on modulation of MDR by calcium blockers, e.g., verapamil (Slater et al., 1982;Tsuruo et al., 1982) or calmodulin inhibitors, e.g., trifluoperazine (TFP; Tsuruo et al., 1982;Ganapathi and Grabowski, 1983) has been confirmed subsequently by other laboratories in a variety of model systems (Ford and Hait, 1990). Excellent reviews on compounds modulating the MDR phenotype has been published, and sensitization of drug-resistant pre-clinical tumor models in vivo has been observed (Tsuruo et al., 1982;Ford and Hait, 1990). The mechanism of action of the "chemosensitizers" in MDR cells is suggested to involve binding to PGP which results in increased drug accumulation and consequently cytotoxicity. While these chemosensitizers do indeed increase drug accumulation, concentrations of the anti-tumor agent required in resistant cells are significantly higher than those required by the wildtype (sensitive) cells to achieve equivalent cell kill. Based on the promise from pre-clinical studies, clinical trials have evaluated these agents to sensitize drug refractory tumors (Ganapathi et al., 1993a;Lum et al., 1993) but results with a potent inhibitor of PGP indicate that modulation of drug resistance or enhanced clinical activity is not realized (Carlson et al., 2006;Kolitz et al., 2010). Most studies on modulation of MDR have relied on tumor models with high levels of resistance making it difficult to ascertain whether the resistance to anthracyclines and vinca alkaloids was exclusively due to overexpression of PGP. In addition, the observation that resistance to lipophilic anthracyclines was observed without apparent differences in drug accumulation between sensitive and resistant cells suggested a role for alternate mechanisms of resistance (Ganapathi et al., 1984(Ganapathi et al., , 1989. To assess the central role for PGP and probe mechanisms of resistance to DOX we developed progressively DOX-resistant (5-to 40-fold) cell lines of L1210 mouse leukemia and B16-BL6 mouse melanoma (Ganapathi et al., 1987;. Studies with these progressively resistant tumor models revealed that while the IC50 for DOX alone was higher with increasing resistance (0.25-5 μM), significantly lower concentrations of DOX (0.08-0.7 μM) were required in the presence of a non-cytotoxic concentration (5 μM) of the calmodulin inhibitor TFP to achieve equivalent cell kill . In the progressively DOX-resistant L1210 cells expression of the MDR phenotype was observed only at >10-fold but not at fivefold resistance to DOX and role of PGP in these progressively DOX-resistant cells revealed that: (a) effects of PGP on drug accumulation were correlative with vincristine (VCR) rather than DOX resistance (Ganapathi et al., 1991b(Ganapathi et al., , 1993a; and (b) the modulation by TFP of VCR but not DOX cytotoxicity was due to effects on drug accumulation (Ganapathi et al., 1991a,b). 
Based on the lack of correlation between cellular DOX levels and cytotoxic response, using the progressively DOX-resistant L1210 model system, nuclear levels of DOX were determined following treatment with the IC50 of DOX in the absence or presence of 5 μM TFP (Ganapathi et al., 1991a). Results revealed that significantly higher nuclear levels of DOX were required in the resistant compared to the parental sensitive cells to achieve equivalent cytotoxicity, suggesting that alterations in topo II, a putative target of DOX may be involved (Ganapathi et al., 1991a). TOPOISOMERASE II AND DRUG RESISTANCE The topoisomerases alter DNA topology for the efficient processing of genetic material (Chen and Liu, 1994;Pommier et al., 1994;Watt and Hickson, 1994;Froelich-Ammon and Osheroff, 1995). The two well characterized topoisomerases, topoisomerase I (topo I) and topo II, which are essential for DNA metabolism are also the targets for the clinically effective anti-tumor agents, e.g., analogs of camptothecin (topotecan, irinotecan), DOX, daunorubicin, etoposide (VP-16), or teniposide (Chen and Liu, 1994;Pommier et al., 1994;Watt and Hickson, 1994;Froelich-Ammon and Osheroff, 1995). Eukaryotic topo I catalyzes DNA relaxation via a transient single stranded DNA break while topo II will produce a transient double stranded break for the passage of double stranded DNA segments (Chen and Liu, 1994;Pommier et al., 1994;Watt and Hickson, 1994;Froelich-Ammon and Osheroff, 1995). Anti-cancer drugs which interact with topoisomerases and produce DNA strand breaks, involves the stabilization of a ternary complex with DNA. The single or double stranded break induced by topo II, involves linkage to the 5 -phosphoryl end of the broken DNA, with the 5 broken end protruding precisely four bases with a double strand break (Chen and Liu, 1994;Pommier et al., 1994;Watt and Hickson, 1994;Froelich-Ammon and Osheroff, 1995). The mechanism of DNA strand breakage induced by topo II inhibitors is based on the stabilization of a cleavable complex which is normally a transient reaction intermediate (Chen and Liu, 1994;Froelich-Ammon and Osheroff, 1995). The cleaved intermediate can be either a single strand or double strand break. Drugs which are topo II inhibitors exert their effects possibly by inhibiting the rejoining step in the breakagerejoining cycle, thus shifting the equilibrium toward a cleavable complex (Chen and Liu, 1994;Froelich-Ammon and Osheroff, 1995). The agents which inhibit topo II and stabilize cleavable complex formation can be intercalative, e.g., DOX, amsacrine (m-AMSA), mitoxantrone, or non-intercalative, e.g., VP-16, teniposide (VM-26), and isoflavone derivative genistein (Chen and Liu, 1994;Pommier et al., 1994;Watt and Hickson, 1994;Froelich-Ammon and Osheroff, 1995). Mammalian topo IIα (170 kDa) and topo IIβ (180 kDa) isozymes exhibit similar catalytic, but different biologic, activities. Whereas topo IIα is associated with cell division, topo IIβ is involved in differentiation (Chung et al., 1989;Drake et al., 1989;Woessner et al., 1989Woessner et al., , 1990. The 170 kDa topo II isoform is encoded on chromosome 17q21-22 while the 180 kDa topo II isoform is encoded on chromosome 3p24 (Chen and Liu, 1994;Watt and Hickson, 1994). 
As a target for anticancer agents, there is more information on the interaction with the 170 kDa topo IIα protein, although a possible role for alterations in the 180 kDa topo IIβ isoform in mitoxantrone-resistant and m-AMSA-resistant HL-60 cells has been reported (Harker et al., 1991;Chen and Liu, 1994;Froelich-Ammon and Osheroff, 1995;Herzog et al., 1998). A number of in vitro studies using purified or recombinant topo II enzyme have addressed determinants of drug interaction with topo II (Fry et al., 1991;Chen and Liu, 1994;Pommier et al., 1994;Watt and Hickson, 1994;Froelich-Ammon and Osheroff, 1995), and in cell systems the focus has been on enzyme levels as the determinant of drug action (Fry et al., 1991;Chen and Liu, 1994;Pommier et al., 1994;Watt and Hickson, 1994;Froelich-Ammon and Osheroff, 1995). The proliferative state of tumor cells is also an important determinant of sensitivity to inhibitors of topo II, and a correlation exists between proliferation, cell cycle stage and cytotoxicity (Nelson et al., 1986;Sullivan et al., 1986;Estey et al., 1987;D'Arpa et al., 1990). Resistance to inhibitors of topo II reported in a number of tumor model systems is also prevalent in clinically refractory tumors (Chen and Liu, 1994;Froelich-Ammon and Osheroff, 1995). Based on the evaluation of tumor models with intrinsic or acquired resistance to the topo II inhibitors, as well as cell lines selected for resistance which express decreased levels of topo II (Chen and Liu, 1994;Froelich-Ammon and Osheroff, 1995) it has been proposed that levels of topo II are an important determinant of drug sensitivity. Although the levels of enzyme are obviously critical, there are several examples wherein drug sensitivity is not correlative with the 170 kDa topo IIα protein level (Ganapathi et al., 1991a(Ganapathi et al., , 1996Chen and Liu, 1994;Pommier et al., 1994;Watt and Hickson, 1994;Froelich-Ammon and Osheroff, 1995). Alternatively, altered sub-cellular distribution of the 170 and 180 kDa isoforms may also be involved with insensitivity to topo II inhibitors (Fernandes et al., 1990;Juenke and Holden, 1993;Mirski et al., 1993;Danks et al., 1994;Feldhoff et al., 1994;Zini et al., 1994). While the identification of a mutant enzyme associated with drug resistance has been reported (Takano et al., 1992;Pommier et al., 1994) an analysis of leukemic cells from patients who have relapsed from etoposide or teniposide therapy revealed that resistance does not have to be associated with mutations in the topo II gene (Danks et al., 1993). Also mutations identified in cultured cell lines were not found in the patient samples (Danks et al., 1993). Point mutations similar to those observed with topo IIα in m-AMSA-resistant cell lines (Hinds et al., 1991) has been reported in patients with small cell lung cancer treated with etoposide (Kubo et al., 1996). In addition to these mutations that have possible relevance to patient tumors refractory to therapy, several other mutations in topo IIα and β (induced or observed in drug-resistant tumor models) have been described that can confer resistance to drugs that target the enzyme (Chikamori et al., 2010). While studies with these mutant forms of topo II are informative, their functional role remains controversial, since they are generally not observed in patients with tumors that are clinically resistant to drugs that target the enzyme. 
ANTHRACYCLINES, TOPOISOMERASE IIα, AND BREAST CANCER The recognition that topo IIα is a putative target of DOX, a clinically active anthracycline in the treatment of breast cancer, has led to several reports correlating anthracycline sensitivity with topo IIα expression. Major focus has been on human epidermal growth factor receptor 2 (HER2) and topo IIα expression based on their localization in chromosome 17 as well as determinants of sensitivity to trastuzumab and anthracyclines, respectively. Indeed several reports have established expression of topo IIα in predicting sensitivity to adjuvant anthracycline therapy (Oakman et al., 2009;Brase et al., 2010;Kawachi et al., 2010;Di Leo et al., 2011;Du et al., 2011;Nikolényi et al., 2011;O'Malley et al., 2011). The evaluation of tissue inhibitor of metalloproteinase (TIMP-1) with HER2 or topo IIα has also suggested that a HT profile (HER2 amplified and/or TIMP-1 negative) or 2T profile (topo IIα aberrant and/or TIMP-1 negative) with substantial reduction in mortality but not relapse free survival events following adjuvant anthracycline containing therapy (Ejlertsen et al., 2010;Hertel et al., 2012). Overall, while topo IIα expression is possibly a determinant of response to anthracycline containing therapy, robust assay methodology for topo IIα and well defined prospective clinical trials will establish the predictive value. PHOSPHORYLATION OF TOPOISOMERASE II The proliferation and cell cycle phase dependent post-translational modification by phosphorylation of the 170 and 180 kDa topo II protein (Heck et al., 1989;Kroll and Rowe, 1991;Saijo et al., 1992;Burden et al., 1993;Burden and Sullivan, 1994;Kimura et al., 1994a,b), is also linked to increased enzyme activity and DNA cleavable complex formation (Heck et al., 1989;Kroll and Rowe, 1991;Saijo et al., 1992;Burden et al., 1993;Burden and Sullivan, 1994;Kimura et al., 1994a,b). Since phosphorylation of topo II during the cell cycle regulates activity of the enzyme (Ackerman et al., 1985;Sahyoun et al., 1986;Saijo et al., 1990;Cardenas et al., 1992Wells et al., 1994), the role of altered topo II phosphorylation in drug resistance has been studied. Takano et al. (1991) reported hyperphosphorylation of topo II in etoposide-resistant cells based on phosphorylation normalized for a 10-fold reduced enzyme level in the etoposide-resistant subline compared to parent cells. Since the phosphorylation of topo II is essential for catalytic events of unknotting and decatenation during cell replication, the observed hyperphosphorylation could represent a compensatory event for the reduced protein level. The topo IIα in these etoposide-resistant cells has a Ser861-Phe mutation, suggesting that these cells which hyperphosphorylate topo IIα, also express mutant serine residue (Kohno et al., 1995). Hypophosphorylation of topo IIα in the teniposide-resistant cells was >twofold compared to the parental cells, with serine being the primary phosphorylated amino acid in the sensitive or resistant cells (Chen and Beck, 1995). Subsequent studies by Ritke et al. (1994aRitke et al. ( , 1995 in etoposide-resistant K562 human leukemia cells have suggested that hypophosphorylation of topo IIα in these cells is due to decreased levels of the protein kinase C isoform ß II . 
Studies in vitro using Drosophila melanogaster topo II have demonstrated that phosphorylation of topo II by casein kinase (CK) II and protein kinase C can decrease drug stabilized DNA cleavable complex and increase DNA religation, suggesting that phosphorylation can confer relative drug resistance (DeVore et al., 1992). While this effect may be due to the simultaneous use of two different kinases, a role for site specific phosphorylation differences was not discussed. Phosphorylation of topo IIα by CKII has also been reported to not affect the DNA relaxing or DNA unknotting activity (Kimura et al., 1996). In contrast to these reports we have demonstrated, following metabolic labeling of cells with [ 32 P]orthophosphoric acid, the hypophosphorylation of 170 kDa topo II in the absence of any decrease in steady state topo II protein levels in three different model systems resistant to topo II inhibitors (Ganapathi et al., 1991a(Ganapathi et al., , 1993b(Ganapathi et al., , 1996. FUNCTIONAL ROLE FOR INTRACELLULAR CALCIUM AND SITE-SPECIFIC PHOSPHORYLATION OF TOPOISOMERASE IIα Potential mechanisms affecting reversibility of the drug-induced DNA cleavable complex (Hsiang and Liu, 1989) have been reported, and in resistant sublines a specific role for cleavable complex instability has been suggested (de Jong et al., 1993;Ritke et al., 1994b). The incubation of Chinese hamster DC3F cells in calcium-free medium or chelation of extracellular calcium with [ethylenebis(oxyethylenenitrilo)]tetraacetic acid (EGTA) has been reported to protect against the cytotoxicity of VP-16 (Bertrand et al., 1991). However, under these same conditions, VP-16-induced DNA single strand break frequency in calcium-depleted cells was reported to be comparable to control cells (Bertrand et al., 1991). The amount of phosphorylated topo IIα in cells is obviously a balance between kinase and phosphatase activity, and our data on hypophosphorylated topo IIα in etoposide-resistant cells may be linked to enhanced phosphatase activity. We have previously reported that okadaic acid an inhibitor of protein phosphatases 1 and 2A does not affect the cytotoxicity of the topo II inhibitor DOX (Kawamura et al., 1996b). The activity of protein phosphatase 2B (calcineurin) is enhanced following phosphorylation by calcium-calmodulin protein kinase II (Hashimoto and Soderling, 1989;Sacks et al., 1995) or suppressed by calmodulin inhibitors (Klee et al., 1988). Thus, enhanced phosphorylation of topo IIα (Kawamura et al., 1996a) in the presence of the inhibitors of calcium-calmodulin regulated processes, e.g., TFP or 1-[N,O-bis(1,5-isoquinolinesulfonyl)-Nmethyl-L-tyrosyl]-4-phenylpiperazine (KN-62) possibly involves inhibition of calcineurin activity, which leads to potentiation of DNA cleavable complex formation and cytotoxicity of topo II inhibitors. Based on the potentiation of DOX cytotoxicity and DNA damage by TFP and other inhibitors of calcium-calmodulin regulated cellular events we sought to determine whether intracellular calcium could be involved in affecting DNA damage induced by drugs that target topo II. Manipulating intracellular "free" calcium was achieved with the chelator (Ganapathi et al., 1996) 1,2-bis(o-aminophenoxy)ethane-N,N,N ,N ,tetraacetic acid tetra(acetoxymethyl) ester (BAPTA-AM). In wildtype cells pre-treatment with BAPTA-AM followed by the topo II inhibitor etoposide (VP-16) led to significant reductions in drug stabilized DNA cleavable complex formation and cytotoxicity (Ganapathi et al., 1996). 
These results on reduced DNA cleavable complex formation following buffering of intracellular calcium, in general support the original observation of Osheroff and Zechiedrich who reported that in experiments with the purified enzyme in vitro, calcium was able to promote high levels of D. melanogaster topo II-mediated DNA cleavage (Osheroff, 1987;Osheroff and Zechiedrich, 1987). Also, pre-treatment of wildtype cells with BAPTA-AM led to hypophosphorylation of topo IIα (Ganapathi et al., 1996). In order to determine whether the hypophosphorylation of topo IIα was site-specific, we carried out 2D mapping with tryptic digests of immunoprecipitated topo IIα from in DOX-resistant or wild-type cells pre-treated with BAPTA-AM (Ganapathi et al., 1996;Chikamori et al., 2003). Interestingly, we found that site specific hypophosphorylation of topo IIα in DOX-resistant or wild-type cells pre-treated with BAPTA-AM was comparable (Ganapathi et al., 1996;Chikamori et al., 2003). Using liquid chromatography-tandem mass spectrometry, we identified the hypophosphorylated site as serine 1106 in topo IIα (Chikamori et al., 2003). To establish the functional role for serine 1106 in topo IIα, mutation of serine 1106 to alanine (S1106A) was carried out and found to abrogate phosphorylation of the phosphopeptides that were found either in the DOX-resistant cells or wild-type cells treated with BAPTA-AM. Using purified wildtype or mutant (S1106A) topo IIα expressed in BJ201 cells, we observed decreased decatenation activity as well as etoposide stabilized DNA cleavable complex formation with the mutant enzyme (Chikamori et al., 2003). A functional role in vivo for serine 1106 in resistance to inhibitors of topo II was also established using the yeast system wherein resistance to the cytotoxic effects of etoposide and m-AMSA was observed (Chikamori et al., 2003). Since serine 1106 is flanked by CKI consensus sequences, and phosphorylation of this site is regulated by calcium, we probed the effect of inhibitors of CKI (Grozav et al., 2009). Treatment with CKI-7 or IC261 that inhibit CKI activity, both hypophosphorylation of serine 1106 and decreased etoposide stabilized DNA cleavable complex formation was observed, suggesting a potential role for CKI phosphorylation of topo IIα (Grozav et al., 2009). In the CKI family, a functional role for calcium regulatable CKIδ and/or CKIε in phosphorylating serine 1106 and affecting drug stabilized topo II DNA cleavable complex formation was established using small interfering RNA (siRNA) that target these isozymes of CKI (Grozav et al., 2009). Although a precise role for site specific hypophosphorylation of topo IIα and resistance to inhibitors of topo II in patient tumors has not been established, in our preliminary studies with early passage cultures of acute myeloid leukemia and non-small cell lung cancer from patients, we have observed a correlation of site specific hypophosphorylation of topo IIα and decreased drug stabilized DNA cleavable complex formation and/or cytotoxicity with inhibitors that target topo II. In addition to phosphorylation, other post-translational modifications of topo II include sumoylation and ubiquitination (Chikamori et al., 2010). Sumoylation of topo II that is induced by inhibitors targeting the enzyme also affects cellular localization (Chikamori et al., 2010). A role for ubiquitination-proteasome pathway in regulating enzyme function has also been reported (Chikamori et al., 2010). 
Interestingly, in human non-small cell lung carcinoma cells, proteasome inhibition with, e.g., MG-132 following treatment with etoposide leads to enhanced apoptosis and decreased arrest of cells in the G 2 +M boundary, without apparent alteration in degradation of topo II (Tabata et al., 2001). In contrast, pre-treatment with the proteasome inhibitor followed by etoposide leads to decreased apoptosis, possibly due effects on apoptotic signaling (Tabata et al., 2001). Neither, pre-or posttreatment with the proteasome inhibitor affected DNA damage induced by etoposide, suggesting that downstream events, e.g., apoptotic response may be another strategy to enhance anti-tumor activity of topo II inhibitors. FUTURE DIRECTIONS In summary, it is apparent that multifactorial mechanisms govern the sensitivity of tumor cells to the DNA damaging and cytotoxic effects of clinically useful inhibitors of topo II. Much progress has been made in identifying agents and developing strategies for enhancing cellular accumulation of topo II inhibitors in tumors with the MDR phenotype. However, differences between "acquired" and "intrinsic" resistance as well as insights on mechanisms that lead to reduced activity of topo II or compromised activation of cell death pathways in tumors from patients resistant to clinically active topo II inhibitors is an underexplored area. Thus, development of targeted drugs that can activate topo II activity and cell death pathways without enhancing treatment-induced toxicity, have considerable potential in combination therapy for clinically improving the anti-tumor efficacy of topo II inhibitors. ACKNOWLEDGMENTS Authors gratefully acknowledge the valuable contributions of the talented post-doctoral fellows and technicians at the Cleveland Clinic Foundation that are summarized in this review. Supported by USPHS Grants RO1 CA35531 and RO1 CA74939.
2016-06-17T21:53:24.339Z
2013-08-01T00:00:00.000
{ "year": 2013, "sha1": "7df8c0387e28774713caad826c8d321bf61e3a2e", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fphar.2013.00089/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7df8c0387e28774713caad826c8d321bf61e3a2e", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
207925963
pes2o/s2orc
v3-fos-license
Effect of Polysaccharides from Bletilla striata on the Healing of Dermal Wounds in Mice Bletilla striata has been largely used in traditional folk medicine in China as a wound healing agent and to treat gastritis and several other health problems. Some studies have shown that plant polysaccharides may have the ability to promote wound healing. The aim of this work was to evaluate the wound healing activity of the polysaccharide extracted from Bletilla striata. Firstly, a Bletilla striata polysaccharide was extracted by water extraction and alcohol precipitation and characterized by Fourier transform infrared spectroscopy. The Bletilla striata polysaccharide was then tested for cell migration and proliferation using the mouse fibroblast cell line. Then, the Bletilla striata hydrogel was fabricated for acute wound health care of the mouse full-thickness excision. The results showed that the BSP enhanced the proliferation and migration of L929 cells. The superior wound healing capacity of the BSP hydrogel was demonstrated that it significantly accelerated the wound healing process in vivo in full-thickness skin defect wounded models. Compared to the saline group, the BSP hydrogel could accelerate wound healing and promote re-epithelialization and collagen deposition by means of TGF-β/Smad signal pathway activation. Taken together, BSP hydrogel would be a useful pharmaceutic candidate for acute cutaneous wound health care. Introduction Wound healing involves four overlapping phases of coagulation, inflammation, proliferation, and tissue remodeling and is a dynamic and complex process controlled by many factors [1]. e way and time of wound healing depends on the degree of injury, tissue regeneration ability, necrotic tissue, foreign body infection, and so on [2]. With the in-depth study of wound healing process in recent years, various strategies have been developed to attain skin lesion closure, including antibacterial ointments containing enzymatic substances, synthetic growth factors, polyurethane, hyaluronic acid hydrogels, and alginate fiber dressings [3]. However, the ideal curative features such as efficacy in absorbing wound exudates, flexibility, durability and adherence, and low cost are not fully attained yet [4,5]. Natural products from plants and marine lives, as an alternative source of drugs which modulate the inflammatory process in a shorter time period and promote skin tissue regeneration in the treatment of skin lesions, have become a popular topic of scientific research, such as honey [6], propolis [7], marine fungus [8], squid ink polysaccharide [9], and so on. ese natural products are believed to offer effective, affordable, and accessible forms of treatment. Among these natural products, polysaccharides have attracted widespread attention and have been demonstrated to promote skin tissue repair on the healing of dermal wounds by ameliorating oxidative stress, inflammation, and secondary trauma [10]. Bletilla striata polysaccharide (BSP) is a water-soluble polysaccharide extracted from the plant Bletilla striata and it is a polymer consisting of several 1,4-chain mannitol residues and 1,4-chain glucose residues [11,12]. e BSP has good pharmacological activities, such as promoting wound healing, antibacterial, antitumor, antifibrotic, and so on [13,14]. Simultaneously, the BSP has great potential and application prospects in biological delivery systems, wound dressings, and other biomaterials because of its great biodegradability and biocompatibility [15][16][17]. 
In our previous study, the BSP has been found to play a nontrivial role in promoting oral ulcer healing as a component in buccoadhesive wafers [11]. Nevertheless, the promoting wound healing effect of the BSP on dermal wound injury and its related mechanisms remain largely unclear. In this study, we extracted the BSP by the method of water extraction and alcohol precipitation. en the wound scratch and cytotoxicity assays were carried out to ensure the BSP's efficacy and toxicity profiles. Furthermore, according to the properties of the BSP, the polysaccharide hydrogels were prepared to repair dermal wounds in mice. In consideration of cut as the most common acute wound, murine full-excisional skin wound was employed to evaluate the wound healing effect of the BSP hydrogel. Meanwhile, the potential healing promoting mechanism of the BSP in vivo were also explored. In the light of these results, the present study wound provide a useful and efficient dosage form for topical wound healing of the BSP for the first time. We hope to develop a natural plant polysaccharide hydrogel product for promoting skin wound healing through our research. Preparation of BSP. e BSP was prepared as described previously [18]. In brief, the dry root of Bletilla striata was homogenized and dispersed in boiled water for 4 h. After removal of impurities by filtration, the possible polysaccharide fractions were precipitated by three volumes of cold ethanol (4°C) overnight. e precipitate was repeatedly washed with ethanol solution (v/v: 95%), resuspended in distilled water, and added with Sevag's solution (1/3 volume chloroform/n-butanol (4 : 1, v/v)), followed by rigorous agitation to precipitate proteins. After centrifugation, the aqueous phase was successively collected to repeat the Sevag deproteinisation process until no obvious denatured protein precipitate was found in the n-butanol interlayer. e deproteinated aqueous fraction was dialyzed against the membrane with molecular weight cut-off (MWCO) of 5 KDa. e resultant liquor was precipitated with 3 times the volume of 95% ethanol and placed in 4°C for 24 h. Precipitate of the polysaccharides was collected by centrifugation at 5000g for 10 min. e precipitate was washed with 95% ethanol and water-free ethanol, respectively, after suction and lyophilized in vacuo. Attenuated Total Reflection Fourier Transform Infrared Spectroscopy (ATR-IR). e ATR-IR spectrometer of the polysaccharides used was a Nicolet 460 FT-IR with Smart Golden Gate Diamond ATR ( ermo Scientific, Germany). About 3-5 mg of powdered sample was placed on the sample holder and compressed under pressure to form a pellet. e Spectrums of the powdered form of the BSP were measured in the wave number range of 500-4000 cm − 1 using 32 scans. ermogravimetric-Differential ermal Analysis (TG-DTA) . TG-DTA of the BSP powder was carried out simultaneously using a NETZSCH simultaneous thermal analyzer STA 449F3 Jupiter equipped with a TG-DTA sample carrier type supporting a ptRh10-Pt thermocouple (Netzsch, Germany). TG-DTA thermograms were taken using a standard Al 2 O 3 pan. Nitrogen was used as a sweeping gas, and the heating rate was 20°C/min. e polysaccharide sample (20 mg) was loaded in a pan without further treatment. e initial and end temperatures are 25°C and 500°C, respectively. In Vitro Cell Culture Assays. 
Mouse fibroblast cell line (L929; CCL-1, Mus musculus) purchased from the American Type Cell culture/ATCC was cultivated in DMEM with 10% fetal bovine serum and 1% penicillin streptomycin at 37°C in a humidified incubator with 5% carbon dioxide. MTT Cell Proliferation Assays. e cell proliferation induced by the BSP was determined using a tetrazolium salt (MTT) assay in vitro. Briefly, cells were seeded in a volume of 200 μL (3000 cells/well) on 96-well plates after cultivation with different concentrations of BSP. e culture medium containing serum was replaced by MTT every 24 h. A final MTT concentration of 0.5 mg/mL was added to the wells followed by incubation for 4 h at 37°C. e supernatant was discarded and replaced with DMSO (150 μL/well). e optical densities (OD) were measured at 570 nm with a microplate reader. e experiment was repeated in triplicate. e viable concentration was calculated using GraphPad Prism 5.0. Wound Scratch Assay. e assay was conducted as described by the literature method with slight modifications [19]. e wound healing assay mimics the proliferation and migration of cells during wound injury in vivo. It serves as an excellent in vitro assay for the study of cell-cell or cell-matrix interactions in cellular proliferation and migration to initiate the wound repair process. L929 cells were seeded in the 6well plate, and 10% FBS growth medium containing the serum-free medium supplemented with the BSP was grown to 90% confluence. After treatment for 48 h, the culture medium was removed and the monolayers were scratched using a 200 μL pipette to create a uniform cell-free wound area. Debris was removed by gently washing with sterile PBS. Cell movement into the wound area was monitored and photographed at 0, 24, and 48 h using an optical microscope. Animal Experimental Protocol. Healthy adult male Kunming mice (18∼22 g) were maintained individually in the IVC mice cages, in a temperature-controlled room (25°C) under a 12-h light and 12-h dark cycle, to avoid the bacteria interference. e animal experiments were performed in compliance with the experimental protocols approved by the Animal Investigation Committee of the Institute of Pharmacy College of Chengdu University of TCM. Preparation and Structural Properties of the BSP Hydrogel. e lyophilized BSP were dissolved in distilled water by magnetic stirring for 60 min, then swelled overnight at 4°C, and finally prepared into BSP solutions of different concentrations (10, 20, and 40 mg/mL). e crosssection of the hydrogel was examined using scanning electron microscopy (SEM; ZEISS SUPRA 40, Germany) operated at 20 kV acceleration voltage. Before observation, the hydrogels were freeze-dried at − 80°C for 2 days, fractured using a blade to obtain the hydrogel sheet, and Au sputter coated with ∼30 nm coating layer. Excision Animal Model. All mice were anesthetized by IP. injection 2% pentobarbital (45 mg/kg body weight), and the dorsal hair was shaved using a shaving machine. e surgical area was disinfected with Betadine. Full-thickness excisional wounds were created using 8 mm biopsy punch modified tools in the dorsal site of Kunming mice. Animals were divided into two groups (20 mice per group). e BSP hydrogel was applied to cover the wound area every day for 12 days, while the control group received no treatment (dose of BSP 0.4 g/Kg). 
The kinetics of wound closure was monitored through digital photography, and the wound area and percentage of wound closure were measured immediately after wounding and every day until 12 days using the following equation [20]:

Wound closure (%) = (A_0 − A_n)/A_0 × 100,

where A_0 was the original wound area after surgery and A_n was the wound area on day n after wounding. At day 6 and 12, all the BSP dressings were removed, and animals were sacrificed under anesthesia. Tissue specimens of the incised skin in each group were collected for histopathological examinations and biochemical analysis.

Serum Biochemical Factor Determination. At day 12, all the remaining mice were sacrificed by anesthetic overdose. Serum samples were obtained by blood centrifugation at a rotational speed of 3500 rpm for 10 min. Cytokines in serum samples, including TNF-α, IL-1β, iNOS, and SOD, were measured using various test kits. All procedures were performed according to the instructions of the manufacturer.

Histological Analysis of Wound Healing. One-third of the mice in each group were euthanized and sacrificed on day 6 and day 12 post injury. Wound lesion tissues were excised, fixed overnight in 4% buffered formalin solution, and embedded in paraffin. Tissue sections (5 mm) were stained with hematoxylin and eosin (H&E) for morphological assessment. The collagen analysis of skin wounds was performed using Masson's Trichrome Stain Kit (Sigma) and Sirius red staining.

Related Gene Expression by RT-PCR Analysis. The TGF-β/Smad pathway is involved in collagen deposition, which is an important step in wound healing [21,22]. The expression levels of three genes, i.e., TGF-β1, Smad2, and Smad4, were analyzed in the collected wound lesions on day 6 and day 12 by RT-PCR. The specific operation is as follows: firstly, total RNA in tissues was isolated using TRIzol™ Reagent (Invitrogen Co., Ltd., California, USA).

Statistical Analysis. Data were expressed as mean value ± standard error. All the data were statistically analyzed by one-way analysis of variance (ANOVA) using IBM SPSS Statistics (version 20.0) and GraphPad Prism software. Statistical difference was considered significant when probability values were less than 0.05 (p < 0.05).

Preparation and Characterization of the BSP Hydrogel. The polysaccharide content (65.3%) in the extracts was determined using the phenol-sulfuric acid method. The ability of Bletilla striata polysaccharides to form hydrogels is one of their important characteristics, and according to our preliminary experiments, the BSP cannot form hydrogels at low concentrations (10 mg/mL and 20 mg/mL). However, a BSP aqueous solution at a concentration of 40 mg/mL can form a hydrogel through water swelling. Therefore, a BSP concentration of 40 mg/mL was employed in this study. SEM was used to evaluate the topographic characteristics and morphology of the BSP hydrogel as they are related to swelling, dissolution, and release characteristics. Representative SEM images of the lyophilized BSP hydrogel are shown in Figures 1(a) and 1(b). Micrographs of the BSP hydrogel presented a porous interconnecting network. Some pores formed continuous channels, while others exhibited overlapping sheet-like structures. This porous structure would provide a large surface area to accelerate the process of swelling and dissolution.

ATR-IR Analysis. The ATR-IR spectra of the BSP are shown in Figure 2, and the TG-DTA thermogram in Figure 2(c).
Various thermal effects and enthalpy changes of the polysaccharide were exhibited, and an early endothermic event located between 90-100°C was attributed to water evaporation and loss of mass. The second weight loss region, located between 290-330°C, is attributed to the degradation and thermal decomposition of the polysaccharide. The third weight loss event is located in the range of 340-460°C and is due to the oxidation of organic matter.

Effect of BSP on Cell Proliferation In Vitro. The MTT cell proliferation assay was employed to detect the cytotoxicity of the BSP on L929 cells. Even when 100 μg/mL BSP was given, there was no significant effect on the viability of L929 cells in either the 24 h or 48 h treatment (Figure 3). Notably, 5 μg/mL and 10 μg/mL of the BSP could slightly promote cell growth compared with the control group. These results demonstrated the noncytotoxicity of the BSP on L929 cells.

Wound Scratch Assay. To determine whether the BSP promoted L929 cell migration, we performed wound healing assays. At 0 h, the cell scratch spacing of the control group, 5 μg/mL group, and 10 μg/mL group was basically the same. At 24 h and 48 h, the cell scratch spacing of each group became narrower. Scratches even disappeared at 48 h in the 5 μg/mL and 10 μg/mL groups. The cell scratch test showed that the BSP could significantly improve the migration of L929 cells and facilitate wound healing (Figure 4).

Accelerating Full-Thickness Excision Wound Healing. The wound healing activity of the BSP hydrogel was evaluated in a full-thickness excision wound model (Figure 5(a)). Compared to the control group, the BSP hydrogel could promote the healing process after wound induction. This can be attributed to the favorable water absorption ability and water vapor permeability of the hydrogel, which maintain a moist environment in the wound tissue. On days 3 and 6, the unhealed area of the BSP hydrogel group was smaller than that of the control group (Figure 5(b)). Only the BSP hydrogel achieved wound closure within 12 days, indicating that the BSP hydrogel possessed a more efficient healing effect on full-thickness wounds (Figure 5(a)). To better understand the effect of BSP hydrogel treatment on wound healing, epidermal migration and collagen deposition, which are the most common indications of the wound healing process [23], at the full-thickness excision wound were analyzed by Masson's trichrome staining [24]. The results showed that the thickness of the epidermis in the BSP hydrogel group was remarkably greater than that of the control group at day 12, and the length of the newly formed epithelium in the BSP hydrogel treated group was significantly longer than that in the control group at day 6 and 12, respectively (Figure 6). Furthermore, the collagen fibers in the BSP hydrogel group were more extensive and orderly arranged than those in the untreated control group, showing more blue area at either day 6 or day 12. Also, the collagen deposition at the wounds showed a time-dependent manner, in which the treated groups at day 12 displayed higher collagen levels. The contents of TNF-α, IL-1β, iNOS, and SOD in mouse tissues at day 12 post injury in the various groups were compared. In accordance with the histomorphological results, mice treated with the BSP hydrogel exhibited the lowest contents of inflammatory cytokines (TNF-α, IL-1β, and iNOS) compared to the control group (Figure 7(a)-7(c)).
On the contrary, SOD as one of the body's primary internal antioxidant defenses plays a critical role in reducing internal inflammation and lessening pain associated with conditions [25]. BSP hydrogel treatment could significantly elevate the SOD production in wound tissue (Figure 7(d)). ese results contributed to the optimal wound healing profiles in mice treated with the BSP hydrogel. At day 12 post injury, mice's wounds in the BSP hydrogel group have been almost healed up. Furthermore, the intracellular Smad protein is well known to transduce the extracellular TGF-β signal to the fibroblast nucleus for collagen production [26]. us, TGF-β/Smad signaling axis has been demonstrated the important role in collagen production [27]. Herein, mRNA levels of TGF-β1, Smad2, and Smad4 at day 6 and day 12 were quantitatively measured using RT-PCR analysis. e mRNA expression as densitometry band intensities of the target gene relative to GAPDH was shown in Figure 7(e) and 7(f ). Results indicated that especially BSP hydrogel treatment exhibited the highest expression of TGF-β1and Smad2 at day 6, and the highest expression of TGF-β1at day 12. erefore, these results indicated that the BSP hydrogel could increase the Smad2 heterocomplex translocation into the nucleus and induce the TGF-β-specific transcriptional response. Conclusion In this manuscript, the BSP, prepared by water extraction and alcohol precipitation, was found to be the most promising wound healing agent, as shown in the wound scratch assays. According to the properties of BSP, the BSP hydrogel was successfully fabricated for acute wound health care of full-thickness excision. e in vivo study demonstrated that the BSP as a wound healing patch significantly improved wound contraction, epithelization and the collagen deposition, and inflammatory response. Notably, to activate TGF-β1/Smad2 pathway was the related wound healing mechanism of the BSP hydrogel. Our results indicated that the BSP hydrogel can be a promising therapeutic approach for topical application in treatment of cutaneous wounds. Data Availability e data used to support the findings of this study are available from the corresponding author upon request.
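As a computational footnote to the Methods above, the wound-closure percentage and the between-group comparison can be reproduced with a few lines of code. This is a minimal illustrative sketch, not the authors' analysis pipeline (which used GraphPad Prism and SPSS); the function name and the wound-area values are hypothetical placeholders.

```python
import numpy as np
from scipy import stats

def wound_closure_percent(a0, an):
    """Wound closure (%) on day n: (A_0 - A_n) / A_0 * 100."""
    a0 = np.asarray(a0, dtype=float)
    an = np.asarray(an, dtype=float)
    return (a0 - an) / a0 * 100.0

# Hypothetical wound areas (mm^2) traced from digital photographs, day 0 and day 12.
a0_saline, a12_saline = np.array([50.3, 48.9, 51.2]), np.array([10.1, 12.4, 9.8])
a0_bsp, a12_bsp = np.array([49.8, 50.6, 52.0]), np.array([2.1, 1.5, 3.0])

closure_saline = wound_closure_percent(a0_saline, a12_saline)
closure_bsp = wound_closure_percent(a0_bsp, a12_bsp)

# One-way ANOVA across groups, mirroring the described analysis (significance at p < 0.05).
f_stat, p_value = stats.f_oneway(closure_saline, closure_bsp)
print(f"mean closure: saline {closure_saline.mean():.1f}%, BSP {closure_bsp.mean():.1f}%, p = {p_value:.3f}")
```

With two groups the one-way ANOVA reduces to a t-test-like comparison; the same call extends directly to additional treatment groups if they are added.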
2019-10-31T09:15:00.019Z
2019-10-24T00:00:00.000
{ "year": 2019, "sha1": "021089ae7a97a355ba657b8590d12e8101242746", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/ecam/2019/9212314.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "23df3081e5a7389e53531ec4ca440df2d32e38d5", "s2fieldsofstudy": [ "Medicine", "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
248887394
pes2o/s2orc
v3-fos-license
Feedback effect on the observable properties of $z>6$ AGN Active galactic nuclei (AGN) feedback has a major impact onto the supermassive black-hole (SMBH) growth, the properties of the host galaxies, and their cosmic evolution. We investigate the effects of different kinetic feedback prescriptions on the observable properties of AGN and their host galaxies at $z>6$ in a suite of zoom-in cosmological simulations. We find that kinetic feedback decreases the column density of the interstellar medium (ISM) in the host galaxy by up to a factor of $\approx10$, especially when the SMBHs reach high accretion rates ($\approx10-30\,\mathrm{M_\odot\,yr^{-1}}$). In particular, kinetic feedback is required to extend the ISM size to $>1$ kpc and match the observed sizes of the gas reservoirs in $z>6$ AGN host galaxies. Moreover, it produces unobscured lines of sight along which the AGN can be detected in the rest-frame UV band with magnitudes consistent with observed values of $z>6$ AGN. The assumed geometry of the outflow plays an important role in shaping the observed properties of high-redshift AGN. We find that a biconical geometry is favored over a spherical one to reproduce the observed properties, but it overestimates the number of multiple AGN systems detectable in X-ray observations. This result suggests that simplistic BH seeding recipes widely employed in cosmological simulations produce too many X-ray detectable multiple AGN at $z=6-7$, thus soliciting the adoption of more physically motivated seeding prescriptions. INTRODUCTION The existence of super-massive black holes (SMBHs) with masses larger than billion solar masses at z 6 (e.g., Bañados et al. 2018;Matsuoka et al. 2018a;Wang et al. 2021a), when the Universe was < 1 Gyr old, challenges our current understanding of SMBH and galaxy formation and evolution, and is thus one of the most pressing open issues in modern astrophysics (e.g., Woods et al. 2019). Their distance and faintness make observations of these objects difficult and strongly biased towards the most luminous and massive accreting SMBHs. A complementary approach is to use numerical simulations as tools to study the largely unknown phases of SMBH growth in the early Universe (e.g., Tanaka & Haiman 2009;Sijacki et al. 2009;Habouzit et al. 2016Habouzit et al. , 2019. However, observed properties of high-redshift accreting SMBHs, or active galactic nuclei (AGN), and predictions of numerical simulations have been compared only seldom (e.g., Ni et al. 2020;Habouzit et al. 2021;Di Mascia et al. 2021a, Zana et al. accepted). An important ingredient entering in numerical simulations focused on the early growth of SMBHs is the effect of AGN feedback (e.g. Costa et al. 2014Costa et al. , 2020Barai et al. 2018;Habouzit et al. 2019;Valentini et al. 2021), as it is often considered to have a major role in shaping the evolution of AGN and galaxies along the whole cosmic history (e.g., Fiore et al. 2017). In particular, optically-selected luminous quasi-stellar objects (QSOs) in the early Universe often present evidence for the launching of fast and massive multi-phase outflows (e.g., Maiolino et al. 2012;Cicone et al. 2015;Bischetti et al. 2019;Carniani et al. 2019;Schindler et al. 2020;Izumi et al. 2021; but see also, e.g., Decarli et al. 2018;Novak et al. 2020;Meyer et al. 2022), which are expected to affect the observable properties of the QSOs themselves and their host galaxies, such as X-ray oscuration, UV extinction, and gas content (e.g., Brusa et al. 2015;Ni et al. 2020). 
Outflows observed in QSOs are though to originate from fast nuclear winds, which, in turn, may be accelerated by several physical mechanisms, including radiation pressure, due to UV photons produced in the accretion disc, on dust grains or on partially ionized gas mediated by UV transitions, and magnetic effects (e.g. Proga et al. 2000;Murray et al. 2005;Fabian et al. 2008;Yuan et al. 2015;Ricci et al. 2017). The physical scales involved in these processes are those of the accretion disk (e.g., Giustini & Proga 2019). Since such scales cannot be resolved by large-scale cosmological simulations, different authors have modeled AGN feedback using several different recipes (e.g., Barai et al. 2018;Costa et al. 2020;Ni et al. 2020). Moreover, the effect of the outflow on the surrounding material can potentially depend on its geometry (e.g., Zubovas et al. 2016). Since the exact acceleration physics, and thus launching direction, of nuclear winds is not well understood, numerical simulations typically assume ei-ther spherical (e.g., Feng et al. 2016) or bi-conical (e.g., Sala et al. 2021) outflow geometry as study cases. Beside the properties of the individual galaxies hosting accreting SMBHs, numerical simulations provide also information on the environment of high-redshift luminous AGN. While these objects are expected to reside in the peaks of the dark matter halo distribution, which are generally characterized by large overdensities of galaxies (e.g., Costa et al. 2014;Wise et al. 2019), although with some scatter (e.g., Habouzit et al. 2019), observations struggle to provide us with a clear view of typical high-redshift QSO environment. In fact, z > 6 QSOs have been reported to reside in a variety of environments, including underdense, normal, and overdense regions (e.g. Ota et al. 2018;Mazzucchelli et al. 2019;Overzier 2021). The first spectroscopically confirmed galaxy overdensity around a z > 6 QSOs was presented recently by Mignoli et al. (2020), followed by a tentative confirmation of another structure by Overzier (2021). A significant fraction (≈ 40%) of z 6 QSOs has ALMAdetected dusty companion galaxies at distances of a few kpc (e.g. Willott et al. 2017;Decarli et al. 2018;Neeleman et al. 2019;Venemans et al. 2020). These satellite galaxies might host heavily reddened and buried AGN (e.g., Di Mascia et al. 2021a), although currently there is no strong observational evidence for the presence of accreting SMBHs in their centres (e.g., Connor et al. 2019Connor et al. , 2020Vito et al. 2019aVito et al. , 2021. Such objects would be typically brighter than inactive galaxies, expecially in the X-ray band. Therefore, their predicted number in numerical simulations can be tested against observational results to infer how well simulations approximate reality. In this paper, we present a study of the effect of AGN kinetic feedback on the observable properties of z > 6 AGN in cosmological simulations. In particular, we analyse a set of numerical simulations presented by Barai et al. (2018, hereafter, B18) with different kinetic feedback prescriptions, focusing on the most massive SMBH at z = 6 and its surrounding environment. We extract multiwavelength observables such as column density and radial extent of the gas distributed in the host galaxies, UV and X-ray AGN fluxes, and number of satellite AGN detectable over small (i.e., a few kpc) distances from the central SMBH. We compare these properties with results from multiwavelength observations. The paper is structured as follows. 
In § 2 we describe the numerical setup of the simulations, the AGN selection, and the method used to measure the gas column density and distribution. In § 3 we discuss the redshift evolution of the column densities for the considered AGN. In § 4 we present the observable properties of the simulated AGN and their host galaxies, and we compare them with empirical findings. In § 5 we investigate the presence of multiple AGN systems over scales of a few kpc, and we compare their detectability rates in the X-ray band with results from observations of high-redshift AGN. Finally, in § 6 we discuss and interpret the results, and in § 7 we provide a summary. All quoted distances are physical unless otherwise noted. We adopt a flat ΛCDM cosmology with H0 = 67.7 km s^-1 Mpc^-1 and Ωm = 0.307 (Planck Collaboration et al. 2016).

Numerical model
We consider the simulation runs AGNcone and AGNsphere by B18, which include kinetic feedback. We provide here a summary of the numerical setup and refer to the original works for an in-depth discussion. B18 used a modified version of the Smoothed Particle Hydrodynamics (SPH) N-body code gadget-3 (Springel 2005) to follow the evolution of a comoving volume of (500 Mpc)^3, starting from cosmological initial conditions generated with music (Hahn & Abel 2011) at z = 100, and zooming in on the most massive (i.e., 4 × 10^12 M⊙) dark matter (DM) halo, corresponding to a ≈ 3σ overdensity (e.g., Barkana & Loeb 2001), inside the box down to z = 6. Therefore, the final zoomed-in simulations focus by construction on a highly biased cubic region, with a volume of (5.21 Mpc)^3. The highest level of the simulation has a mass resolution of m_DM = 7.54 × 10^6 M⊙ and m_gas = 1.41 × 10^6 M⊙ for DM and gas particles, respectively. The softening length for gravitational forces for these high-resolution DM and gas particles is R_soft = 1 h^-1 ckpc. The code accounts for gas heating and cooling (including metal-line cooling) depending on the gas metal content, based on eleven element species (H, He, C, Ca, O, N, Ne, Mg, S, Si, Fe) that are tracked in the simulation (Tornatore et al. 2007). Star formation in the interstellar medium (ISM) is implemented following the multiphase effective subresolution model by Springel & Hernquist (2003), adopting a density threshold for star formation of n_SF = 0.13 cm^-3. The simulations include stellar winds, supernova feedback, and metal enrichment, and assume a Chabrier (2003) initial mass function in the mass range 0.1-100 M⊙ (Tornatore et al. 2007; Barai et al. 2013; Biffi et al. 2016). When a DM halo that is not already hosting a black hole (BH) reaches a total mass of M_h ≥ 10^9 M⊙, a BH with M_BH = 10^5 M⊙ is seeded at its centre. BHs are treated as collisionless sink particles and are allowed to grow by accretion of the surrounding gas or by mergers with other BHs. Gas accretion onto BHs is modelled via the classical Bondi-Hoyle-Lyttleton accretion rate Ṁ_Bondi (Hoyle & Lyttleton 1939; Bondi & Hoyle 1944; Bondi 1952), capped at the Eddington rate Ṁ_Edd:

Ṁ_BH = min(Ṁ_Bondi, Ṁ_Edd).

Accreting BHs radiate away a fraction ε_r of the accreted rest-mass energy, with a bolometric luminosity

L_bol = ε_r Ṁ_BH c^2,

where c is the speed of light. B18 fixed the radiative efficiency to ε_r = 0.1, a fiducial value for radiatively efficient, geometrically thin, optically thick accretion disks around a Schwarzschild BH (Shakura & Sunyaev 1973). A fraction ε_f = 0.05 of the total output energy is distributed to the surrounding gas in kinetic form.
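To make the accretion and feedback prescription concrete, the sketch below evaluates the Eddington-capped accretion rate, the corresponding bolometric luminosity, and the kinetic power injected into the gas for a given Bondi rate. It is an illustrative re-implementation of the relations above, using the standard expression for the Eddington rate, and not code taken from the B18 simulations; the example black-hole mass and Bondi rate are arbitrary.

```python
import numpy as np
import astropy.units as u
import astropy.constants as const

EPS_R = 0.1    # radiative efficiency adopted in B18
EPS_F = 0.05   # fraction of the radiated energy coupled to the gas as kinetic feedback

def eddington_rate(m_bh, eps_r=EPS_R):
    """Eddington accretion rate: Mdot_Edd = 4 pi G M m_p / (eps_r sigma_T c)."""
    mdot = 4 * np.pi * const.G * m_bh * const.m_p / (eps_r * const.sigma_T * const.c)
    return mdot.to(u.Msun / u.yr)

def bh_accretion_and_feedback(m_bh, mdot_bondi):
    """Eddington-capped accretion rate, bolometric luminosity, and kinetic power."""
    mdot_bh = min(mdot_bondi.to(u.Msun / u.yr), eddington_rate(m_bh))
    l_bol = (EPS_R * mdot_bh * const.c**2).to(u.erg / u.s)
    p_kin = EPS_F * l_bol
    return mdot_bh, l_bol, p_kin

# Example: a 1e8 Msun BH with a Bondi rate of 5 Msun/yr (illustrative values only).
mdot_bh, l_bol, p_kin = bh_accretion_and_feedback(1e8 * u.Msun, 5 * u.Msun / u.yr)
print(mdot_bh, l_bol, p_kin)
```

With these illustrative inputs the Bondi rate exceeds the Eddington rate (≈ 2.2 M⊙ yr^-1 for a 10^8 M⊙ BH at ε_r = 0.1), so the accretion is capped and L_bol ≈ L_Edd ≈ 1.3 × 10^46 erg s^-1, of which 5% is injected into the surrounding gas as kinetic feedback.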
In AGNcone the kinetic energy is distributed along two cones with a half-opening angle of 45 • . The direction of the cone axis is chosen randomly for each BH at the seeding time, and is kept fixed throughout the simulation (Barai et al. 2018), similarly to what is done in Zubovas et al. (2016). Instead, the AGN feedback in AGNsphere pushes away the gas particles along random directions, thus mimicking a spherical geometry. AGN selection We analyse the simulation snapshots in steps of ∆z = 0.2 from z = 10 to z = 8 and ∆z = 0.1 from z = 8 to z = 6. In particular, we follow the most massive SMBH at z = 6 in each simulation set, and consider a box with side size of 60 kpc centred on it. We refer to all of the SMBHs in the box accreting atṀ BH > 0.02 M yr −1 (i.e., L bol ≈ 10 44 erg s −1 ) as AGN. Fig. 1 presents the BH mass evolution of AGN in the two simulations. Each AGN is labelled with the initial letter of the run (C for AGNcone, S for AGNsphere). AGNcone forms two very massive (> 10 9 M ) BHs at z < 7, while only less massive BHs are formed in the AGNsphere run. This behaviour is linked to the implementation of the feedback: AGNcone allows the gas to accrete continuously along the equatorial directions, while the lack of a preferential direction along which the outflow is launched in AGNsphere does not allow for a steady and efficient accretion onto the SMBH. This effect can be appreciated in Fig. 2: the accretion rate of AGNcone is generally higher than that of AGNsphere, at least up toṀ ≈ 1 − 30 M yr −1 . At higher accretion rates, which are reached by the most accreting BHs at z < 7, AGN feedback prevents further increase of the accretion rate. Hereafter, we focus our analysis on the AGN that reach z = 6 with MBH > 10 8 M and L bol > 10 46 erg s −1 ; (see filled symbols in Fig. 1 and Fig. 2), which we refer to as "bright AGN" (i.e., C1, C2, and C3 in AGNcone; S1 and S2 in AGNsphere). These BH mass and luminosity values are typical of known z > 6 QSOs (e.g., Yang et al. 2021), allowing us to compare the physical properties of simulated and observed AGN in a consistent way. We note that, since the simulations focus on a single cosmic region at high redshift, the derived expectations on the AGN observable properties might be affected by cosmic variance. Gas column density and radial distribution Here we describe the method that we use to derive the distribution of hydrogen, helium, and metal column densities in the ISM for galaxies hosting AGN in the considered simulations. We make use of the hydrogen column density in the remaining of the paper to derive the observational properties predicted by the two considered simulations. We estimate the distribution of the column densities for the bright AGN in the simulations by launching 1000 randomly selected lines of sight (LOSs) toward each AGN from a distance d = 30 kpc. Each LOS is considered as the axis of a cylinder with basis radius of R soft . We note that the resolutions of the simulations do not allow us to probe structures on smaller scales, as, for instance, the existence of a dusty torus on pc scales. Then, each cylinder is divided along its length into bins of l bin = 0.25 kpc width, for a total of d l bin = 120 radial bins. We compute the density of each chemical element in a bin of the cylinder from the mass carried by each particle included in that bin. With this approach, we also obtain the radial distribution of the gas density. 
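As an illustration of the line-of-sight procedure described above, the following Python sketch (not the authors' pipeline) computes N_H along random sightlines from arrays of particle positions, masses, and hydrogen mass fractions; for simplicity it assigns each SPH particle wholly to the radial bin containing its centre instead of distributing its mass with the smoothing kernel.

    import numpy as np

    M_P = 1.6726e-24           # proton mass [g]
    KPC = 3.0857e21            # kpc in cm

    def los_column_density(pos, mass, x_h, r_soft, d=30.0, l_bin=0.25, n_los=1000, seed=0):
        """Hydrogen column density N_H [cm^-2] along random lines of sight.

        pos    : (N, 3) gas-particle positions relative to the AGN [kpc]
        mass   : (N,)   particle masses [g]
        x_h    : (N,)   hydrogen mass fractions
        r_soft : cylinder radius (softening length in proper kpc)
        """
        rng = np.random.default_rng(seed)
        n_bins = int(d / l_bin)
        area = np.pi * (r_soft * KPC) ** 2            # cylinder cross section [cm^2]
        nh_tot, nh_profiles = [], []
        for _ in range(n_los):
            u = rng.normal(size=3)
            u /= np.linalg.norm(u)                    # isotropic random direction
            s = pos @ u                               # position along the LOS [kpc]
            r_perp = np.linalg.norm(pos - np.outer(s, u), axis=1)
            inside = (s > 0.0) & (s < d) & (r_perp < r_soft)
            h_mass, _ = np.histogram(s[inside], bins=n_bins, range=(0.0, d),
                                     weights=(mass * x_h)[inside])
            profile = h_mass / (M_P * area)           # N_H contributed by each radial bin
            nh_profiles.append(profile)
            nh_tot.append(profile.sum())              # total N_H along this sightline
        return np.array(nh_tot), np.array(nh_profiles)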
Finally, we integrate along the cylinder to compute the total column density of hydrogen (NH ) and of the other elements. The resulting total NH is not sensitive to reasonably different values of l bin (i.e., from 0.25 kpc to 1 kpc). Therefore, we used l bin = 0.25 as this value allows us to sample well the radial distribution of the gas (see § 4.2). Fig. 3 (upper panel) presents an example of the derived column-density map centred on the QSO C1 in AGNcone. Each circle represents one of the 1000 random LOSs, which sample homogeneously the entire solid angle as seen from C1. To assess the effect of feedback on the column density ( § 3), we also consider an additional simulation run presented in Barai et al. (2018), that is identical in terms of initial conditions and physical prescriptions to the AGNcone and AGNsphere, except that BHs are not seeded. The only type of feedback in this run, which we refer to as noAGN , is due to supernovae explosions (see Barai et al. 2018 for detailed discussion). We associate each AGN in a simulation to the corresponding galaxy in the noAGN runs following a method similar to that described in Zana et al. (accepted): first, we identify the DM halo hosting the AGN as the one having its centre of mass closest to the position of the SMBH. Then, we identify the corresponding halo in the noAGN run by crossmatching the DM particle IDs in the two runs, and selecting the halo in noAGN which shares the largest fraction of particles with the initial AGN halo, further imposing that the mass difference must be within a factor of 10-50.% 2 Finally, we repeat the procedure described above on the selected halo in noAGN , and derive the column density distribution in absence of AGN feedback. At z > 8, the redshifts at which the noAGN snapshots are taken are significantly different from those of the runs including AGN, making the DM-halo match procedure highly uncertain. Thus, we limit the identification of the AGN-hosting galaxies counterparts in the noAGN run to z < 8. Fig. 4 presents the evolution of the column density for bright AGN in the AGNcone and AGNsphere simulations. Considering the AGNcone simulation, the AGN column densities are similar to, or slightly lower than, those derived for the corresponding galaxies in the noAGN run until the AGN accretion rate reachesṀ ≈ 10 − 30 M yr −1 . This happens at z ≈ 7 for C1 and C2, and z ≈ 6.3 for C3 (see Fig. 2). At later times, the AGN column density drops significantly by up to ≈ 1 dex and the accretion rate starts to oscillate. The 10% and 90% percentiles span up to one order of magnitude, especially at z = 6 − 7, when the accretion rates reach the maximum values, producing the most powerful conical outflows. COLUMN DENSITY EVOLUTION Instead, the column densities of the corresponding galaxies in noAGN (grey stripes in Fig. 4) keep on increasing relatively smoothly. This finding confirms the AGN NH drop and the presence of unobscured LOSs to the effect of the conical kinetic feedback. At low accretion rates the produced outflow cannot stop the infall of material, but once the accretion rate reaches high enough values, the energy carried by the outflow impacts a significant part of the gas in the halo, hindering further infalling, especially along the conical outflow directions. As a result, the NH decreases, as well as the AGN accretion rate, until more material is allowed to accrete, producing a new burst of powerful feedback. 
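As a concrete illustration of the AGN-to-noAGN halo matching described above, a simplified sketch follows; the dictionary layout of the halo catalogues and the mass-agreement threshold are assumptions made here for clarity, not the format or values actually used by the authors.

    import numpy as np

    def match_halo(agn_halo_ids, agn_halo_mass, noagn_halos, max_mass_ratio=10.0):
        """Return the noAGN halo sharing the largest fraction of DM particle IDs.

        agn_halo_ids  : array of DM particle IDs of the halo hosting the AGN
        agn_halo_mass : total mass of that halo
        noagn_halos   : dict {index: {"ids": array of particle IDs, "mass": float}}
        """
        agn_ids = set(np.asarray(agn_halo_ids).tolist())
        best_idx, best_frac = None, 0.0
        for idx, halo in noagn_halos.items():
            shared = len(agn_ids.intersection(np.asarray(halo["ids"]).tolist()))
            frac = shared / len(agn_ids)              # fraction of shared particles
            ratio = max(halo["mass"], agn_halo_mass) / min(halo["mass"], agn_halo_mass)
            if frac > best_frac and ratio <= max_mass_ratio:   # require mass agreement
                best_idx, best_frac = idx, frac
        return best_idx, best_frac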
Such a cyclic activity explains the decreasing median NH , the wider NH distribution, and the oscillatingṀ behaviour at later cosmic times. This result is in qualitative agreement with the self-regulation scenario discussed by, e.g., Sijacki et al. (2009) Trebitsch et al. (2019), according to which the AGN feedback controls the growth of the black hole and limits the duration of high accretion episodes by emptying the host galaxy gas reservoir, provided that the accretion rate is sufficiently high. However, we note that the physical interpretation of our results is complicated by the effect that one AGN may have on other AGN-hosting galaxies passing through its feedback cone. In fact, C1, C2, and C3 in AGNcone at z < 7 are always closer than 30 kpc, and reach minimum distances as small as 4 kpc. At these distances, powerful outflows launched from one AGN may affect nearby galaxies (e.g., Zana et al. accepted). As an example of the feedback effect on the column density, in Fig. 3, we compare the NH map centred on C1 with the radial velocity map of all particle within 10 kpc from C1. The maps correspond to z = 7.1, when C1 reaches a local maximum in accretion rate before the strong AGN feedback starts to impact significantly the NH (Fig. 4) andṀ starts to oscillate. Comparing the column density map (upper panel) with the map of the radial velocity of individual particles (lower panel), we notice that the two conical outflows, identified as regions with positive radial velocities, correspond to LOSs with low column densities. Such LOSs are those along which high-redshift AGN are more easily to be detected in the rest-frame UV band, as we investigate in details in § 4.3. Fig. 5 presents the fraction of LOSs along which NH < 10 22 cm −2 (solid lines) and NH < 10 23 cm −2 (dashed lines) for each bright AGN. Hereafter, we use the widely used threshold NH = 10 22 cm −2 to separate obscured and unobscured AGN. 3 For instance, Merloni et al. (2014) found that such a value returns the best agreement between samples of obscured AGN as defined in optical (e.g., narrow emissionline AGN) and X-ray bands. From Fig. 5 we infer that only at z 7 a fraction of the LOSs would appear as unobscured. In particular, at z 7 C1 presents unobscured LOSs over 10 − 40% of the solid angle, while this fraction is much more variable with redshift (i.e., 0 − 80%) for C2 and C3. The most massive BH in the AGNsphere simulation, S1, follows a somewhat similar NH evolution to that of the AGN in AGNcone: a roughly constant median NH value up to z ≈ 7 followed by a slightly decreasing and wider NH distribution (Fig. 4), and the appearance of unobscured LOSs (Fig. 5) at later cosmic times. However, some differences exist: first, the AGN NH is always significant lower than that of the corresponding galaxy in the noAGN run (grey stripe), even at z > 7. Secondly, the column density drop at z < 7 is not as strong as in the AGNcone case. Finally, at z > 7 the accretion rate of S1 is not as smooth as in the AGNcone case, Figure 4. Evolution of column density for bright AGN in the AGNcone (C1, C2, C3) and AGNsphere (S1, S2) simulations. We show the median value (solid line, color coded according to the AGN bolometric luminosity and accretion rate), and the 10% and 90% percentiles (dashed lines) computed by launching 1000 lines of sight. The gray stripes enclose the 10% to 90% percentiles of the column densities of matched galaxies in the same simulation sets where, however, BHs have not been seeded (i.e., the noAGN case). 
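The per-snapshot statistics plotted in Figs. 4 and 5 (median, 10th and 90th percentiles, and obscured/unobscured sightline fractions) reduce to simple operations on the 1000 N_H values of each AGN; a minimal sketch:

    import numpy as np

    def nh_statistics(nh, thresholds=(1e22, 1e23)):
        """Summarize the N_H distribution of one AGN at one snapshot.

        nh : array of column densities [cm^-2], one entry per line of sight.
        """
        nh = np.asarray(nh)
        p10, p50, p90 = np.percentile(nh, [10, 50, 90])
        unobscured = {t: float(np.mean(nh < t)) for t in thresholds}
        return {"median": p50, "p10": p10, "p90": p90, "fraction_below": unobscured}

    # toy example: a log-normal distribution of sightlines
    rng = np.random.default_rng(1)
    print(nh_statistics(10.0 ** rng.normal(22.5, 0.5, size=1000)))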
To compare with observational results ( § 4.1), the red arrows mark the 3σ upper limits derived for X-ray detected QSOs with > 10 counts from Nanni et al. (2018) and Connor et al. (2019). and keeps on increasing even at z < 7. These differences may be due to the prescripted geometry of the kinetic feedback in the AGNsphere case, in which gas particles are accelerated in a random direction during every accretion event. Therefore, in contrast with the AGNcone case, there is no preferential direction (i.e., the equatorial plane of the conical outflow) along which material can keep on accreting undisturbed for long periods of time at z > 7. In particular, the accretion rate of S1 never exceeds ≈ 10 M yr −1 , which is the approximate threshold after which the AGN kinetic feedback affects more evidently the NH distribution and the accretion rate of AGN in the AGNcone run. The column density evolution of S2, instead, does not appear to be strongly influenced by the AGN feedback. Although the median NH is slightly lower than the values found in the noAGN case, it remains constant with time, and does not drop even at z < 6.5, when S2 reaches similar accretion rate to S1. As a result, S2 would never appear as an unob-scured AGN. We note that the typical column density of S2 is a factor ≈ 3 higher than that of S1 at any redshift, and its accretion rate rises smoothly from z = 7 to z = 6. These properties suggest that higher accretion rates than the values reached by S2 are required in order to launch outflows powerful enough to sweep away the gas in the case of large column densities (e.g., Trebitsch et al. 2019), even when kinetic energy is distributed along random directions by the AGN feedback. The median values of NH we derive from the Barai et al. (2018) simulations are consistent with typical values found by Lupi et al. (2022). However, the resolution of that work is ×85 higher than our simulations, and allows the authors to sample compact regions of dense gas with NH 10 24 cm −2 , especially at z > 8, when AGN feedback has not yet affected significantly the ISM distribution and density in the host galaxies. One of the main methodological differences with that work is that we compare the ISM densities in the same Figure 5. Fraction of lines of sight obscured by column densities < 10 22 cm −1 (solid lines) and < 10 23 cm −1 (dashed lines) as a function of redshift for the bright AGN in the AGNcone (C1, C2, C3) and AGNsphere (S1, S2) simulations. The symbols are color coded according to the AGN bolometric luminosity and accretion rate. galaxies in which SMBHs are actively accreting or are not seeded at all. Thus, we probe directly the effect of AGN feedback on the ISM in the host galaxy. COMPARISON WITH OBSERVATIONS In this section, we compare the observable properties derived from the NH distributions of the AGN predicted by the simulations ( § 3) with observational results. In particular we focus on the comparison with constraints from X-ray observations ( § 4.1), the radial distribution of the gas reservoirs ( § 4.2), and the observed UV magnitudes ( § 4.3). X-ray obscuration X-ray observations are routinely used to constrain the column density of obscuring material along the LOSs of AGN. Low and moderate values of column densities (NH 10 22 cm −2 ) can absorb soft X-ray photons (rest-frame energies 2 keV), whereas larger column densities are required to absorb a high fraction of more energetic photons. However, X-ray observations of high-redshift QSOs (e.g. Vito et al. 2019b;Wang et al. 
2021b) sample rest-frame energies E > 3 keV, and are thus sensitive only to high column densities (NH 3 × 10 23 cm −2 ), at least at the sentivities of currently available facilities. Moreover, all of the known z > 6 QSOs have been selected based on their unobscured rest-frame UV emission (i.e., they are optically classified as type 1 QSOs), and thus are not expected to be heavily obscured in the Xray band. For these reasons, existing X-ray observations of bright z > 6 QSOs provide us with only loose upper limits of NH . The downward-pointing red arrows in Fig. 4 are the observed upper limits on NH derived for a sample of z > 6 QSOs by Nanni et al. 2017 andConnor et al. 2019, with typical luminosities L bol = 10 46 − 10 47 erg s −1 . The column densities derived for bright AGN in all of the considered simulations are lower than, or consistent with, such loose upper limits. Although the NH values found for the noAGN case are typically higher, they are still consistent with some measured upper limits. Therefore, the constraints on NH ob-tained from X-ray observations of high-redshift QSOs only marginally favour the presence of kinetic feedback. We note that constraining AGN obscuration using X-ray observations requires an assumption on gas metallicity, as Xray photons are mainly absorbed by metal atoms. Typically, solar metallicity is assumed, whereas the ISM metallicity of the host galaxies of the AGN in the B18 simulations is subsolar (e.g., by factors of ≈ 2 − 3 at z = 6; e.g., Zana et al. in prep.). This consideration reinforces the overall consistency between the NH values constrained from X-ray observations and found in the simulations, as significantly larger column densities would be required in the case of sub-solar metallicities to produce X-ray obscuration in excess to that observed in real QSOs. In § 5 we discuss the X-ray detectability of the QSOs in the simulations. Gas radial distribution We investigate the effect of kinetic feedback on the observable sizes of the gas reservoirs in high-redshift QSOs. From the radial distribution of NH derived for each LOS in § 2, we computed the radius from the centre of the galaxy which includes 90% of the gas contributing to the total NH . Then, for each galaxy, we computed the median value considering all of the 1000 LOSs, and define it as R90. We use such a quantity to quantify the size of the gas reservoir in a galaxy. Fig. 6 presents R90 as a function of redshift for every bright AGN in the AGNcone and AGNsphere simulations, as well as for the matched galaxies in the noAGN runs. All of the bright AGN in the AGNcone simulation (C1, C2, C3) have a similar evolution of R90: their gas reservoir sizes are constant (≈ 1 kpc) at z 7. At lower redshift, where NH decreases due to strong effect of the kinetic feedback, which is proportional toṀ and L bol (see the color-code of the circles in Fig. 4 and Fig. 6), R90 increases up to several kpc. This behaviour is expected considering that the AGN feedback applies a mechanical push to the surrounding gas particles. In fact, the size of the gas reservoir in the noAGN run, where the AGN feedback lacks (grey dashed lines in Fig. 6), remains constant or tends to even decrease at later cosmic times. The evolution of R90 for S1 in the AGNsphere simulation is similar to that of the AGN in the AGNcone simulation. However, the increase of R90 is stronger and begins at earlier cosmic times. We recall that the accretion rate of S1 is typically lower than that of the AGN in AGNcone (see Fig. 
2), and therefore the stronger evolution of R90 is not due to intrinsically stronger outflows launched by the AGN, but, as discussed in § 3, to the different geometry of the outflow: being launched along random directions at every accretion event, the outflow is more likely to transmit the kinetic energy to the gas particles in the galaxy even at low or moderate accretion rates. Instead, S2 does not follow the same evolution as S1. On the contrary, R90 decreases to sub kpc values approaching z = 6. As discussed in § 3 we ascribe this behaviour to the relatively low accretion rate, which does not produce feedback strong enough to efficiently affect the gas distribution in the host galaxy. We compare our findings with the observed extent of the [C II] emission of 25 z > 6 QSOs presented by Venemans et al. (2020), assuming that the [C II] emission line is a good tracer of the spatial extent of the total gas reservoir (e.g., Zanella et al. 2018;Sommovigo et al. 2021). We used the major axis of the deconvolved [C II] emission size (Tab. 3 of Venemans et al. 2020), which represents the FWHM of the emitting source, and converted it into the radius that includes 90% of the [C II] light, assuming a Gaussian distribution. 4 The resulting values are reported in Fig. 6) as black ticks at the redshift of each QSO. The AGN in the AGNcone simulation have R90 consistent with the observed values, while the median gas size of S1 is larger at nearly every redshift. S2 have a size consistent with the most compact QSOs in the Venemans et al. (2020) sample. However, this comparison is not fair: the ISM in S2 produces very large column densities at all redshifts and all LOSs (Fig. 4), and thus large expected values of dust extinction. All of the QSOs studied in Venemans et al. (2020) are instead rest-frame UV selected objects: we lack observational information about the extent of the gas reservoirs of buried high-redshift QSOs, as is S2. In all cases, the median gas size of the noAGN control galaxies are smaller than the observed values for QSOs, suggesting that kinetic feedback is required to produce the gas extents observed in real QSOs. UV magnitudes In § 4.1 we discussed how the available X-ray observations of z > 6 QSOs are not sensitive to the column density values that we derived for bright AGN in the simulations. Instead, the rest-frame UV emission of high-redshift AGN is expected to be severely affected by dust extinction even for low values of NH . In this section, we compare the expected rest-frame UV magnitudes of bright AGN in the simulations with the observed values of known z > 6 QSOs. We assumed that the intrinsic (i.e., unextincted) rest-frame UV spectra of the AGN-hosting galaxies in the simulations are dominated by the AGN (i.e., we do not include stellar emission) and are well represented by the Vanden Berk et al. We assumed a simple uniform slab of dust located in front of each AGN and an SMC extinction curve, and computed the measured rest-frame UV flux as where τ λ = k λ Σmf dust , k λ is the extinction cross section at wavelength λ, Σm is the mass column density of metals, which we computed in § 2.3, and the fraction of metal mass locked into dust is assumed to be f dust = 0.15 as in Di Mascia et al. (2021b). Finally, we computed the apparent magnitude at the wavelength corresponding to rest-frame 1450 A • , that is m1450. 
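A minimal sketch of the uniform-slab attenuation just described, assuming the observed flux is F_obs = F_int exp(-tau_lambda) with tau_lambda = k_lambda Sigma_m f_dust; the SMC extinction cross section at 1450 A and the intrinsic flux density are left as inputs, since their exact values are not reproduced here.

    import numpy as np

    def m1450_observed(f_nu_intrinsic, k_1450, sigma_metals, f_dust=0.15):
        """Apparent AB magnitude at rest-frame 1450 A behind a uniform dust slab.

        f_nu_intrinsic : intrinsic flux density at observed-frame 1450(1+z) A
                         [erg s^-1 cm^-2 Hz^-1]
        k_1450         : extinction cross section per unit dust mass column at 1450 A
        sigma_metals   : metal mass column density along the line of sight
        f_dust         : fraction of metal mass locked into dust (0.15 in the text)
        """
        tau = k_1450 * sigma_metals * f_dust         # optical depth of the slab
        f_nu_obs = f_nu_intrinsic * np.exp(-tau)     # attenuated flux density
        return -2.5 * np.log10(f_nu_obs) - 48.6      # AB magnitude zero point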
For all considered AGN, the metal mass is computed from Among the considered simulations, AGNcone produces the UV brightest AGN, which are consistent with the magnitudes of known QSOs at z 7. As discussed in § 3, such redshift range corresponds to the period when the AGN strong kinetic feedback strongly affects the gas column density in the host galaxy, strongly suggesting that known, optically selected z > 6 AGN are indeed observed preferentially along directions where AGN feedback has cleared the LOS of most of the gas and dust. This prediction is hard to be tested ob-servationally. Not only estimating the outflow direction is a difficult task, but the incidence of outflow in high-redshift AGN itself is still a matter of debate (e.g., Maiolino et al. 2012;Cicone et al. 2015;Bischetti et al. 2019;Novak et al. 2020;Izumi et al. 2021;Meyer et al. 2022). Moreover, z > 6 QSOs might have been detected along LOSs which have been previously cleared of most of the gas and dust by past outflows. In this respect, a caveat arises from the numerical implementation of the ISM properties in the Barai et al. (2018) simulations, which, as described in § 2.1, follow the prescription of Springel & Hernquist (2003). This model does not capture the ISM porosity and therefore is not able to resolve clumpy structures on pc scales. Resolving such structures might decrease the effective opacity of the medium and possibly produce more unobscured lines of sight, even in the absence of AGN feedback. In Fig. 7, only < 50% of the LOSs of an individual AGN have extinction values small enough to reproduce the observed magnitudes. We computed the probability that multiple AGN appear as UV bright (i.e., m1450 24) sources along the same LOS, and found that it is negligible. This result is consistent with observations, according to which, to date, no such a system of multiple UV-bright AGN has been discovered at high redshift. The most luminous AGN in the AGNsphere run, S1, reaches magnitudes as bright as the observed values only at z ≈ 6.5, while it fails at reproducing the magnitudes of z > 6.5 QSOs. This is due to the lower accretion rate, and thus lower intrinsic luminosity, of S1 than the accretion rates of bright AGN in AGNcone. The large column density of S2 results in dramatic extinction levels along all of the LOSs, such that only along a small fraction of the LOSs S2 has apparent magnitude consistent with those of observed highredshift QSOs, despite its intrinsic luminosity being similar to that of S1 at z < 6.5 (Fig. 2). MULTIPLE HIGH-REDSHIFT AGN ON 1-10 ARCSEC SCALES Typical separations between AGN in the Barai et al. (2018) simulations are ≈ 5−50 kpc, corresponding to only a few arcseconds in projection. To date, no multiple AGN system has been discovered observationally at z > 6 (e.g., Greiner et al. 2021), with the highest redshift AGN pair being recently discovered at z = 5.7 (Yue et al. 2021). This result could be due to dust extinction preventing the detection of other possible accreting SMBHs close to high-redshift QSOs, as we found in our simulations ( § 4.3). Alternatively, QSOs observed at z 6 intrinsically have no AGN satellite. The latter hypothesis implies that the simulations overpredict the number of bright AGN, due to, e.g., the specific numerical setup and seeding prescription. In addition, as discussed in § 2.1, the simulations focus on an overdense region, which maximizes the probability of forming multiple SMBHs, and thus bright AGN, in a small volume. 
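For reference, the quoted conversion between physical separations and angular scales follows directly from the adopted cosmology; a short snippet:

    from astropy.cosmology import FlatLambdaCDM
    import astropy.units as u

    cosmo = FlatLambdaCDM(H0=67.7, Om0=0.307)

    for z in (6.0, 6.5, 7.0):
        scale = cosmo.arcsec_per_kpc_proper(z)           # arcsec per proper kpc
        lo, hi = 5 * u.kpc * scale, 50 * u.kpc * scale   # 5-50 kpc separations
        print(f"z = {z}: 5-50 kpc correspond to {lo:.1f} - {hi:.1f}")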
To investigate better the relation between the predicted and observed number of systems of multiple AGN at high redshift, in § 5.1 we produce mock X-ray observations with the Chandra X-ray observatory 5 based on the AGNcone and AGNsphere simulations. Then, in § 5.2 we compute the probability of detecting multiple AGN on small angular separations, and compare the findings with observational results. Finally, in § 5.3 we investigate the potential of future X-ray facilities in detecting possible multiple faint AGN over small scales around bright high-redshift QSOs. Mock X-ray observations As discussed in § 4.1, the column densities that we derived in § 3 for simulated z > 6 AGN have a negligible effect on the X-ray emission at the observed-frame energies probed by X-ray telescopes, allowing us to factor out the effect of varying NH along different LOSs. However, we have to take into account another effect related to the specific choice of the LOS: the emission of different AGN might be blended along some LOSs due to projection effects, and appear as a single X-ray source. This effect might be important as the projected angular separations of the AGN in the considered simulations are comparable with the angular resolution of Chandra (i.e., ≈ 0.5 ), which is the existing X-ray observatory with the sharpest view. We produce mock observations using the SOXS v. 3.0 software, 6 using Chandra response matrices and ancillary files suitable for Cycle 20. SOXS accounts for three background components: a uniform Galactic component, a cosmic background due to point-like sources, and an instrumental component. For each simulation, we produce two sets of mock images, assuming an exposure time of 30 ks or 50 ks, which are typical lengths of real Chandra observations of z > 6 QSOs (e.g., Vito et al. 2019a;Wang et al. 2021b). For each set, we considered 100 random LOSs, along which all AGN have been projected on the sky plane according to their tri-dimensional positions in the simulations. This allows us to statistically take into account 1) the possible blending of multiple sources due to projection effects, and 2) the Poisson fluctuations of the number of detected X-ray photons at a given intrinsic flux. We convert the bolometric luminosities of AGN in the simulations into X-ray luminosities in the rest-frame 2 − 10 keV 6 https://hea-www.cfa.harvard.edu/soxs/ energy band using the Duras et al. (2020) relation. Then, we compute the fluxes in the 0.5-7 keV band (i.e., one of the standard energy bands used to analyse Chandra observations) for every AGN, and use them as input values to simulate the images. We adopt intrinsic powerlaw emission with photon index Γ = 2. This is a typical value for AGN up to z ≈ 6.5 (e.g. Nanni et al. 2017;Vito et al. 2019b), although Vito et al. (2019b) and Wang et al. (2021b) find hints for a steepening at higher redshifts. We also include absorption due to the measured value of column density along the considered LOS, although, as discussed above, the produced obscuration is negligible for our high-redshift objects, and a Galactic absorption component with NH = 5×10 20 cm −2 . These computations have been performed with XSPEC v.12.11 (Arnaud 1996; model phabs × zvphabs × powerlaw) 7 . Fig. 8 presents the expected X-ray flux of every AGN in the simulations as a function of redshift. X-ray detection of multiple AGN We ran a blind source detection procedure on the Chandra mock observations in the 0.5-7 keV band using the wavdetect tool in CIAO v.4.12 8 (Fruscione et al. 
2006), with a significance threshold of 10 −5 , over an area corresponding to < 30 kpc from the central QSO, to be consistent with the volume considered throughout this work (see § 2). We repeated this procedure for all snapshots in the z = 6 − 7 range, which includes most of the z > 6 QSOs observed with Chandra, thus allowing for a fair comparison with real observations. Fig. 9 presents the number of AGN detected in the mock Chandra observations with 30 ks and 50 ks exposures, averaged over the 100 LOSs, for each simulation. AGNcone predicts an average of ≈ 1 detectable AGN already with relatively short exposures (30 ks) and multiple detected X-ray sources using slightly longer observations (50 ks) over all of the considered redshift range. Instead, according to the AGNsphere run, 30 ks (50 ks) Chandra observations of z 6.2 (z 6.5) should typically return no detected source, but the probability to detect one or more AGN increases quickly approaching z = 6. In order to compare these results with real data, we collected all of the available Chandra observations of z = 6 − 7 QSOs with exposure times of 20-40 ks and 40-80 ks (Tab. 1). The median exposure time of the 20-40 ks (40-80 ks) observations is 38 ks (54 ks) and the median redshift of the targeted QSOs is z = 6.4 (z = 6.5). These values are well matched to our sets of 30 ks and 50 ks mock images, respectively. We repeated the detection procedure described above on the real Chandra observations, considering only an area of R < 30 kpc from the targeted QSO, to allow for a fair comparison with the mock image results. We stress that the blind detection procedure prevents any bias related to rest-frame UV pre-selection of possible X-ray sources. The last column of Tab. 1 reports the number of detected sources in the real observations, 9 which are almost equally split between no detected source and one detected source (i.e., the targeted QSO): the average numbers of detected Xray sources in one observation are 0.50 and 0.56 for the 20-30 ks and 40-80 ks samples, respectively. Similar values are obtained by splitting each sample according to its median redshift. Comparing these results with the expected numbers of detected sources in simulations (Fig. 9), we find that AG-Ncone overestimates the number of detectable AGN at all 9 We note that for almost all of the QSOs considered here, the results of the blind detection procedure agree with what reported in the literature, but for J084229.43+121850.5. Vito et al. (2019b) reported a detection of X-ray emission from this QSO, while here we report it as undetected. This apparent discrepancy is due to the different detection procedure (i.e., blind detection vs. rest-frame UV pre-selection of the target position) and significance threshold. redshifts, assuming both 30 ks and 50 ks exposure times. Instead, AGNsphere underestimates such number assuming 30 ks observations, while shows a strong dependence on redshift for longer exposures: at z > 6.5 and z < 6.5 it underestimates and overestimates, respectively, the average number of detected X-ray sources. Due to the small sample sizes of real QSO observations and the narrow range covered by the number of detectable X-ray sources, it is difficult to provide a quantitatively robust comparison with the predictions from simulations. Nonetheless, we attempt to do it by comparing the normalized histograms of detected sources in the mock and real observations over the entire z = 6 − 7 range (Fig. 10). 
This is justified by the relatively flat redshift distribution of the QSOs targeted by real observations (Tab. 1). For each set of mock images, we computed the two-sample Anderson-Darling test. 10 The null hypothesis is that the mock and real observations are drawn from the same parent population, for what the number of detected X-ray sources is concern. We found that the null hypothesis can be rejected with high significance (i.e., Anderson-Darling test sigificance level 0.001) for almost all combinations of simulations and exposure times: Fig. 10 confirms that AGNcone and AGNsphere overestimate and underestimate, respectively, the number of detectable X-ray sources. Mock simulations of AGNsphere with texp = 50 ks is the only set for which the null hypothesis cannot be rejected, although this simulation is not consistent with real observations for texp = 30 ks. It is worth noting that few z > 6 QSOs have been pointed with long Chandra exposures (100-500 ks; e.g. Nanni et al. 2018, Connor et al. 2020, Vito et al. 2021). Some of these observations were performed to check the presence of faint and possibly obscured AGN around z > 6 QSOs, for which companion galaxies have been detected with ALMA and HST. However, to date, no solid detection of such satellite AGN has been obtained (Vito et al. 2019a(Vito et al. , 2021Connor et al. 2019Connor et al. , 2020. Predictions for future X-ray facilities The high sensitivities of future X-ray facilities will allow us to push the search for AGN satellites of luminous optically selected QSOs at z > 6 down to intrinsic luminosities significantly lower than those probed with Chandra. In Fig. 8 we report as dotted grey lines the approximate expected sensitivity limits of future missions such as Athena/WFI (Nandra et al. 2013), AXIS (Mushotzky et al. 2019;Marchesi et al. 2020), and Lynx /HDXI (Gaskin et al. 2019), each one computed assuming 10 ks exposure time, and compare them with the sensitivity of a 50 ks Chandra observation. We computed these values by simulating X-ray observations of an X-ray source, assuming a simple power-law spectrum with photon index Γ = 2 and varying flux. In particular, for each instrument, we loaded response matrices and background files 11 in XSPEC, and computed the expected source and background count rates in a region including ≈ 90% of the expected point (1) ID of targeted QSO; (2) (4) Chandra observation ID considered in this work; (5) Exposure time; (6) number of detected X-ray sources according to the procedure described in § 5.2. * These QSOs have been observed with multiple ObsIDs, resulting in longer total exposure times than those reported here. We only consider the reported ObsIDs to allow for a fair comparison with our 30 ks and 50 ks mock observations. spread function (PSF); i.e., R = 1 for Chandra, AXIS , and Lynx , and R = 5 for Athena. Then, we computed the flux that returns a binomial no-source detection probability (i.e., PB; Weisskopf et al. 2007) such that (1 − PB) = 0.997, corresponding to 3σ in the Gaussian approximation. Fig. 8 shows that all of the considered next-generation Xray mission will provide us with a huge improvement in the capability of detecting faint AGN at z > 6, including satellite AGN around bright QSOs at z > 6, in a fraction of the time of a typical Chandra observation. Fig. 11 presents simulated X-ray observations with Chandra (50 ks), Lynx (10 ks), AXIS (10 ks), and Athena (10 ks) of a representative snapshot (i.e., z = 6.5) and LOS of the two simulation runs. 
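The binomial no-source probability used above for the sensitivity estimates can be sketched as follows, under a common formulation of the Weisskopf et al. (2007) criterion (this is our paraphrase, with illustrative count values, not code or numbers from the paper):

    from scipy.stats import binom

    def no_source_probability(src_counts, bkg_counts, area_ratio):
        """Probability P_B that background alone yields >= src_counts in the source region.

        src_counts : total counts in the source aperture
        bkg_counts : counts in the (larger) background region
        area_ratio : background-to-source area (or exposure) ratio
        """
        n = src_counts + bkg_counts         # total counts in the two regions
        p = 1.0 / (1.0 + area_ratio)        # chance a background count falls in the source aperture
        return binom.sf(src_counts - 1, n, p)

    # a source is considered detected at the ~3 sigma level if 1 - P_B >= 0.997
    p_b = no_source_probability(src_counts=8, bkg_counts=40, area_ratio=50.0)
    print(p_b, 1.0 - p_b >= 0.997)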
The satellite AGN will appear as multiple X-ray sources on a few arcsec scales. This implies that, in addition to high sensitivity, excellent angular resolution, such as that provided by AXIS and Lynx , is required to detect them individually. To probe this issue, we performed a blind detection run with wavdetect on these images, and compared the detected sources (black stars in Fig. 11) with the input AGN (colored circles): the identification of close objects like C1 and C2 is difficult even with missions with ≈ 0.5 arcsec angular resolution. The problem is clearly more evident with Athena, due to its PSF of a few arcsec. DISCUSSION As discussed in § 2.1, the outflow directions in the considered simulations are assumed not to be physically related to the host-galaxy properties and to be time-independent. In particular, the AGNcone simulation does not assume the outflow to be perpendicular to the plane of the host galaxy, as suggested by several observations of kpc-scale outflows or radio jets in the local universe (e.g., García-Burillo et al. 2014;Cresci et al. 2015;Morganti et al. 2015;Venturi et al. 2021), where the outflow geometry can be studied in details, and by some numerical simulations (e.g., Hopkins et al. 2012). Several physical mechanisms can concur in the acceleration of winds at sub-pc scales that eventually produce largescale outflows, including magneto-hydrodynamic effect (e.g., Sadowski et al. 2013), thermal driving (e.g., Proga 2007), radiation pressure acceleration, either applied on dust (e.g., Ishibashi & Fabian 2015) or mediated by UV transitions (e.g. Proga & Kallman 2004;Mizumoto et al. 2021), which might produce outflows with different geometries. Moreover, the outflow geometry might be affected by interactions with the surrounding environment as the outflow expands (e.g. Nelson et al. 2019;Talbot et al. 2021), and might change with time. Cosmological simulations cannot describe in detail such a complex, and largely unknown, physics and evolution of outflows with relatively simple numerical recipes. The goal of this paper is to investigate the effect of two particular large-scale outflow geometries (i.e., a spherical outflow and a bi-conical outflow parametrized as described in § 2.1) on the observable properties of high-redshift AGN, regardless of the sub-grid physical mechanisms responsible for their acceleration. Extensive numerical simulations with identical initial conditions and physics except for the outflow parameters would be required to check whether and how the results are sensitive to different choices of the outflow parameters. Kinetic feedback produced during the phases of fast accretion of SMBHs in the Barai et al. (2018) simulations has a significant impact on the surrounding material and is required to match the predicted observable properties of bright AGN with observational results. One of the strongest piece of evidence is represented by the study of the gas extent in the AGN host galaxies (Fig. 6): the gas reservoirs in the noAGN case (i.e., in absence of AGN feedback) are always more compact than those derived from ALMA observations of z > 6 QSOs (see also, e.g., van der Vlugt & Costa 2019). The effect of AGN feedback pushes the gas in the host galaxies to larger distances (i.e., up to a few kpc) from the centres, in agreement with observations (e.g., Cicone et al. 2015;Bischetti et al. 2019;Venemans et al. 2020;Izumi et al. 2021). 
Although other mechanisms related to AGN feedback may produce such an observable, by, for instance, preventing gas infall from large scales (e.g., Trussler et al. 2020) or causing fluctuations in the gravitational potential, which may lead to a radial migration of the material (e.g., van der Vlugt & Costa 2019), Barai et al. (2018) found that the mechanical removal of gas from the inner region of the host galaxies is the main process that affects their gas content in their simulations. We underline that also some 5 < z < 7 star-forming (1 − 70 M yr −1 ) galaxies have been found to show both an extended [C II] halo (e.g., Fujimoto et al. 2020) and broad wings in the [C II] emission-line profile (e.g., Gallerani et al. 2018;Ginolfi et al. 2020), suggestive of outflows possibly powered by a yet undetected accreting MBH (e.g., Orofino et al. 2021). At z < 7 the feedback produces a general decrease of the NH (Fig. 4), allowing for the appearance of unobscured (i.e., NH < 10 22 cm −2 ) LOSs (Fig. 5). Such directions are most probably those along which known z > 6 QSOs are preferentially observed, as the rest-frame UV selection of these objects requires low dust extinction. In fact, at z 6.5, when the feedback effect is the strongest, bright AGN in the AGNcone simulation are able to reach the UV magnitudes observed for known z > 6 QSOs (Fig. 7). However, such LOSs represent only a fraction of the total LOSs of an AGN (see also, e.g., Ni et al. 2020;Trebitsch et al. 2019;Lupi et al. 2022): more than half of the LOSs would appear too faint to be selected as high-redshift objects in current optical/near-IR surveys, suggesting that a large fraction of the high-redshift, intrinsically luminous QSO population is observationally missed due to strong UV extinction produced by the ISM only. The presence of a dusty torus on pc scales, which is not included in the simulations we have analysed, would further increase such a fraction. The outflow geometry likely plays an important role: in the case of a conic outflow, SMBH accretion proceeds at maximum efficiency through equatorial infalling of gas until M ≈ 10−30 M yr −1 (Fig. 2), producing BHs with masses of > 10 9 M at z = 6 − 7 (Fig. 1). At these accretion rates, the feedback regulates further accretion and reduces the typical obscuring column density, in particular along the cone direction (Fig. 3). In the case of outflows launched along random directions, the feedback can affect the growth of the SMBH and the NH distribution even at lower accretion rates, resulting in < 10 9 M BHs at z = 6, provided that the gas in the host galaxy is not too dense, as in the case of S2. Thus, the ISM properties (i.e., NH and radial size of the gas) of the brightest AGN in the AGNsphere run is in agreement with observations. However, hindering the formation of > 10 9 M BHs, the spherical geometry of the feedback in AGNsphere prevents AGN from reaching intrinsic luminosities comparable to known z > 6 QSOs at most redshifts (Fig. 7). Interestingly, even the most luminous AGN in AGNcone cannot explain the detection of UV-bright QSOs at z ≈ 7.5 (Fig. 7), due to the combination of the relatively small BH masses, and hence low accretion rates, which, by construction, are capped at the Eddington rate, and typically high NH at that early cosmic time in this simulation. The existence of bright QSOs at z ≈ 7.5 (e.g., Bañados et al. 2018;Wang et al. 2021b) requires different physical conditions for the SMBH formation and mass growth from those adopted in the considered simulations. 
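As a back-of-the-envelope illustration of this point, the following sketch computes the mass reached by a BH seed growing continuously at the Eddington limit, using the standard Salpeter e-folding time; the seed mass matches the simulations, while the seeding redshift and the alternative radiative efficiency are illustrative assumptions.

    import numpy as np
    from astropy import constants as const
    from astropy import units as u
    from astropy.cosmology import FlatLambdaCDM

    cosmo = FlatLambdaCDM(H0=67.7, Om0=0.307)

    def mass_at_z(m_seed, z_seed, z_end, eps_r=0.1, duty_cycle=1.0):
        """BH mass reached by z_end for continuous Eddington-limited accretion."""
        # Salpeter e-folding time: t_sal = [eps_r / (1 - eps_r)] * c * sigma_T / (4 pi G m_p)
        t_edd = (const.c * const.sigma_T / (4.0 * np.pi * const.G * const.m_p)).to(u.Myr)
        t_sal = eps_r / (1.0 - eps_r) * t_edd
        dt = (cosmo.age(z_end) - cosmo.age(z_seed)).to(u.Myr)
        return m_seed * np.exp((duty_cycle * dt / t_sal).decompose().value)

    # 1e5 Msun seed planted at z = 15 and accreting at the Eddington cap down to z = 7.5
    for eps in (0.1, 0.01):
        print(f"eps_r = {eps}: M(z=7.5) = {mass_at_z(1e5, 15.0, 7.5, eps_r=eps):.2e} Msun")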
12 Future numerical simulations may Figure 11. Simulated X-ray observations in the 0.5-7 keV band of the most-massive AGN at z = 6.5 and the surrounding satellite AGN in the AGNcone (upper row) and AGNsphere (lower row) simulations. From the leftmost to the rightmost columns, we simulated observations with Chandra/ACIS-S (50 ks), Lynx /HDXI (10 ks), AXIS (10 ks), and Athena/WFI (10 ks). For presentation purpose, the angular scale of the Athena image is different from the other cases, due to the larger PSF. The circles mark the location of the simulated AGN for a representative line of sight, and are color coded as in Fig. 1. The black stars mark the position of X-ray detected sources obtained with a blind detection procedure. explore such conditions as viable ways to reconcile the expected and observed properties of z > 7 AGN. Non-mutually exclusive possibilities are: (a) different BH seeding mechanisms, that is, bright and massive QSOs discovered at z ≈ 7.5 may be grown from more massive BH seeds or have been seeded at earlier redshift than the SMBHs in the simulations. (b) Sustained periods of super-Eddington accretion at z > 7.5, whereas in the simulations the SMBH accretion rate is capped at the Eddington limit. (c) Mass accretion characterized by a lower radiative efficiency than the value used in the simulations (i.e., r = 0.1). In this case, the mass that is not converted into radiation contributes to the growth of SMBH, which can reach higher masses than those found in simulations at a given time. For instance, Davies et al. (2019) report observational evidence for possible low radiation efficiency ( r ≈ 0.001) in highredshift QSOs. (d) High-redshift AGN typically reside in regions which are even more overdense than that investigated in the Barai et al. (2018) simulations, thus favouring the formation of SMBHs at earlier epochs. However, this possibility would arguably make the discrepancy between the observed and expected number of multiple X-ray detected AGN on small scales even worse. In addition, observational studies return contradictory results on the typical large-scale environment of high-redshift sions, as the simulations focus on a single cosmic region at high redshift. The analysis that we have performed demonstrates that the comparison between several observable properties of AGN predicted by the Barai et al. (2018) simulations and the observational results, including both the properties of the individual galaxies and the environment, can help us to validate the recipes and assumptions adopted in numerical simulations. In particular, we found that AGN in the considered simulations match the gas radial distributions and apparent UV magnitudes of high-redshift QSOs. In addition, the same set of simulations has been demonstrated to reproduce well a number of physical properties of z > 6 QSOs, such as dust properties (Di Mascia et al. 2021b), multi-wavelength spectral energy distribution (Di Mascia et al. 2021a), and the number of UV-detected and [C II]-detected satellite galaxies (Zana et al. accepted). However, we also found that the predicted number of X-ray detectable satellite AGN located over small scales around luminous high-redshift QSOs both in the AGNcone and AGNsphere simulations does not agree with the observational results. This observable is relatively easy to estimate from simulations as it depends primarily on the BH accretion rate only, once a suitable conversion to X-ray luminosity is assumed. 
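For reference, such a conversion can be sketched as follows. This is a simplified stand-in for the procedure used for the mock images: it adopts a constant 2-10 keV bolometric correction (the paper uses the luminosity-dependent Duras et al. 2020 relation), an unabsorbed Gamma = 2 power law, and no intrinsic or Galactic absorption, which is negligible at these energies and column densities.

    import numpy as np
    from astropy.cosmology import FlatLambdaCDM
    import astropy.units as u

    cosmo = FlatLambdaCDM(H0=67.7, Om0=0.307)

    def observed_flux_05_7keV(l_bol, z, k_bol=100.0, gamma=2.0):
        """Observed-frame 0.5-7 keV flux [erg s^-1 cm^-2] of an unabsorbed AGN.

        l_bol : bolometric luminosity [erg s^-1]
        k_bol : constant 2-10 keV bolometric correction (illustrative value only)
        """
        l_2_10 = l_bol / k_bol                            # rest-frame 2-10 keV luminosity
        e1, e2 = 0.5 * (1.0 + z), 7.0 * (1.0 + z)         # rest-frame band sampled by 0.5-7 keV
        if gamma == 2.0:                                  # energy flux scales as ln(E2/E1)
            band = np.log(e2 / e1) / np.log(10.0 / 2.0)
        else:
            band = (e2 ** (2.0 - gamma) - e1 ** (2.0 - gamma)) / \
                   (10.0 ** (2.0 - gamma) - 2.0 ** (2.0 - gamma))
        d_l = cosmo.luminosity_distance(z).to(u.cm).value
        return l_2_10 * band / (4.0 * np.pi * d_l ** 2)

    print(observed_flux_05_7keV(1e47, 6.5))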
Moreover, gas and dust absorption does not affect significantly the observed X-ray emission from high-redshift AGN, as opposed to UV emission, up to high column densities (log N H cm −2 ≈ 23.5 − 24.0; see § 4.1 and § 4.3). The mismatch between the number of multiple X-ray detected AGN on small scales between simulations and observations may be related to numerical issues and physical pre-scriptions. In particular, the simplistic BH seeding recipe implemented in the considered simulations (i.e., a 10 5 M BH is placed in the centre of a galaxy when this reaches a given mass threshold) naturally leads to the formation of a large number of SMBHs, that would appear as bright AGN at later cosmic times. Similar seeding recipies have been commonly adopted by most cosmological simulations (e.g., Costa et al. 2014, Di Matteo et al. 2017, Barai et al. 2018, Smidt et al. 2018, Lupi et al. 2019, Valentini et al. 2021, and typically mimic the "heavy seed" formation channel for SMBHs (e.g., Lodato & Natarajan 2006;Ferrara et al. 2014). However, theoretical models of "heavy seed" formation require stringent physical conditions on, e.g., metallicity, physical state of the gas, ad radiation fields (e.g., Ferrara et al. 2014). Accounting for such conditions in cosmological simulations is particularly difficult, but would reduce the number of formed SMBHs, and thus the discrepancy with observational results. Another possibility is that observed QSOs at high redshift do not reside in regions as dense as those probed in the analysed simulations (but see, e.g., Zana et al. accepted). In this case, the formation of multiple SMBHs is expected to be hindered, helping us reconcile the expected number of X-ray sources with observational results. In addition, we would also expect to form less massive BHs, with direct consequences on the observational expectations discussed in this paper, as the BH mass is tightly linked with the maximum accretion rate, and thus AGN luminosity and feedback strength. Qualitatively, we would expect to derive fainter rest-frame UV and X-ray fluxes, weaker feedback, and, as a consequence (see Fig. 6), more compact gas reservoirs (i.e., similar to the noAGN case) than the values discussed in § 4.2, § 4.3, and § 5. Future X-ray facilities will provide us with the required sensitivity and angular resolution to investigate the presence of multiple faint AGN around bright high-redshift QSOs down to unprecedented flux limits (see § 5.3). SUMMARY AND CONCLUSIONS We studied the observable properties of z = 6 − 10 bright AGN in a suite of zoom-in cosmological simulations by Barai et al. (2018) characterized by the inclusion of AGN kinetic feedback with either bi-conical (namely, AGNcone) and spherical (AGNsphere) outflow geometry. We focused our investigation on the gas column density and size in the host galaxies, the AGN rest-frame UV magnitude and X-ray fluxes, and the detectability of systems of multiple AGN over a few kpc scale in the X-ray band. We compared these quantities with a control simulation in which SMBHs are not seeded (i.e., noAGN ), and observational results of z > 6 AGN. We summarize our findings as follows. • AGNcone produces three bright AGN that grow up to 5 × 10 8 < MBH < 5 × 10 9 M at z = 6. These objects are characterized by a steady increase of their accretion rate up to ≈ 10−30 M yr −1 . Once such high values are reached (at z ≈ 6.5 − 7), the strong AGN feedback prevents further increase of the accretion rate. 
This behaviour is linked with the biconical geometry of the outflow, that allows steady infalling of material along the equatorial directions, at least until the feedback grows strong enough to affect most of the gas in the galaxy halo. In AGNsphere, the spherical geometry of the outflow affects gas accretion already at low and moderate SMBH growth rate. For this reason, the two bright AGN produced in AGNsphere reach lower values of BH masses (i.e., 2 × 10 8 < MBH < 5 × 10 8 M ) and accretion rates (Ṁ < 10 M yr −1 ) than objects in AGNcone. • AGN host galaxies in AGNsphere have gas column densities of NH ≈ 10 23 cm −2 from their formation up to z = 6.5−7, when NH presents a remarkable drop due to the strong AGN feedback. In fact, the NH in matched galaxies in noAGN continues to increase during the entire considered redshift range. The brightest AGN in AGNsphere presents a similar behaviour as those in AGNcone, although the NH is typically slightly lower. We interpret this difference again as due to the assumed spherical symmetry of the outflow. Instead, the second bright AGN in AGNsphere do not reach accretion rate sufficiently high to significantly affect the gas in the host galaxy. Our findings are consistent with the upper limits on NH recently reported for a set of z > 6 AGN observed in the X-rays. • Kinetic feedback is required to match the gas extent reported for high-redshift QSOs (i.e., up to a few kpc). In fact, galaxies in noAGN present typical gas sizes of < 1 kpc, while the extents of the gas reservoirs of AGN in AGNcone and AGNsphere increase up to the observed values of a few kpc at z 7. The exception is the second bright AGN in AGNsphere, due to its relatively low values of accretion rate. • All AGN in the simulations would appear as obscured (i.e., NH > 10 22 cm −2 ) along all lines of sight (LOSs) at z > 7. These objects would be missed by currently employed UV-based selection methods, which are heavily affected by dust extiction, and would require observations in different bands (e.g., X-ray or infrared) to be unveiled. At later cosmic times, a fraction of LOSs (up to ≈ 80%, depending on the specific AGN and redshift) have NH < 10 22 cm −2 . These are the preferential directions along which known, UV-selected z > 6 QSOs are observed. • Under simple, but reasonable, assumptions on the gasto-dust mass scaling and dust distribution, we estimate the apparent UV magnitudes (m1450) of the AGN in the simulations along different LOSs. We found that AGN in AGNcone have m1450 consistent with those observed for real highredshift QSOs (i.e., m1450 < 25) along 50% of the LOSs at z < 7. AGN in AGNsphere, instead, have fainter magnitudes, due to the lower instrinsic luminosities, and, for the second AGN, the high extinction levels along most of the LOSs. No AGN in the simulations can reproduce the observed UV magnitudes of the few z ≈ 7.5 QSOs known to date, whose formation and accretion history are likely not well captured by the prescriptions assumed in the simulations. • The presence of multiple bright AGN over scales of a few kpc led us to investigate their detectability in X-ray observations with Chandra, and to compare the results with real observations of z > 6 QSOs. We found that the AGNcone run significantly overpredicts the number of X-ray detected multiple AGN at high redshift. 
Instead, AGNsphere produces AGN with a lower X-ray detection rate than typically derived from relatively shallow (i.e., 30 ks) observations, while it is consistent with the results obtained with longer (i.e., 50 ks) exposures. These results demonstrate that the AGN in the considered simulations have physical properties consistent with those of real QSOs as far as the column density and extent of the gas in the host galaxies and the UV magnitudes are concerned. A bi-conical outflow geometry is favored over a spherical one, as it reproduces the high luminosities and SMBH masses observed for z = 6 − 7 QSOs. However, neither simulation can explain the recent discovery of luminous QSOs at z ≈ 7.5, whose SMBHs may have been seeded at higher redshift than assumed in our simulations, or may have undergone extended periods of super-Eddington accretion. Moreover, we showed that the number of multiple AGN detectable in the X-ray band over few-kpc scales is the observable property that the considered simulations struggle the most to reproduce. We propose that this issue may be due to the simplistic BH seeding methods generally implemented in cosmological simulations, which do not account for the complex physics related to the formation and rapid growth of massive BHs in the early Universe. Future X-ray observatories will provide the sensitivity required to investigate the possible presence of multiple faint AGN satellites around luminous QSOs at high redshift.
Families Braulidae and Streblidae (Insecta: Diptera) as ectoparasites of mammals . Introduction Carnoidea is a poorly defined superfamily.In 1989, ten synapomorphies were described for the group, but most of these have later been challenged.As of 2006, the following synapomorphies were described: the uppermost fronto-orbital bristle(s) of the head is acclimate; the phallus of the male is flexible, unsclerotized, simple, and elongate; and the phallus is microtrichose Braulidae are associated with honey bees, with larvae developing in beeswax while adults attach to bees and feed from bee mouthparts.Braulidae, or bee lice, is a family of true flies (Diptera) with seven species in two genera, Braula, Nitzsch, 1818, and Megabraula Grimaldi & Underwood, 1986.They are found in honey bee colonies due to their phoretic, inquiline, and kleptoparasitic relationships with the bees (Figure 1) [1][2][3]. Objective The objective of this bibliographical production is to understand the biological, ecological, and taxonomic characteristics of the Braulidae and Streblidae families. Methods In terms of the type of research source, we worked with scientific articles published in national and international journals.This type of production, in addition to being commonly the most valued in all bibliographic production, is the most easily accessed.Access to articles was through virtual libraries such as SciELO, ResearchGate, Hall, USP, UNB, CAPES, Qeios, and LILACS. Geographical distribution This fly has spread through: Africa: Congo, Egypt, and Morocco.Asia: India and the Soviet Union, Australia: Tasmania.Europe: for the most part.South America: Argentina, Chile, Brazil, Trinidad and Tobago, Venezuela.United States: Alabama, Delaware, Illinois, Maryland, Minnesota, York, North Carolina, Ohio, Pennsylvania, Tennessee, Virginia, West Virginia, and Wisconsin.In Florida, only a single specimen collected from a queen bee is known [1][2][3][4]. Stages  Egg: The eggs are white, oval in shape, and with two flat lateral ridges, parallel to the long axis of the egg.Imms reported that the average length, without the ridges, is 0.78 to 0.81 mm and the width 0.28 to 0.33 mm.Including the flanges, a typical egg measured 0.84 mm by 0.42 mm.The eggs are laid in various places in a hive, in empty cells, in the breeding boxes, or in the gnawed wax that is usually on the floor of a colony.The incubation period for eggs varies between two days during summer and 7.4 days during winter. Larva: The larvae emerge from the eggs beginning to build a tunnel under the opercula and sometimes in the walls and bottom of the cells of the combs.These tunnels show the infested paintings as if they had fine fractures.The larvae feed on honey and pollen inside tunnels in the wax.The larvae take between 7.1 and 10.8 days to finish developing, depending on the season of the year. Pupa: The prepupa takes 1 to 2.7 days and its color is creamy white.The pupae are white or yellowish, 1.4 to 1.7 mm long by 0.5 to 0.75 mm wide. Adult: The adult has rudimentary eyes on the antennae that appear as pale dots on the surface of the cuticle surrounded by darker pigmented chitinous rings.There is no trace of wings or rockers.The tarsus has five segments; Each terminal segment contains a comb-like structure, divided in the center, with a variable number of teeth.The combs allow Braula to cling firmly to the host [1][2][3][4][5][6]. 
Biological Cycle
At the beginning of her biological cycle, the ovoviviparous female lays eggs on the inner surface of the cappings that cover the honey-filled cells or on their walls. Larval development takes place here, with the larvae excavating tunnels as soon as they hatch and feeding on honey and pollen. The insect then passes through the pupal stage and, as an adult, attaches itself to the body of an adult bee. It affects all castes, but has a preference for the workers that feed young larvae and for the queen, since the latter has a richer diet in terms of quality and quantity and moves less. Braula is a genus of insects in the order Diptera, family Braulidae. These insects are very unusual: they are wingless and flattened, making them almost unrecognizable as Diptera [1][2][3][4][5][6][7][8][9].

Biology and Ecology
Braula coeca Nitzsch, 1818 is a pest of honey bees known as the bee louse; its larvae tunnel through the wax combs in hives and the adults attach themselves to the bees' bodies. There is debate among experts as to whether B. coeca actually causes significant harm to bees. These insects can be found in places where bees congregate, such as flowers or water and mineral sources (salt pans), waiting to grab hosts from uninfested hives. An adult Braula is about 1.6 mm long (Figure 2) [1][2][3][4][5][6][7][8][9][10][11][12][13].

Epizootiology
The bee louse overwinters in the hives. The ovoviviparous female deposits her eggs, which are white, elliptical and about 70 µm long, on the inner side of the cappings that cover the honey-filled cells. She sometimes also deposits eggs on the cell walls, but never in the brood cells. Development is completed under these cappings, and the larva excavates a tunnel that is small in diameter at first and increases in size as the parasite grows, obtaining its food from the stored honey and pollen.

Once larval development is complete, Braula pupates within the tunnels it has excavated and, on reaching the adult state, attaches itself to the body of a bee. The complete cycle, from oviposition to the emergence of the adult parasite, takes about 21 days. Parasitized bees, especially queens, are restless, nervous and weakened, and at intervals they shake their legs or rub their bodies with their wings in order to rid themselves of the parasites, but without any success; these signs may be noticed when the beekeeper carries out inspections of the hives.

There is an isolated treatment performed on the queen, which consists of removing the lice from her body using a toothpick dipped in honey. When parasitism is heavy, a general treatment is recommended, which may include the following.
Nicotine vapors: The hive entrance and the interior of the hive are smoked with nicotine through the lid and below it. The hive is then covered again and left for a few minutes so that the tobacco can act on the lice. The effect of the tobacco numbs the lice, so they release their hosts and fall to the floor of the hive. The floor is therefore either changed once the tobacco has done its job or protected with a sheet of newspaper before the hive is smoked.
Naphthalene: Mothballs are applied to the bottom of the hive, proceeding in a similar way to the treatment with nicotine vapors.
There is a method to limit the proliferation of the parasite, based on the hypothesis that the insect has only one generation per year: it is enough to uncap the honey boxes present in the hive during the larval period of the parasite in order to break its biological cycle [13][14][15][16][17]. Braula coeca is an external commensal located on the body surface of the bee, more precisely on the back, at the junction of the thorax and abdomen. It mainly parasitizes the queen, less frequently the workers, and almost never the drones and foragers. The greater presence of lice on the queen is due to her permanent presence in the hive, and not to a predilection of the lice (Figure 3). The damage these insects cause to the hive occurs at two moments: the first during their development, when the larvae dig tunnels through the wax, and the second in their adult phase, when they occur on the queens in high frequency and number and can reach the area of the proboscis, where their movements irritate the queen and cause regurgitation of the food content on which they feed. Furthermore, the presence of lice on the queen's body causes discomfort that leads to a considerable decrease in oviposition, resulting in a decrease in the number of larvae and therefore in the size of the colony [13][14][15][16][17].

Study 2. Genus Braula: vector of the parasite Nosema apis Zander, 1909 (Microspora: Nosematidae)
Nosema apis is a spore-forming microsporidian parasite that causes nosemosis in honey bees. It infects adult honey bees; sick bees cannot fly and wait in front of the hive entrance. Peak infection occurs in early spring. The spores of N. apis are light-refractile and oval, measuring about 2-4 by 3-7 µm. They develop within the epithelial cells of the stomachs of adult bees, which is why nosemosis is a disease of adult bees. Under normal conditions nosemosis does not pose a great danger to bees, but when it occurs together with another disease it causes mass deaths in colonies (Figures 4-5). Other factors that play a role in transmission are, briefly, the use of infected frames at the end of summer, the crushing of bees between covers, and the transfer of colonies to other places. As a result of this infection, the lifespan of adult worker bees is reduced by half in spring and summer; because they have to feed the brood during this period, they are less resilient.

Since the brood-feeding glands are degenerated, the brood cannot be fully fed. Because these sick bees have a large amount of water in their stomachs, their risk of contracting dysentery increases. If the queen bee is infected, she shows sensitivity, her ability to lay eggs decreases and she eventually dies, although some queens continue to lay eggs even when the disease is severe. Nosema is transmitted to the queen bee by worker bees. Spores of N. apis die in water at 58 °C within 10 minutes, and they die in approximately 20 hours when exposed to direct sunlight. A microscope is used to identify N. apis spores: the stomachs of the bees are removed, crushed with physiological saline, stained with Giemsa, and examined for spores under the microscope. Additionally, when intestines crushed with physiological saline are stained with 0.1% Nigrosin, the spores appear white and shiny against a black background. Native examination (physiological saline) and stained examination (physiological saline with 1% safranin-methylene blue) are also used.
At necropsy, the stomach of bees that have died from nosemosis is white, whereas the stomach of healthy bees is yellow or yellow-green in color.

Protection and control: It is always necessary to keep the colonies in strong condition. Old queens should be replaced by young and strong ones. Bees are fed syrup containing fumagillin, using newer preparations (Fumidil B). The temperature of the syrup should not exceed 49 °C. For the treatment of nosemosis, fumagillin is added to the syrup at a ratio of 1/844 and given with the early spring and late autumn feedings [13][14][15][16][17][18][19][20].

Family Streblidae
It is still uncertain whether bat flies and other parasitic arthropods of these flying mammals play a relevant role in the transmission of zoonotic diseases associated with bats and are of importance to humans and other animals. Where viruses are ecologically and epidemiologically linked to bats but are rarely found in the bats themselves, they may actually represent viruses of bat flies or other bat ectoparasites. Ectoparasites and their hosts constitute very appropriate systems for studying questions related to diversity and abundance patterns and to the intrinsic factors of spatial and temporal interaction between different species [21][22].

Several factors influence the diversity of the ectoparasite community, among which the size and type of shelter of the host species stand out. The microclimatic conditions that the shelter provides to bats strongly influence their ectoparasites. Shelters such as caves and artificial cavities have high environmental stability, thus favoring both the host and the parasite [23][24][25].

Ectoparasitic arthropods of bats belong to five different orders (Siphonaptera, Diptera, Hemiptera, Dermaptera, and Acari); however, they are not necessarily restricted to bats. About 690 species of insects are known ectoparasites of bats, of which six families (in four orders) are found exclusively on bats. Among the Diptera, two families are exclusively ectoparasites of bats: Nycteribiidae and Streblidae [26][27][28].

Bats comprise one of the most diverse groups of mammals in the Neotropics, and many of the parasite species associated with them range from host-specific to generalist. The biology, systematics, and phylogenetic aspects of the host bats will be better understood through knowledge of their ectoparasites. Such knowledge can also help in understanding the epidemiological aspects of the transmission of some diseases among bats [29][30].

The family Streblidae is formed by hematophagous dipteran ectoparasites of bats. They are found parasitizing 14 bat families worldwide, mainly in association with tropical environments, with only two species occurring on hibernating bats. Most Streblidae, 62.5% of the 251 species, occur exclusively in the New World. Despite being extremely adapted to a parasitic way of life, these insects are quite mobile, and about 78% of the species have functional wings and can fly. The Streblidae are viviparous, with three larval stages that develop in the female's uterus, a pupa that develops in the roost, and an adult that is parasitic and hematophagous [31][32][33].

Description
One of the characteristic features of streblid bat flies is their variable degree of eye reduction. The compound eyes are strongly but variably reduced, with some species having only rudimentary eye spots. Ocelli are absent in all species.
Wing morphology also varies significantly within the family: some species have fully functional wings, while others have either reduced (functional or non-functional) wings or no wings at all (Figure 6) [34][35].

Biology
They are viviparous ectoparasites, obligate and exclusive to bats, which, instead of laying eggs or larvae, deposit an already developed pupa. Viviparity is adenotrophic; that is, the larvae feed on glandular secretions in the uterus. In the third instar, the larva is deposited in the roost of the host (Figures 7-8) [36][37]. In the Streblidae, a single larva develops inside the female and feeds on secretions from the accessory glands, which are highly specialized. Eventually, the third-instar larva is deposited as a sessile prepupa on a substrate. The pupa forms and remains in this state for at least four weeks; then the adult emerges and proceeds to locate a new host. The Streblidae, themselves parasites, are in turn infested by fungi of the order Laboulbeniales; these fungi are thus hyperparasitic (Figure 9) [38][39][40].

Distribution
Most of the species are Neotropical, with a well-defined distribution. However, some species may present a disjunct distribution; that is, species found in the southern United States may also be found in northern South America, but not in Central America [38][39][40][41].

Classification
Within the Brachycera and the infraorder Muscomorpha is the family Streblidae, which, together with the family Hippoboscidae, comprises ectoparasitic flies of bats. These dipterans are divided into five subfamilies: Ascopterinae, Nycterophiliinae, Nycteriboscinae, Streblinae and Trichobiinae, the first three being exclusive to the New World and the remainder to the Old World. They are found parasitizing 14 families of the order Chiroptera (Mammalia: Placentalia) and are mainly associated with tropical environments. The family Streblidae is relatively well known on the American continent, with records of several species distributed in Panama (69 species), Colombia (54 species), Venezuela (119 species), Peru (59 species) and Brazil (about 70 species) [38][39][40][41]. Among the recorded species of Streblidae, three species were added to the list for the State of Rio de Janeiro: Strebla curvata Wenzel, 1976, Trichobius angulatus Wenzel, 1976 and Trichobius dugesii Townsend, 1891. Some bats were parasitized by Acari; however, these could not be identified. Only A. geoffroyi, A. caudifer, and P. recifinus were not parasitized by any of the ectoparasitic taxa analyzed. The most parasitized species was G. soricina (N=78).

Host and distribution: Anoura caudifer (Geoffroy, 1818) is the type host of this ectoparasite, and there is already a record of it on this species for the State of Rio de Janeiro.
Strebla curvata Wenzel, 1976. Note: first record of this species in the State of Rio de Janeiro. Its primary host is Glossophaga soricina (Pallas, 1766).
Host: has C. perspicillata as its primary host.

Subfamily Trichobiinae
Trichobius anducei Guerrero, 1998. Note: second record for Brazil and for the State of Rio de Janeiro; C. perspicillata is the primary host.
Trichobius angulatus Wenzel, 1976. Note: first record for the State of Rio de Janeiro and the Atlantic Forest. It was described parasitizing Platyrrhinus aurarius (Handley & Ferris, 1972).
Conclusion
Braula is considered to cause little damage to bee colonies, and its potential as a vector of pathogens has so far been ignored. Among pathogens, the acute bee paralysis virus (ABPV) and the Israeli acute paralysis virus (IAPV) have shown a strong correlation with winter bee colony losses in at least two long-term studies. It is important to point out that the prevalence of virus infections increases dramatically when the Varroa mite is present in the colony, because this mite has been shown to be a mechanical and biological vector of honey bee viruses. In the Streblidae, a single larva develops inside the female and feeds on secretions from the accessory glands, which are highly specialized. Eventually, the third-instar larva is deposited as a sessile prepupa on a substrate. The pupa forms and remains in this state for at least four weeks; then the adult emerges and proceeds to locate a new host. The Streblidae, themselves parasites, are in turn infested by fungi of the order Laboulbeniales; these fungi are thus hyperparasitic.

Figure 3. (a) The bee louse Braula coeca Nitzsch, 1818 attached to the head region of its host, the honeybee Apis mellifica mellifica silvarum Goetze, 1964. (b) Experimental setup: the bee louse was attached to the tip of a strong needle mounted on a force transducer and pulled off by moving the bee away from the transducer. (c) Representative force-time curve of the attachment force. (d) Schematic drawing of the measured base (α) and tip (β) claw angles.

Figure 4. Microscope images of Nosema spp. spores: (A) light microscopy, (B) Nomarski interference contrast microscopy, (C) scanning electron microscopy.

Figure 7. The mite species Monunguis streblida Wharton, 1938 (Neothrombidiidae) reported in association with dipteran ectoparasites (Streblidae) of Brazilian bats for the first time. A one-year study of two populations of the bat Anoura geoffroyi Gray, 1838, in caves in the state of Minas Gerais, Brazil, found them to be parasitized by four species of streblids, three of which were parasitized by M. streblida. Three hundred and thirty-two individuals of M. streblida were collected in association with 135 individuals of Anastrebla modestini Wenzel, 1966, two individuals of Anastrebla caudiferae Wenzel, 1976, and two individuals of Trichobius sp. (dugesii complex).
The clinical effects of combining postural exercises with chest physiotherapy in cystic fibrosis: A single-blind, randomized-controlled trial

Objectives
This study aims to investigate the effects of postural exercises as an adjunct to a chest physiotherapy program on respiratory function, exercise tolerance, quality of life (QoL), and postural stability in patients with cystic fibrosis (CF).

Patients and methods
In this single-blind, randomized-controlled trial, 19 pediatric CF patients (11 males, 8 females; mean age: 9.36 years; range, 6 to 14 years) were randomly allocated to a chest physiotherapy and postural exercise program (Group 1, n=10) or a chest physiotherapy program alone (Group 2, n=9) between March 2017 and October 2017. Respiratory functions were assessed with pulmonary function tests, exercise tolerance with the Modified Shuttle Test (MST), quality of life with the Cystic Fibrosis Questionnaire-Revised Child Version (CFQR), and postural stability with the Limits of Stability Test (LOS). All tests were performed before treatment and six weeks, three months, and six months after treatment.

Results
Respiratory functions improved in both groups; however, these changes were not statistically significant. The MST increased after treatment in both groups (p<0.001 and p=0.003, respectively), without a significant difference between the groups. The emotional function and treatment difficulties subdomains of the CFQR increased significantly only in the group receiving postural exercises (p<0.05).

Conclusion
In pediatric CF patients in whom postural changes had not yet taken place, the postural exercise program added to chest physiotherapy did not cause significant changes in respiratory function, exercise tolerance, or postural stability; however, it had a positive effect on emotional state and improved compliance with the treatment.

Cystic fibrosis (CF) is an autosomal recessive disease which mainly affects the respiratory system. [1] The life expectancy for CF has dramatically increased in recent decades with advancements in medical care. [2] Pulmonary complications can be reduced and prevented by a regular chest physiotherapy program that mainly involves bronchial hygiene techniques. However, with the increased life expectancy, most adults with CF have postural deformities, which cannot be prevented by conventional chest physiotherapy alone. Postural deformities in this population are claimed to be caused by muscle imbalance due to increased work of breathing as lung disease progresses, and are associated with deteriorated pulmonary function, back pain, and impaired quality of life (QoL). [3] In one study, it was shown that none of the children with CF under the age of five years had a postural deformity; nevertheless, all of the patients above 13 had increased thoracic kyphosis, and some had accompanying scoliosis and chest abnormalities. [4] Postural exercises started before the deformities develop are thought to be preventive. Therefore, as rehabilitation professionals, we have in recent years begun to implement exercise therapies in rehabilitation programs to provide a complete approach to patients with CF. The muscles that are involved in the stabilization of the body are also involved in respiration. [5] Thus, it is assumed that postural exercises in children with CF would increase respiratory function, exercise tolerance, and QoL while preventing postural deformities.
Although these assumptions are reasonable and understandable, there are no data about the effectiveness of postural exercise in children with CF. In the present study, we hypothesized that the addition of a postural exercise program to a chest physiotherapy program would be effective in improving respiratory function, exercise tolerance, QoL, and postural stability compared to chest physiotherapy alone. We, therefore, aimed to evaluate the effects of additional postural exercises on these aspects in patients with CF.

PATIENTS AND METHODS
This single-blind, prospective, randomized-controlled study was conducted at the Marmara University School of Medicine, Physical Medicine and Rehabilitation Department between March 2017 and October 2017. Inclusion criteria were as follows: age between 6 and 14 years; a diagnosis of CF; and forced expiratory volume in 1 sec (FEV1) greater than 30% of predicted. Exclusion criteria were as follows: presence of cor pulmonale, history of spinal fracture, currently being under intravenous (IV) medication, and severe gastroesophageal reflux. A total of 22 patients who met the inclusion criteria were included in the study. Two of them (one in each group) were lost to follow-up. Another patient was hospitalized due to an acute exacerbation requiring IV medication, and the treatment program was discontinued. Finally, a total of 19 pediatric CF patients (11 males, 8 females; mean age: 9.36 years; range, 6 to 14 years) were included (Figure 1). A written informed consent was obtained from each patient. The study protocol was approved by the Marmara University School of Medicine Ethics Committee.

The patients, who had not previously participated regularly in a structured chest physiotherapy program or other techniques before the study, were equally randomized into two groups using a sealed opaque envelope system with blocking. One group was planned to be treated for six weeks with a chest physiotherapy and postural exercise program (Group 1, n=10), while the other group was treated with the chest physiotherapy program alone (Group 2, n=9). Both groups received treatment from a single physiotherapist. If a patient could not attend the treatment program due to interruptions such as an acute exacerbation, he/she was removed from the final analysis.

Respiratory function was assessed with FEV1, forced vital capacity (FVC), FEV1/FVC, and peak expiratory flow (PEF); exercise tolerance with the Modified Shuttle Test (MST); QoL with the Cystic Fibrosis Questionnaire-Revised Child Version (CFQR); and postural stability with the Limits of Stability Test (LOS) of the NeuroCom Balance Master® device (NeuroCom International, Clackamas, OR, USA). All tests were performed before treatment and six weeks, three months, and six months after treatment. The Cobb and modified Cobb angles were measured on radiographic examinations to assess scoliosis and thoracic kyphosis before treatment and six months after treatment. All evaluations were performed by a blinded independent rehabilitation specialist. Data were collected at the Marmara University School of Medicine, Physical Medicine and Rehabilitation Department, Pediatric Rehabilitation Clinic.

Chest physiotherapy program
Active cycle of breathing techniques (ACBTs) was performed for chest physiotherapy. This is a commonly used method for secretion removal in CF patients.
Each ACBT cycle consists of a sequence of breathing control, thoracic expansion exercises, and forced expiration; cycles are repeated until the secretions are completely cleared. [6] The ACBT was applied by a therapist once per week for six weeks. This interval between chest physiotherapy sessions was arranged to provide isolation of each patient according to infection control recommendations. [7] Therefore, a daily schedule was given for the other days of the week, to be performed once per day, to ensure the continuity of the treatment. After the treatment period, families were encouraged with weekly telephone calls to continue their child's chest physiotherapy program for six months.

Postural exercise program
Thoracic vertebra mobilization, pectoral stretching, scapula and thoracic extensor muscle strengthening, and core stability exercises were applied by a therapist once per week for six weeks. A daily schedule including pectoral stretching and core stabilization exercises was also given for the other days of the week, to be performed once per day, to ensure the continuity of treatment. After the treatment period, families were encouraged with weekly telephone calls to continue their child's exercise program for six months.

Modified Shuttle Test
The MST is a simple test which measures exercise tolerance and cardiorespiratory status in children with CF. The test has 15 levels, and the patient is asked to walk rapidly between two fixed objects set 10 meters apart, starting at normal walking speed and increasing the pace at the beginning of each minute (level). An audio signal is used to maintain the required speed. The test is continued until the end of the 15 levels or is ended when the patient can no longer keep pace with the audio signal or feels tired. Peripheral oxygen saturation (SpO2) is measured during the test using a pulse oximeter, and the interruption criterion was SpO2 lower than 75%. [8,9] The maximum walking distance, expressed in meters, was used as the outcome for analysis.

Cystic Fibrosis Questionnaire-Revised
The CFQR is the most commonly used QoL measurement tool in patients with CF and has been found to be valid and reliable in Turkish. The child version of this test consists of 35 questions about physical function, emotional function, social function, body appearance, eating disorders, treatment difficulties, and respiratory and digestive symptoms. The total score is calculated between 0 and 100, and higher scores indicate a better condition. [10]

Limits of Stability Test
This is a test of the NeuroCom Balance Master® device, which consists of an 18×60-inch pressure platform and a computer system connected to the platform. The patient is asked to stand barefoot on the platform, watch an image on the computer monitor that can be moved by trunk movement and, on command, move the image toward target points located in eight different directions. Reaction time, movement velocity, endpoint excursion, maximum excursion, and direction control parameters are calculated for each direction during these trunk movements. [11]

Statistical analysis
To detect a 20% increase in the distance covered in the MST by patients, the projected sample size was calculated as approximately eight patients and eight controls with an alpha=0.05 and power=0.80. [8] Statistical analyses were performed using the IBM SPSS for Windows version 20.0 software (IBM Corp., Armonk, NY, USA). Descriptive data were expressed as mean ± standard deviation (SD), median (min-max), or number and frequency.
Normal distribution of quantitative values was assessed by histogram, Q-Q graph, and Shapiro-Wilk test. Since the distribution was not normal, the Mann-Whitney U test was used to compare the measurement values between the two groups, and the Friedman test was used to compare the intra-group parameters, while the Wilcoxon test was used to compare pre-treatment and post-treatment sixth-week, third-month, and sixth-month values separately. The Bonferroni correction was considered statistically significant at p<0.010 for evaluating the results of the Wilcoxon test. Otherwise, a p value of <0.05 was considered statistically significant. RESULTS Baseline demographic and clinical characteristics of the patients are shown in Table 1. There was no statistically significant difference in the baseline demographic characteristics between the patient groups. None of the patients in both groups had scoliosis or increased thoracic kyphosis. Except for the patients excluded from the analysis, none of the patients experienced an acute exacerbation leading to treatment discontinuation. None of the patients complained about pain before or during the treatment. The MST distance significantly increased at six weeks in both groups, and it continued to increase at three and six months ( Table 2). Respiratory function tests were also improved after treatment in both groups, although these changes were not statistically significant ( Table 3). The CFQR emotional function and treatment difficulties subdomains significantly increased at the end of six weeks in Group 1. DISCUSSION In the present study, postural exercises added to a conventional chest physiotherapy program increased treatment compliance and improved emotional status, while exercise tolerance, respiratory functions, and postural stability remained unchanged. In patients with CF, conventional chest physiotherapy improves respiratory functions, as demonstrated time and again in the literature. [12,13] Similarly, in this study, although no statistical significance was reached, the conventional chest physiotherapy program increased respiratory functions in both groups. However, postural exercises did not add a significant benefit. This contradicts the common hypothesis of the bilateral relationship between respiratory and postural muscles which make postural muscles accessories in respiration. [14] A previous study showed that thoracic kyphosis angle was correlated with FEV1 and vital capacity. [15] However, Sansund et al. [16] documented that, in patients with a high thoracic index, postural exercises added to a conventional chest physiotherapy program could decrease the thoracic index, but did not significantly change respiratory functions. Likewise, Schindel et al. [13] showed that home-based aerobic exercise and stretching improved postural alignment without significant change in respiratory functions. Postural deformities increase with age in patients with CF. [17] The mean age of the study that conducted by Schindel et al. [13] was four years older than the present study. One of the reasons that this study could not show any beneficial effects of postural exercises on respiratory functions is that the patients included in the study did not have spinal deformities. It must be kept in mind that, in an older patient population, the outcomes may differ. In the current study, the MST distance increased after treatment in both groups, and this increase continued in the long-term. 
Although MST distance in children and adolescent has been shown to increase with age, [9] the increase at six weeks and three months can be easily attributed to the effect of chest physiotherapy. However, the long-term increase in MST may be due to the natural effect of growing up, leading to the assumption of some contribution to the longer-term positive results. Doeleman et al. [18] demonstrated a strong relationship between FEV1 and MST distances in adult patients with CF. In the study by Cox et al,. [19] 28 patients with CF were admitted for pulmonary exacerbations and, after the administration of IV antibiotics, the MST distance improved significantly. There are existing data regarding the improvement of exercise tolerance after medical treatment. However, there are no available data regarding the effectiveness of chest physiotherapy programs on exercise tolerance. While FEV1 is an important follow-up criterion in the clinical setting of CF, exercise tolerance tests may give better ideas about the actual functionality of the patient, which makes them more important in the rehabilitation setting. To the best of our knowledge, this study is the first to use exercise tolerance tests for the evaluation of functionality of CF patients. In this study, none of the patients in both groups had apparent postural impairment as evidenced by imaging studies. The absence of change in postural alignment at six months is not sufficient to demonstrate the effectiveness or ineffectiveness of postural exercises on posture. However, this is beyond the scope of this study and a much more extended follow-up period would provide such a kind of effect. On the other hand, we aimed to examine the effect of postural exercises on postural stability. The LOS is a test which evaluates the distance and direction control of a person, when it is attempted to reach various directions by trunk movement. [11] These parameters of LOS were shown to be improved with postural training, even in healthy young adults. [20] Therefore, an improvement was expected with postural exercises in patients with CF, although they did not have apparent problems with their posture or postural stability. The lack of change in postural stability parameters in this study may be due to the intensity and frequency of the exercise program. However, since it is not advised for CF patients to frequently admit to the hospital due to colonization and infection risk, more than once a week could have increased the incidence of acute exacerbations. Future research can focus on homebased interventions and their effectiveness in this regard. There were significant improvements after postural exercises in the emotional status and treatment difficulty subdomains of the CFQR. However, these changes did not last after six weeks, when the treatment program was ended. On the other hand, conventional chest physiotherapy alone did not change any parameters of QoL at any time. Similar to these findings, Schmidth et al. [21] showed that, in adult patients with CF, an exercise program improved emotional status and treatment compliance subdomains. Mandal et al. [22] investigated the effect of additional exercise program to conventional chest physiotherapy in 30 patients with bronchiectasis and showed that pulmonary physiotherapy alone could not change the QoL and exercise tolerance. In the study by Sansund et al., [16] addition of postural exercises to a conventional chest, physiotherapy did not change the QoL in patients with CF. 
This may have resulted due to the fact that only physical function subdomain was investigated in the QoL measures in this study. These findings are consistent with our findings, indicating that postural exercises in the course of the therapy period improve emotional status and treatment compliance in patients with CF. To the best of our knowledge, this is the first randomized-controlled study to investigate the effect of postural exercises added to a conventional chest physiotherapy program on respiratory functions, exercise tolerance, QoL, and postural stability in patients with CF. However, due to the nature of studies using exercise as therapy, it was designed as a single-blind study. This is an important bias factor. On the other hand, randomization process of patient selection, outcome assessor blinding, and no dropouts decrease the overall bias of this study. One of the main strengths of this study is its long-term follow-up duration, while limitations of this study can be stated as the relatively small sample size. Future studies with an increased patient population and a longer follow-up period would definitely show more concrete results about the effectiveness of such rehabilitation programs. In the setting of physiatry, an exercise regimen initiated early in the course of the disease has been shown to be effective in a spectrum of pathologies. [23][24][25] Although CF should be no different, the data that would support the hypothesis of the importance of early intervention in this population are still lacking. A pivotal study can include patient groups who can and cannot attend to these programs and, therefore, the change in postural deformities in the long-term can be documented. Moreover, the improvements in the emotional state and treatment compliance can also have an additional positive effect on CF-related outcomes, such as pulmonary function. The effect of increased treatment compliance and improved emotional state should be investigated on its effect on the frequency of acute exacerbations and hospitalization need. Therefore, particularly through the adolescent to adulthood transition period, when the patient care also transits between different practitioners, long-term protocols with an extended duration of supervised postural exercise as the cornerstone of care would give us valuable information about the effectiveness of such programs. This is also essential for patient care and daily practice, as it must be remembered that CF is a chronic, lifelong condition in nature. In conclusion, our study results indicate that postural exercises do not significantly improve respiratory functions, exercise tolerance, and postural stability in these young patients without apparent postural changes. However, postural exercise program significantly improves emotional status and treatment compliance, and chest physiotherapy improves exercise tolerance in these patients.
Artificial intelligence adoption in the physical sciences, natural sciences, life sciences, social sciences and the arts and humanities: A bibliometric analysis of research publications from 1960-2021 Analysing historical patterns of artificial intelligence (AI) adoption can inform decisions about AI capability uplift, but research to date has provided a limited view of AI adoption across various fields of research. In this study we examine worldwide adoption of AI technology within 333 fields of research during 1960-2021. We do this by using bibliometric analysis with 137 million peer-reviewed publications captured in The Lens database. We define AI using a list of 214 phrases developed by expert working groups at the Organisation for Economic Cooperation and Development (OECD). We found that 3.1 million of the 137 million peer-reviewed research publications during the entire period were AI-related, with a surge in AI adoption across practically all research fields (physical science, natural science, life science, social science and the arts and humanities) in recent years. The diffusion of AI beyond computer science was early, rapid and widespread. In 1960 14% of 333 research fields were related to AI (many in computer science), but this increased to cover over half of all research fields by 1972, over 80% by 1986 and over 98% in current times. We note AI has experienced boom-bust cycles historically: the AI"springs"and"winters". We conclude that the context of the current surge appears different, and that interdisciplinary AI application is likely to be sustained. I. INTRODUCTION The field of artificial intelligence (AI) is generally considered to have got its name at the Dartmouth University conference in the United States in the summer of 1956 [37]. Since that time the field has experienced some ups and downs but has, overall, grown robustly as covered by numerous historical accounts [9], [21], [25], [38], [61]. However, there has been an explosion of AI activity in recent times. The past few years have seen a surge of investment, research, education, training and scholarly publishing in AI and machine learning [8], [58]. Since 2017 over 700 AI policy initiatives have been launched by over 60 national governments and sub-national jurisdictions [45], [56]. Collectively, these announcements were estimated to include over US$62 billion of new spending [22]. During 2020, private-sector investment in AI increased by a record 9.3% reaching US$40 billion [77]. Five out of the seven most influential papers announced by Google Scholar for 2020 were about AI [10]; afterwards, papers related to the COVID-19 pandemic dominated. During 2017-2020 the number of university courses teaching AI increased by 103% at the undergraduate level and 42% at the postgraduate level [77]. Out of all industry and economic sectors, the science and research sector is among the earliest and most enthusiastic adopters of AI technology. AI is a general-purpose technology that can improve the cost-effectiveness, speed, safety and quality of research in practically all fields of endeavour [23]. However, AI may be more than just useful; it could be paradigm-shifting. Some researchers [5] argue that AI will "reshape the nature of the discovery process and affect the organisation of science". 
A recent workshop hosted by the Organisation for Economic Cooperation and Development (OECD) in Paris [43] examined the potential for AI to address the ongoing productivity slump in science where more research effort is being expended to achieve the same, or lesser, outcomes [6], [7]. Understanding patterns of AI adoption can help researchers anticipate the future potential of this general-purpose technology and invest wisely in capability uplift. However, while there has been much work to explore and understand AI adoption within select fields of research, there have been comparatively few studies examining the diffusion of AI technology across all fields of research. Prior work has also taken narrowly scoped definitions of AI (e.g., machine learning) and limited time-periods relating to the past few decades only. We seek to contribute by analysing the adoption and diffusion of AI technology across all fields of physical sciences, natural sciences, life sciences, social sciences as well as the arts and humanities over history from 1960 to 2021. We have a broad definition of AI encompassing 214 phrases that capture practically every facet of this vast technological capability. We continue this paper by reviewing prior research relating to the application of bibliometric analysis to analyse patterns of AI adoption within and across fields. We explain how our study contributes. We next describe our methods, including our main data source, The Lens. The Lens may be a comparatively new tool for many researchers alongside well-known databases such as Scopus, Web of Science and Google Scholar. We describe what The Lens is, how we used it and how it compares to existing databases. We then present our results relating to the development, application and diffusion of AI across the fields of research over history. This is followed by a discussion exploring the implications of AI for approaches to human knowledge discovery. We explore whether the AI boom-bust cycles of the past are likely to return and the issue of productivity uplift. The paper concludes by arguing that the future impact of AI on knowledge discovery will land somewhere on a spectrum from useful through to paradigmchanging for researchers in most disciplines. II. RELATED RESEARCH AND OUR CONTRIBUTIONS There have been several studies using bibliometric analysis, and related approaches, to examine the adoption and diffusion of AI in various contexts. For example, the Stanford University 2020 AI Index used the Scopus database of peer-reviewed literature to find that 3.8% of all publications were on the topic of AI by 2019 with steep growth in recent years [77]. This is up from 0.82% in the year 2000. The Stanford University study finds the total number of peer reviewed AI publications increased almost 12 times over the 20-year period leading up to 2019. Another recent prior study [5] examined the diffusion of "neural networks" (NN) -a subset of AI in our schemaacross 6 research fields of technology, physical sciences, life sciences, biomedicine, health sciences, social sciences, as well as the arts and humanities. The authors tracked adoption trends during 1990-2018. They identified 260,459 documents on NN in total based on 30 search phrases. They found a "burst of research activity" leading up to 2018 in all research fields. They concluded that AI would likely reshape the process of scientific discovery and change the way science is organised. They also argued that AI will emerge as a new general method of invention. 
An earlier study by Frank et al. [17] examined the extent to which major research fields were cited within AI research. Using a bibliometric analysis, it was found that mathematics and computer science were most commonly cited in modern AI research references, with fewer references to philosophy, geography and art. The authors argued that AI research needed to bring in more of the social sciences, and the arts and humanities, to ensure that it would hold relevance to policy makers and society more broadly. A related study using bibliometric analysis with Web of Science data examined AI publishing patterns across countries, academic institutes, collaboration networks, research sponsors and scientific disciplines [34]. This study found that diverse disciplines contributed to the multi-disciplinary development of AI technology. There have been numerous studies using bibliometric analysis into the impacts of AI within specialised disciplines. For example, Palos-Sánchez et al. [47] examined 73 articles in Web of Science and Scopus in the field of human resources management. Using the Bibliometrix tool they found that AI applied to human resources management was growing constantly and this was likely to continue into the future. They also found that AI applications within this field were focused on topics relating to recruitment and job-applicant selection. They noted an opportunity to expand AI research into other sub-fields within human resources management. Other subjectspecific bibliometric analyses have examined AI application in engineering contexts [62], healthcare settings [19], supply chains [54], renewable energy [78] and education [65]. These studies generally concluded that AI adoption is growing within the given field of research and that it's enabling and changing processes of knowledge discovery. Most studies also pointed towards the likelihood of continued increased AI adoption. Our analysis supports and extends upon the previous research summarised above. We examine the development and application of AI across practically all fields of research. Our focus is upon the differences and complementarities between research fields; not within a single field of research. We have also introduced new methods and datasets to enable complementary insights in three main ways. First, our analysis is from 1960 to 2021 which covers a longer timespan than previous studies (e.g., 1991 to 2020 by Liu et al. [34], and 1990 to 2018 by Bianchini et al. [5]), including the first two AI springs and winters and the early diffusion of AI outside of computer science fields in the 1980s. Second, we use a more comprehensive set of 214 AI search phrases derived from multiple expert working groups at the OECD [3]. This compares to 30 phrases used by Bianchini et al. [5] which relate to neural networks only (a subfield of AI), and a search strategy by Liu et al. [34] involving partial phrases which, by our estimate, accounts for under half of our AI phrases. Lastly, we examine diffusion across all fields of research with a comprehensive classification system widely used by Scopus [59], called the All Science Journal Classification (ASJC). This captures a much wider range of research disciplines at a more granular scale. We are able to apply the ASJC consistently over the entire 62 year period by using The Lens database [50]. III. 
METHODS AND DATA SOURCES As the volume, variety and velocity of research publishing continues to grow, bibliometric analysis is becoming an increasingly popular and effective method for understanding patterns and trends in various fields of research [26], [63]. Bibliometric analysis can help by handling large amounts of data (e.g., scholarly publications or citations) and provide quantitative insights into the structural relationships that exist in fields of interest [14]. It has been used to study knowledge diffusion patterns in blockchain technology [48], biotechnology [13] and digital transformation [51]. We used the approach for analysing patterns associated with AI adoption over the past 62 years. Consistent with the bibliometric analysis procedure developed by Donthu et al. [14], this section outlines the methodology for the bibliometric analysis, including the data sources, research field taxonomy, AI search strategy and reporting metrics used in the analysis. A. The Lens Database of Scholarly Publications Scholarly publication data was sourced from The Lens database (version 8.2), a global database which contains over 224 million scholarly publications and over 137 million intellectual property patents with records dating back to the 1950s [50]. With early work commencing in 1998, The Lens database resulted from a partnership between the Queensland University of Technology and Cambia (both based in Australia). Cambia is a not-for-profit organisation aiming to create tools and technologies that facilitate knowledge sharing and problem solving. The Lens has a non-commercial nature and receives funding from the Bill and Melinda Gates Foundation, the Rockefeller Foundation, and other organisations. The Lens database has previously been used for bibliometric analysis for genetic science [28] and COVID-19 research [27]. Scholarly publication data were extracted for records published up until 31 December 2021. Data in The Lens was accessed using its graphical user interface (GUI) as well as its application programming interface (API) via Python scripts. We used the API to perform customised searches. Data in The Lens is sourced from Microsoft Academic Graph [69], the CrossRef Open Researcher and Contributor IDentifier [20], PubMed [72], Impactstory [46] and Connecting Repositories (CORE) [52]. We note numerous repositories containing research publication data which can be used for bibliometric analyses (e.g., Scopus, Web of Science and Google Scholar). These databases have been reviewed and compared in prior research [68]. The Lens is a relative newcomer and while it is being used by researchers for bibliometric analyses [27], [28] and is well documented [50] it has not yet featured in comparative analyses with existing mainstream databases. Nevertheless, we used The Lens due to its open-access non-commercial model, comprehensive dataset on scholarly publishing drawing upon multiple databases, and high levels of transparency on data provenance. Moreover, some databases like Google Scholar do not have a publicly accessible API. This requires manual searches which are not feasible in a bibliometric analysis with thousands or millions of publications [15]. The Lens provides both an API with comprehensive functionality and a GUI with detailed metadata. The Lens removes some of the constraints built into commercial databases which limit transparency and data access, which in turn enables improved bibliometric analysis. 
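To make the API-based retrieval step concrete, the following minimal Python sketch posts a query to The Lens scholarly search service. It is an illustrative sketch only: the endpoint URL, the query-body fields (including the year field name) and the environment variable holding the access token are assumptions based on the public API documentation, not the exact scripts used in this study.

import os
import requests

# Assumed endpoint and token handling; both are illustrative, not taken from the study's scripts.
LENS_API_URL = "https://api.lens.org/scholarly/search"
API_TOKEN = os.environ.get("LENS_API_TOKEN", "")

def search_lens(phrase, year_from=1960, year_to=2021, size=50):
    # Build an Elasticsearch-style query matching the phrase in the title,
    # restricted to the study period; field names here are assumptions.
    body = {
        "query": {
            "bool": {
                "must": [{"match_phrase": {"title": phrase}}],
                "filter": [{"range": {"year_published": {"gte": year_from, "lte": year_to}}}],
            }
        },
        "size": size,
    }
    response = requests.post(
        LENS_API_URL,
        json=body,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()

# Example (hypothetical): fetch a batch of records mentioning one OECD AI phrase in the title.
# hits = search_lens("convolutional neural network")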
We used the Elsevier ASJC taxonomy [59] to examine the diffusion of AI across fields of science. The ASJC has three levels. At the most detailed level the ASJC contains 333 unique fields of research. These are grouped under 26 subject-level fields, which are further grouped under four top-level areas: physical sciences, life sciences, health sciences, and social sciences (including the arts and humanities). Research publications in The Lens are assigned one or more of the 333 third-level fields. The information is derived from the International Standard Serial Number (ISSN) descriptions in the Crossref metadata. The pros and cons of the ASJC versus other subject matter classification systems have been explored by researchers [71]. Both types of classification are widely used and well-accepted; we consider the ASJC suitable for our purposes, but note other classifications are possible.

C. Defining Artificial Intelligence and Identifying Publications for Bibliometric Analysis
There are several publications reviewing paradigms, approaches and concepts about the definition of AI [32], [64], [70]. Consistent with previous analyses conducted by the OECD [3], in this study the definition of AI is operationalised via a set of search phrases. These search phrases are used to identify publications that are related to AI. We used the list of 214 AI-related phrases provided by the OECD [3]. This OECD list of AI-related phrases was developed from a bibliometric analysis of publications classified as AI in the Scopus database, which were then interrogated and refined further using text mining techniques. The candidate list of AI-related phrases was also validated by a panel of AI experts working in business and academic sectors [3]. Other authors, such as Liu et al. [34], have compiled lists of AI search phrases for literature review and bibliometric analysis purposes. We compared the Liu et al. [34] and the OECD lists, and found 113 of the 214 OECD phrases had no matching entry in Liu et al. [34], 76 had a possible matching entry, and 25 had an exact matching entry. These differences are due to the OECD list of AI-related phrases including more granular subfields of AI, such as pattern recognition and computer vision, which comprise a large share of AI research and publishing. We adopted the OECD list of phrases as we sought a more comprehensive and inclusive definition of AI.

The initial dataset contained all scholarly publications on The Lens during 1960-2021. We chose 1960 as the start year as this was relatively soon after the 1956 summer workshop at Dartmouth University in New Hampshire where the field of AI was first formally given a name. This dataset was filtered by document type, including only records that corresponded to peer-reviewed books, book chapters, journal articles and conference papers/proceedings. These scholarly publication records were then refined using our search strategy to identify AI-related publications. To be selected as an AI-related scholarly publication, a paper needed to contain one or more of the 214 AI phrases developed by the OECD in the title, abstract or keywords. A total of 224 million scholarly publication records were identified in The Lens database between 1960 and 2021, approximately 87 million of which were eliminated as they did not correspond to one of the included document types. The titles, abstracts and keywords of the remaining 137 million records were then screened for AI-related terms. The final pool consisted of 3,126,436 records which were included in the bibliometric analysis (Fig. 1).
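As a minimal sketch of the screening logic described above, the following Python fragment flags a record as AI-related when any phrase from the OECD list appears, on word boundaries and case-insensitively, in its title, abstract or keywords. The three phrases and the record structure shown here are illustrative placeholders; the study used the full list of 214 OECD phrases.

import re

# Tiny illustrative subset of the 214 OECD AI-related phrases used in the study.
OECD_AI_PHRASES = ["machine learning", "neural network", "computer vision"]

PATTERNS = [re.compile(r"\b" + re.escape(p) + r"\b", re.IGNORECASE) for p in OECD_AI_PHRASES]

def is_ai_related(record):
    # record is assumed to be a dict carrying 'title', 'abstract' and 'keywords' fields.
    text = " ".join([
        record.get("title", "") or "",
        record.get("abstract", "") or "",
        " ".join(record.get("keywords") or []),
    ])
    return any(pattern.search(text) for pattern in PATTERNS)

# Example on a hypothetical record:
# is_ai_related({"title": "Deep neural networks for crop mapping", "abstract": "", "keywords": []})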
IV. RESULTS
In the year 1960 there were 48 research fields with AI-related publications, representing 14% of the 333 ASJC fields. Most of these were in the fields of computer science, engineering and decision science. AI soon spread into other fields; by 1972 over half of all research fields were related to AI. In 1986 over 80% of research fields had publications related to AI, and today it is over 98%. AI started in the physical sciences and then spread into the life sciences, social sciences, and the arts and humanities (Fig. 2).

AI diffusion can be measured and visualised with the Gini-Coefficient (GC). Often used to measure wealth inequality, the GC is a statistical measure of how evenly distributed a quantity is among a set of categories. The GC value ranges from 0 to 1. When GC = 0, each category has a perfectly equal quantity. When GC = 1, one category has everything, with none in the others. We use the GC here to determine the spread of AI-related publishing across all 333 fields of research over time. In the year 1960, GC = 0.91, indicating a high concentration of AI publishing in a few fields of research. AI did not diffuse from computer science until the 1970s. However, by 1980 the GC had fallen to 0.72 and it has stayed within the range of 0.71-0.76 since that time. One of the reasons it has not fallen further is that the computer science field has increased AI publishing intensity and volume at a faster pace than any other field. As such, the computer science field has maintained a high concentration of AI-related publications in the total publication output (see Fig. 3).

We define AI publishing intensity as the share (percentage) of total publications that are AI-related within a field of research. For all fields of research, AI-related publishing started at a tiny fraction: 0.02% of total publication output in 1960. It remained under 1% until 1995. From then until 2017 it increased to 3%. Over the five-year period 2017-2021 it increased to 5.3%. This shows that most of the AI adoption in research has been happening in the past few years. More specifically, over 50% of the total volume of AI research has been published in the past 5 years. The year-on-year growth in AI publishing has averaged 26% over the past 5 years, compared to 17% for all preceding history (Fig. 4). One of the drivers of recent adoption growth is the release of accessible AI tools and platforms which have emerged over the past several years [1], [11], [49], [60], such as scikit-learn, TensorFlow, Theano, Caffe, Keras, MXNet, mlpack, PyTorch, CNTK, Auto ML, Open NN and H2O. These and other such tools made AI much more readily available to scientists and researchers from diverse disciplinary backgrounds.

By examining the science domains and second-level research fields (Fig. 5, Table I), it can be seen that the physical sciences have, overall, been the largest adopter and developer of AI research. However, practically all fields of research show rising AI publishing intensity in recent years. The rising intensity of AI publishing in the arts and humanities is a recent phenomenon; throughout much of history there has been low, or negligible, AI publishing in these fields. The rising share of AI publications in economics is partly being driven by the use of data-driven and AI-based approaches for econometric modelling and forecasting. A recent review of machine learning for economic modelling is provided in [40].
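The two reporting measures used in this section can be computed directly from per-field publication counts. The short sketch below shows one standard way to calculate the Gini-coefficient across fields and the AI publishing intensity within a single field; the numbers in the example are placeholders, not values from the study.

import numpy as np

def gini_coefficient(counts):
    # Gini-coefficient of a non-negative vector: 0 = perfectly even spread, 1 = fully concentrated.
    x = np.sort(np.asarray(counts, dtype=float))
    n = x.size
    if n == 0 or x.sum() == 0:
        return 0.0
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

def ai_intensity(ai_pubs, total_pubs):
    # Share of a field's total publications that are AI-related, as a percentage.
    return 100.0 * ai_pubs / total_pubs if total_pubs else 0.0

# Toy example: AI-related publication counts for a handful of fields in one year.
field_counts = [1200, 30, 15, 8, 2, 0]
print(round(gini_coefficient(field_counts), 2))  # a high value indicates concentrated publishing
print(ai_intensity(250, 5000))                   # 5.0, i.e. 5% of that field's output is AI-related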
The social sciences field captures a highly diverse range of disciplines and contains some fields, such as geography, which were early adopters of AI and are driving recent increases [31]. Overall, there is an unambiguous pattern in the data: AI adoption in research has accelerated in the past few years and AI is now playing an important role in most disciplines. It represents a quarter of total research output in computer science.
V. DISCUSSION
The results from our bibliometric analysis across all fields of research are consistent with previous reviews of AI conducted within single, or more narrowly defined, fields of research. We found rates of adoption increasing sharply over recent years and likely to continue increasing into the future. This was also found in bibliometric analyses of human resources management [47], engineering [62], healthcare [19], supply chains [54], renewable energy [78] and education [65]. Similar patterns of AI adoption have been observed in [31], which examined the use of machine learning in geography and found AI techniques were already being applied by geographers in cartography, spatial statistics and remote sensing during the 1980s. More recent observations have been made in literature reviews of AI for dentistry [67], chemistry [4], food science [30], agriculture [57], marine science [36], econometrics [40] and veterinary science [16]. There have also been several studies [5], [17], [34], [77] identifying rising multi-disciplinary AI application, albeit with different scope, definitions and data sources from our own. Our results add to this evidence base, further demonstrating how AI is enabling, and most likely reshaping, long-standing processes of human knowledge discovery. This has been demonstrated within numerous well-defined and specialised research fields via bibliometric analyses and systematic literature reviews. Here we show these AI adoption trends are also observable at higher levels, across all fields of research in the physical sciences, health sciences, life sciences, social sciences and the arts and humanities. However, there are some critical uncertainties about how AI development and application by researchers will unfold over the decades ahead. One critical unknown relates to the extent to which the AI boom can be sustained. It is not the first time in history we have seen a boom in AI activity followed by a bust [23]. The second critical unknown relates to productivity gains. Evidence of widespread application is not the same as evidence of productivity enhancement. There is every possibility that AI will experience some form of Solow's paradox [2], where information and communications technologies are associated with modest or even negative productivity growth due to adaptation challenges. We first consider the risk of a downturn in AI adoption of opposite, and potentially equal, magnitude to the current upturn. Historical analyses suggest that there have been two significant surges in AI, referred to as "AI springs", followed by two downturns called the "AI winters". The dominant narrative is that the first AI spring was from 1956 to 1974, followed by the first AI winter from 1974 to 1981; the second AI spring was from 1981 to 1987 and the second AI winter was from 1987 to 1993 [61], [9]. Most historical analyses suggest the causes of the winters were related to hype and inflated expectations [21], [25], [38]. In the AI springs, there was much confusion and misunderstanding about what AI was and what it would be capable of doing.
Investment money attracted more investment money and AI-related activity increased rapidly. However, when investors realised that AI could not deliver on the perceived promises, research funding was substantially and suddenly reduced. So, are we currently within a third AI spring likely to be followed by a third AI winter? Some researchers have suggested this could be the case, noting that many of the conditions present shortly before the two previous winters are present today [25], [66], [75]. However, we suggest there is reason to believe the next AI winter is not imminent and may not come at all, at least not in the same form as earlier winters. First, our analyses show that the current surge in AI research far exceeds anything before in history, both in the depth (quantity) and breadth (diversity) of AI publishing. Second, the current surge in AI applications is coinciding with advances in hardware, software and cost-effective cloud computing resources, including the recent rise of specialised computing hardware that can handle matrix algebra and support machine learning algorithms [79]. Historical technical and financial barriers that previously limited AI adoption and use are less significant today, and emerging technologies such as quantum computing may lower them further still. Next, we can consider the issue of productivity. A key driver for adopting AI in scientific research is its potential to boost the productivity of researchers. Productivity declines in science have been a focus of scientific conferences [43], and some economists have noted that research outcomes have been declining despite increasing research efforts in fields such as agriculture, electrical engineering and medicine [6], [7], [39]. AI could serve as a general-purpose technology that can be applied to lift science and research productivity in all fields of study, similar to the impact of electricity in the early 1920s, when electricity was credited with productivity gains in the manufacturing sector and a period of rapid economic growth across the globe known as the "roaring twenties" [24]. While there is emerging evidence that AI is creating a productivity uplift in business [12], [33], [35], [42], [44], [74], this is not yet well demonstrated in the science, research, innovation and technology sectors. More research is needed to examine this issue. We also tend to hear more about the AI successes than the failures; it is hard to publish a failed AI study. However, the failures do happen and they can be costly. An evaluation of 62 published scientific studies using machine learning for COVID-19 diagnosis and prognosis found none of the models could be used for clinical purposes as they were subject to methodological flaws or biases [55]. Other reviews of machine learning applications in COVID-19 diagnostics have similarly identified a high prevalence of bias in these AI applications, which limits their clinical potential [73]. AI-based computer vision applications in radiology have also been criticised [53]. Recognising the shortcomings of AI, and the problems for which it is not well suited, will be an important part of its adoption in application domains. Despite these challenges, AI capability uplift is likely to be firmly on the agenda for individual researchers and research organisations over the coming decade. It is one of the most significant disruptors in history to the scientific method and to processes of knowledge discovery, and the current era is when adoption rates are steepest. There are two perspectives about how the future could unfold.
From one perspective, AI will be a useful tool which, if used properly, will help researchers do what they already do faster, safer, cheaper and better. This alone would be a huge boon to a research profession which is experiencing a productivity slump [6] and urgently needs a boost. From the other perspective, AI is a game changer that reshapes the fundamentals of knowledge discovery. For example, Gobble [18] explores a roadmap to general artificial intelligence, and Kitano [29] has proposed the "Nobel Turing Challenge", which "aims to develop a highly autonomous AI system that can perform top-level science, indistinguishable from the quality of that performed by the best human scientists".
VI. CONCLUSION
We are amid a worldwide surge in AI development and application for research. This is happening in the physical sciences, health sciences, life sciences, social sciences and the arts and humanities. While AI has surged in the past, none of the prior events comes close to the magnitude and breadth of the current situation. While some component of this surge may be hype or trend-following, there is likely to be an important and long-lasting substantive component. AI is likely to continue improving the speed, cost-efficiency, safety and overall productivity of scientific research. Beyond mere efficiency gains, over the coming two decades AI might fundamentally change the scientific method and human approaches to knowledge discovery. The overall implication of this study for researchers and research organisations is to invest in the many dimensions of AI capability uplift.
2023-05-21T15:16:54.414Z
2023-05-01T00:00:00.000
{ "year": 2023, "sha1": "4d0d70769451c0f4e702bf860da26d58fa33e81b", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "bcbf06f3291c97b307d8e684256ff709f1c552c9", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
237489613
pes2o/s2orc
v3-fos-license
Role of Cardiotocography in predicting perinatal outcome in high-risk pregnancy
DOI: 10.4328/ACAM.20281 Received: 2020-07-14 Accepted: 2020-08-11 Published Online: 2020-08-28 Printed: 2021-03-01 Ann Clin Anal Med 2021;12(3):313-316 Corresponding Author: Sangam Jha, Department of obstetrics and gynecology, AIIMS Patna, Phulwarisharif, Bihar, India. E-mail: sangam.jha78@gmail.com P: 91-9827388001 Corresponding Author ORCID ID: https://orcid.org/0000-0002-4349-1589
Abstract
Aim: In this study, it was aimed to evaluate the efficacy of fetal cardiotocography in predicting perinatal outcome. Materials and method: In this retrospective observational study, 400 gravid women with high-risk pregnancy fulfilling the eligibility criteria were enrolled. The results of CTG were studied according to NICE 2017 guidelines. Perinatal outcomes were studied by the color of liquor, Apgar score at one minute and five minutes, and NICU admission. Statistical analysis was done using a t-test, and a p-value <0.05 was considered statistically significant. Results: PIH was the most common risk factor, present in 32% of females. CTG was reactive in 163 (40.7%) patients and nonreactive (suspicious and pathological) in 237 (59.2%) patients. One hundred thirty (54%) females with nonreactive CTG had meconium-stained liquor compared to only 18 (11%) in the reactive group (p-value <0.05). In the reactive group, only 4% of babies had an Apgar score at 5 minutes <7, compared to 32.4% in the non-reactive group. Perinatal morbidity in the form of NICU admission was higher in the non-reactive group, affecting 77 (32.4%) patients compared to 7 (4%) patients in the reactive group. The sensitivity and specificity of CTG for predicting neonatal morbidity were 63% and 80.4%, while its PPV and NPV were 49.4% and 89.4%, respectively. Discussion: CTG showed high specificity and negative predictive value for detecting adverse perinatal outcomes in this study. It appears to be a simple, non-invasive test that can serve as a screening tool to detect fetal distress that is already present or likely to develop during labor in high-risk obstetric patients in centers with a heavy workload.
Introduction
Fetal surveillance during labor is necessary to ensure a safe passage of the fetus from an intrauterine to an extrauterine environment with minimum interventions. Birth asphyxia is a broad term that refers to intrapartum asphyxia sufficient to cause neurological damage in some newborns, and rarely intrapartum or neonatal death. The mechanism of labor itself presents a physiological stress to the fetus. Several intrapartum events, such as cord compression and compromise of feto-placental blood flow, may cause hypoxia and metabolic acidosis and have the potential to cause neurological injury [1]. Thus, intrapartum FHR monitoring is of paramount importance. FHR monitoring can be done either by intermittent auscultation or by electronic fetal monitoring. Following intermittent auscultation, continuous electronic fetal monitoring was introduced clinically in labor wards in the 1950s, with the emphasis on improving fetal birth outcomes by detecting fetal hypoxia before it leads to death or disability [2]. Intermittent auscultation can measure the baseline FHR, but other features such as beat-to-beat variability and accelerations and decelerations in response to uterine contraction remain unappreciated; these can be assessed only with CTG, and CTG has therefore taken on this role [3]. The FHR has 4 features, shown in Table 1, based on the NICE 2017 guidelines, and the different features of the CTG reflect fetal health status.
These guidelines are descriptive in character and categorize the CTG as normal, suspicious, or pathological. An abnormal CTG, sometimes severe enough to be described as a pathological trace, is commonly termed fetal distress and requires immediate delivery, whereas a suspicious pattern prompts expediting the process of delivery. Considering the wider clinical picture in interpreting the CTG, and taking timely and appropriate action based on the findings, may help prevent birth asphyxia. The main justification for CTG is that uterine contraction during labor decreases the placental circulation, and this is aggravated if chronic placental insufficiency has been present since the antenatal period. An abnormal tracing identifies such a deficiency and hence recognizes fetal compromise at an early stage to allow timely intervention. However, it is not rare for records to be false positive or false negative. A false positive means that the record is pathological yet a fresh, undepressed child is born without acidosis; a false negative means that, despite a normal CTG record, an asphyxiated/depressed child is born who may later manifest neurodevelopmental disorders [4]. Although the 2017 NICE guidelines do not recommend continuous CTG in low-risk women, risk factors can arise during labor, and therefore CTG is required to detect any changes reflecting fetal hypoxia. Thus, the aim of this study is to evaluate the predictive value of CTG in detecting fetal hypoxia during labor and to correlate the results of the CTG with perinatal outcome in high-risk obstetric cases.
Material and Methods
This retrospective study was conducted at the Department of Obstetrics and Gynecology, AIIMS Patna from January 2015 to December 2018. The study was started after obtaining institutional ethical committee approval. A total of 400 high-risk antenatal mothers who met the inclusion criteria were analyzed. Period of gestation was ascertained using a first-trimester scan or a reliable menstrual history if an early scan was not available. All demographic and medical data were obtained from the history sheet, and the CTG trace was categorized according to the 2017 NICE guideline as normal, suspicious, or pathological. Neonatal data were obtained from the baby sheet, and outcome was assessed according to staining of meconium, APGAR at 1 and 5 minutes, and NICU admission. Inclusion criteria were singleton pregnancy, >37 weeks of pregnancy, first and second stage of labor, cephalic presentation, onset of labor (either spontaneous or induced), and high-risk factors (postdatism, PIH, GDM, oligohydramnios, polyhydramnios, IHCP, anemia and previous LSCS). Exclusion criteria were POG <37 weeks, USG-confirmed severe congenital anomaly, acute hypoxic states such as abruptio placentae, and abnormal lie and presentation.
Interpretation of fetal heart rate pattern
• Baseline fetal heart rate: 110-160 bpm.
• Baseline variability: the difference between two lines drawn through the highest and lowest points of the trace in any 1-min segment. Normal value is 5-25 bpm.
• Accelerations: rise in FHR >15 bpm above baseline for >15 s.
• Decelerations: fall of FHR >15 bpm below the baseline, lasting >15 s.
• Early decelerations: FHR falls to no more than 15 beats below the baseline, and the lowest point of fetal heart rate occurs within 20 s of uterine contraction.
• Late deceleration: a deceleration in which the lowest point of fetal heart rate occurs after the peak of the uterine contraction, and that occurs in more than 50% of the 20-minute trace.
• Variable deceleration: any deceleration which does not have a consistent relationship to uterine contraction.
• Prolonged deceleration: a deceleration lasting >3 minutes.
Statistical Analysis
The data were analyzed using SPSS version 26. Groups were compared using the t-test. A p-value <0.05 was considered statistically significant.
Results
Four hundred gravid women were included in the study. Most of the patients (91%) were in the 20-30 years age group, 39% were primigravida and 61% were multigravida, 86% delivered at term and 14% were at post-term gestation. The commonest risk factor was PIH (32%), followed by IHCP (23%), GDM (18%) and postdatism (14%); 86% of the women had one risk factor, whereas 14% had multiple risk factors. Table 1 summarizes the maternal characteristics of the studied population. Of all 400 pregnancies in the study group, 163 (40.7%) CTG traces were reactive and 237 (59.3%) were nonreactive. Late deceleration with reduced variability was present in 57.75% of nonreactive records, whereas absent variability was observed in 28.75%. Among the reactive CTG group, 142 (87.1%) had a normal vaginal delivery, 10 (6.13%) had assisted vaginal delivery and 11 (6.7%) underwent cesarean section, whereas in the nonreactive group, 104 (43.9%) had a vaginal delivery, 132 (55.7%) had LSCS and 1 had an instrumental delivery (p < .0001). The most common indication for cesarean section was non-progress of labor in the reactive group and fetal distress in the non-reactive group. In the reactive group, 18 (11%) patients had meconium-stained liquor, as compared to 46 (35.4%) in the suspicious group and 84 (78.5%) in the pathological group (p < .0001). The incidence of birth asphyxia, as assessed by a low APGAR at 5 minutes, was higher in the pathological group (49.5%) than in the reactive group (4.2%). NICU admission was also higher among babies from the pathological group (49.5%) as compared to the suspicious (18.4%) and reactive groups (4.2%) (p < .0001) (Table 2). The sensitivity and specificity of pathological CTG for NICU admission were 63% and 80.4%, respectively (Table 3).
Discussion
The role of CTG in fetal monitoring is well documented, and the correlation of a pathological trace with the neonatal condition, as evaluated by APGAR and HIE, is well established. Although the vast majority of fetuses cope well during labor, the journey through the birth canal is stressful and the fetus may mount a stress response. Thus, changes observed on the CTG trace reflect the fetal response to ongoing hypoxia during labor, such as compression of the umbilical cord or reduction in placental blood flow due to uterine contraction [5]. Continuous fetal monitoring is essential for the fetus considered to be at risk of developing intrapartum hypoxic injury. It is a noninvasive, recordable method of fetal surveillance and is a logical solution to the human lapses of manual monitoring of fetal heart rate during labor. A total of 400 cases were included in the study; the majority belonged to the 20-30 years age group, 39% were primigravida and 61% were multigravida. The most common risk factor was PIH (32%), followed by IHCP and GDM. These data were comparable with studies by Mammen et al [6] and Khatum et al [7]. In the present study, the incidence of cesarean section was 55.69% in the nonreactive group compared to only 6.7% in the reactive group.
This is comparable to the study by Rehman et al [8], where operative delivery was required in 2.3% of the reactive group and 60.8% of the non-reactive group. In this study, out of 400 cases, 163 (40.7%) had reactive CTG and 237 (59.3%) had non-reactive CTG traces. Among nonreactive traces, absent variability with late decelerations contributed most to abnormal outcomes (57.75%). These findings were comparable with a study done by Khatum et al [7], who demonstrated that the fundamental component of the ominous FHR pattern is absent or decreased beat-to-beat variability. Birth itself is a stressful situation, and stress-mediated biochemical events may regulate the passage of meconium occurring in utero or after birth. In this study, out of 237 nonreactive CTG traces, 130 had meconium-stained liquor; MSL was present in 35% of the suspicious group and 78% of the pathological group, whereas only 11% of the reactive group had MSL. This was statistically significant at a p-value <.001 and supports the success of CTG in detecting fetal distress. Gupta et al [9] also reported similar results, where 66.2% had MSL with non-reactive CTG. Similar findings were reported in Sandhu et al's [10] study, in which MSL was observed in 15% of reactive, 55% of suspicious, and 73% of pathological CTG traces in high-risk patients. CTG is thus a reliable tool to detect both pre-existing fetal distress and distress that appears during labor, and this was well correlated in our study (Table 3: predicting ability of pathological CTG for different parameters of perinatal outcome). In this study, out of 237 non-reactive cases, 53 (49.5%) fetuses in the pathological group and 24 (18.4%) in the suspicious group had an APGAR <7 at 5 minutes, compared to 7 (4%) in the reactive group (p < .05); the difference is statistically significant. This finding suggests that non-reactive CTG correlated well with a low APGAR at 5 minutes. A similar observation was made by Gupta et al [9], where 36.5% of fetuses had APGAR <7 at 1 minute and 60.8% had APGAR <7 at 5 minutes in the nonreactive group. However, in Atul K Sood et al's [11] study, only 5.8% of fetuses had APGAR <7 at 1 minute and 5.2% had APGAR <7 at 5 minutes with a non-reactive CTG trace, which may be due to the difference in risk factors among the studied populations. In Blessing David and Saraswathi et al's [12] study, 68.4% of fetuses had APGAR <7 at 5 minutes in the pathological group. Neonatal admission was higher in the pathological group: 53 (49.5%) fetuses from the pathological CTG group were admitted to the NICU, compared with 24 (18.4%) from the suspicious group and only 7 (4%) with reactive CTG; the difference was statistically significant with a p-value <.05. A similar result was seen in the study conducted by Kumar et al [13], where 44.5% of babies with non-reactive CTG and 6% of babies with reactive CTG were admitted to the nursery. Atul K Sood et al [11] in their study found a significant correlation between APGAR <7 and neonatal admission, which was more commonly associated with non-reactive tracing, as 11.2% of babies with non-reactive CTG were admitted to the NICU. Similar rates of NICU admission were reported in the study by Saima U et al [14]: 51% in the non-reassuring CTG group compared to 18.4% in the reassuring group. Blessing David and Saraswathi et al [12] in their study found that 47.3% of fetuses in the ominous group had a NICU stay.
This suggests that CTG can detect fetal asphyxia in a significant number of cases, as evidenced by low Apgar scores and high NICU admission among the nonreactive group. In our study, it is evident that CTG has a sensitivity of 63% and a high specificity of 80.4%, with an NPV of 90%, in predicting perinatal outcome among high-risk patients. In a study conducted by Rehman et al [8] on 192 patients, a sensitivity of 60% and a specificity of 94.8%, with a PPV of 57% and an NPV of 90%, were shown for fetal distress. Ingemarsson et al [5] and Qureshi et al [15] also reported very high specificity in their studies. The high specificity of the test means that a normal test accurately excludes hypoxic fetal status at the time of testing. Thus, taking all parameters together, CTG is an important tool for intrapartum fetal surveillance and gives immense satisfaction in saving, in a timely manner, the lives of many babies for whom the obstetrician strives and the mother longs. However, the use of EFM is controversial. Many societies do not recommend continuous EFM in all pregnancies. Thacker et al [16] also suggested that the use of EFM has limited effectiveness and carries an increased risk of interventions. The Cochrane review recommends limiting continuous EFM to high-risk pregnancies. In developing countries, where antenatal care is inadequate and a large number of high-risk pregnancies are delivered in crowded settings with a low doctor-to-patient ratio, CTG helps in the early detection of fetal distress, thereby facilitating timely intervention to improve perinatal outcome.
Conclusion
The present study supports the role of CTG in high-risk obstetric patients. The test has high specificity and negative predictive value and appears to have a role in obstetric wards with a large number of high-risk cases and limited resources. It is inexpensive, simple to use, and causes no inconvenience to the patient. Future research is required to determine supplemental diagnostic modalities which can enhance the positive predictive value of CTG.
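As a note on the diagnostic metrics quoted above, the following sketch shows how sensitivity, specificity, PPV and NPV are derived from a 2x2 cross-tabulation of test result (pathological CTG or not) against outcome (for example, NICU admission). The counts used here are hypothetical and are not the study's Table 3 data.

def diagnostic_metrics(tp, fp, fn, tn):
    # tp: abnormal test with outcome; fp: abnormal test without outcome;
    # fn: normal test with outcome;   tn: normal test without outcome.
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts, chosen only to illustrate the arithmetic.
print(diagnostic_metrics(tp=30, fp=20, fn=10, tn=140))
# {'sensitivity': 0.75, 'specificity': 0.875, 'ppv': 0.6, 'npv': 0.933...}

The high NPV reported by the study corresponds to the tn / (tn + fn) term: when the trace is normal, an adverse outcome is unlikely.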
2021-08-27T16:27:05.845Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "4f5935746d510828b16d97305b480bec4c0227d3", "oa_license": null, "oa_url": "https://www.bayrakol.org/tr/2021/march/original-article/item/download/2888_0dd9feebdeedc22d9cff924b70f710d8", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "099fdf871d2df8aef74fad7df5dd94c5a026c445", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
6284503
pes2o/s2orc
v3-fos-license
Ipragliflozin in combination with metformin for the treatment of Japanese patients with type 2 diabetes: ILLUMINATE, a randomized, double-blind, placebo-controlled study.
This multicenter, double-blind, placebo-controlled study examined the efficacy and safety of ipragliflozin, a sodium-glucose co-transporter 2 inhibitor, in combination with metformin in Japanese patients with type 2 diabetes mellitus (T2DM). Patients were randomized in a 2 : 1 ratio to 50 mg ipragliflozin (n = 112) or placebo (n = 56) once daily for 24 weeks, followed by a 28-week open-label extension in which all patients received 50 or 100 mg ipragliflozin while continuing metformin. The primary outcome was the change in glycated haemoglobin (HbA1c) from baseline to week 24. HbA1c decreased significantly in the ipragliflozin group (-0.87%; adjusted mean difference from placebo: -1.30%; p < 0.001). The overall incidence of treatment-emergent adverse events was similar in both groups, although pollakiuria and constipation were more common in the ipragliflozin group; thus, ipragliflozin significantly improved glycaemic control and reduced body weight without major safety issues in Japanese patients with T2DM.
Introduction
Ipragliflozin, a sodium-glucose co-transporter 2 inhibitor [1], improves glycaemic control by promoting urinary glucose excretion in patients with type 2 diabetes mellitus (T2DM) [2][3][4][5][6]. Western studies have shown that ipragliflozin in combination with metformin improves glycaemic control with a low incidence of adverse events [7,8]. We conducted a 24-week, randomized, double-blind, placebo-controlled trial with a 28-week open-label extension to confirm the efficacy and safety of adding ipragliflozin to metformin to treat Japanese patients with T2DM.
Methods
The methods are described in more detail in the Supporting Information. Patients aged ≥20 years with T2DM (duration ≥12 weeks) being treated with metformin (≥6 weeks), with an HbA1c (National Glycohemoglobin Standardization Program) level of 7.4-9.9% and a body mass index of 20.0-45.0 kg/m², were eligible. All patients provided written informed consent before participating in this study. Eligible patients entered a 4-week observation period and a 2-week run-in period in which they received placebo, after which they were randomized to either 50 mg ipragliflozin or placebo (2 : 1 ratio) for 24 weeks (treatment period 1; Figure S1, Supporting Information). Patients whose HbA1c values had declined from baseline and were <8.4% at the end of treatment period 1 were allowed to enter an open-label extension of 28 weeks (treatment period 2). In treatment period 2, the ipragliflozin dose could be increased to 100 mg if HbA1c was ≥7.4% at week 20. Patients were followed up for 4 weeks after study completion or treatment withdrawal. The study was approved by the institutional review board at each participating site. The study was conducted in accordance with Good Clinical Practice, the International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use, as well as local laws and regulations. The study was registered at ClinicalTrials.gov (identifier NCT01135433). The primary efficacy variable was the change in HbA1c from baseline to week 24. The secondary efficacy variables included body weight, waist circumference, fasting plasma glucose (FPG), fasting serum insulin (FSI), plasma leptin, and adiponectin levels.
Homeostasis model assessment of insulin resistance (HOMA-R) and homeostasis model assessment of β-cell function (HOMA-β) were also calculated. Safety outcomes included vital signs, physical examination, 12-lead ECG, haematology, biochemistry, urine analysis and adverse events.
Results
This study was conducted between May 2010 and November 2011 across 34 sites in Japan. The disposition of patients is summarized in Figure S2. Overall, 56 patients were treated with placebo and 112 with ipragliflozin, of whom 42 and 110, respectively, completed treatment period 1. The baseline characteristics of patients in both groups were generally similar (Table 1), except that patients in the ipragliflozin group had lower FPG and less frequently used hypoglycaemic agents other than metformin than patients in the placebo group before the start of the study. The mean duration of exposure in treatment period 1 was shorter in the placebo group (147.3 ± 41.79 days; mean ± standard deviation) than in the ipragliflozin group (168.3 ± 7.12 days), reflecting the higher discontinuation rate in the placebo group. Of the 96 patients in the ipragliflozin group who entered treatment period 2, 90 completed this period. The decreases in FPG, body weight, and waist circumference and the increase in plasma adiponectin levels from baseline to week 24 were significantly greater in the ipragliflozin group than in the placebo group (Table 2, Figure S3B, C). The reductions in FSI and leptin levels were not significantly different between the two groups (Table 2). The increase in high-density lipoprotein cholesterol levels was significantly greater in the ipragliflozin group than in the placebo group, but the changes in the other lipid levels were not significantly different between the two groups. Systolic blood pressure decreased slightly in the ipragliflozin group, but the change was not significantly different between the two groups. There was no change in HOMA-β from baseline to week 24 in the ipragliflozin group, but it decreased in the placebo group (Table 2); however, these results should be interpreted with caution and re-evaluated using other methods because HOMA-β is a function of FPG and fasting insulin levels. Efficacy outcomes in treatment period 2 are presented in the Supporting Information (Appendix S2 and Figure S4). Table S1 shows all the treatment-emergent adverse events (TEAEs) occurring in ≥2% of patients in either group in treatment period 1. The TEAEs were distributed similarly in both groups. None of the patients died during the study. TEAEs leading to discontinuation were less frequent in the ipragliflozin group than in the placebo group. Two patients in each group experienced serious TEAEs (cataract and anal abscess in the placebo group, and worsening of diabetes and carpal tunnel syndrome in the ipragliflozin group). The incidence rates of pollakiuria and constipation, events possibly related to osmotic diuresis, were higher in the ipragliflozin group than in the placebo group (5.4 vs. 1.8% and 4.5 vs. 1.8%, respectively). Cystitis was less frequent in the ipragliflozin group than in the placebo group, and genital infection was not reported. There were no episodes suggestive of hypoglycaemia in either group. Safety outcomes in treatment period 2 are presented in the Supporting Information (Table S2). The total daily dose of metformin at screening did not influence the incidence of TEAEs in either treatment period (Tables S3 and S4).
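Since HOMA-β is noted above to be a function of fasting glucose and fasting insulin, the conventional Matthews formulas are sketched below for orientation. This is an assumption-labelled illustration rather than the trial's own calculation, whose exact derivation is described in its Supporting Information.

def homa_indices(fpg_mmol_l: float, fsi_micro_u_ml: float):
    # Conventional formulas: HOMA-R for insulin resistance, HOMA-beta (%) for
    # beta-cell function; both depend on fasting glucose and fasting insulin.
    homa_r = fpg_mmol_l * fsi_micro_u_ml / 22.5
    homa_beta = 20.0 * fsi_micro_u_ml / (fpg_mmol_l - 3.5)
    return homa_r, homa_beta

# Hypothetical example: FPG 8.0 mmol/L, FSI 10 microU/mL.
print(homa_indices(8.0, 10.0))  # approximately (3.56, 44.4)

Because FPG appears in both expressions, a fall in fasting glucose alone can raise the computed HOMA-β even without any true change in β-cell function, which is one reason the result above is flagged for cautious interpretation.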
Discussion
The present study showed that ipragliflozin significantly improved glycaemic control in terms of HbA1c and FPG at 24 weeks, and its efficacy was maintained over 52 weeks. Patients treated with ipragliflozin also experienced reductions in body weight and waist circumference, as well as an increase in adiponectin levels. No hypoglycaemic events were reported. The overall incidence of TEAEs was not significantly different between the two groups; however, pollakiuria and constipation were more common in the ipragliflozin group than in the placebo group. The former was probably attributable to drug-induced osmotic diuresis. Our results support those of Western studies showing the efficacy and safety of ipragliflozin in combination with metformin [7,8]. Likewise, the addition of dapagliflozin or canagliflozin to metformin was reported to reduce HbA1c and body weight without major adverse effects [9,10]. Limitations of the present study include the open-label, non-randomized design of treatment period 2, the increase in ipragliflozin dose in some patients in treatment period 2, and the limited generalizability of the study population relative to Japanese patients with T2DM in actual clinical settings.
Table notes: Values are means ± standard deviation except for adjusted mean differences, which are presented with confidence intervals (CIs). HbA1c, glycated haemoglobin; DBP, diastolic blood pressure; FPG, fasting plasma glucose; FSI, fasting serum insulin; HDL-C, high-density lipoprotein cholesterol; HOMA-R, homeostasis model assessment of insulin resistance; HOMA-β, homeostasis model assessment of β-cell function; LDL-C, low-density lipoprotein cholesterol; LOCF, last observation carried forward; SBP, systolic blood pressure; TC, total cholesterol. *Analysed using ANCOVA with treatment group as a fixed effect and the baseline value as a covariate. †The difference in change from baseline between the two groups was analysed using Student's t-test.
Supporting Information
Additional Supporting Information may be found in the online version of this article:
Figure S1. Study design.
Figure S2. Patient disposition.
Figure S3. Time course of HbA1c (A), FPG (B), and change in BW (C) over 24 weeks.
Figure S4. Time course of HbA1c over 52 weeks in patients treated with ipragliflozin in both treatment periods, according to the ipragliflozin dose in treatment period 2.
Table S1. Treatment-emergent adverse events occurring during the first 24 weeks of the study (treatment period 1).
Table S2. Treatment-emergent adverse events occurring over 52 weeks of treatment with ipragliflozin (treatment periods 1 and 2).
Table S3. Incidence of treatment-emergent adverse events in treatment period 1 according to treatment group and the total daily dose of metformin at screening.
Table S4. Incidence of treatment-emergent adverse events in treatment periods 1 and 2 according to the total daily dose of metformin at screening.
Appendix S1. Supplementary methods (Patients, Study design and treatments, Efficacy and safety outcomes, and Statistical analysis) and references.
Appendix S2. Supplementary results (Efficacy outcomes at week 52 and Safety outcomes at week 52).
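The ANCOVA mentioned in the table notes above can be illustrated with a minimal sketch using statsmodels; the data frame, column names and values are hypothetical and merely stand in for the trial's week-24 data.

import pandas as pd
import statsmodels.formula.api as smf

# Change from baseline modelled with treatment group as a fixed effect and the
# baseline value as a covariate, mirroring the analysis described in the table notes.
df = pd.DataFrame({
    "hba1c_change":   [-0.9, -1.1, -0.7, 0.3, 0.4, 0.2],
    "hba1c_baseline": [ 8.1,  8.6,  7.9, 8.2, 8.4, 8.0],
    "treatment":      ["ipragliflozin"] * 3 + ["placebo"] * 3,
})
model = smf.ols("hba1c_change ~ C(treatment) + hba1c_baseline", data=df).fit()
# The C(treatment) coefficient estimates the adjusted mean difference between groups.
print(model.params)

Adjusting for the baseline value in this way accounts for chance baseline imbalance between the randomized groups when estimating the treatment effect.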
2018-04-03T06:15:17.878Z
2014-07-31T00:00:00.000
{ "year": 2015, "sha1": "3a87d9081b535fe9627919b442de8f36da599950", "oa_license": "CCBYNCND", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/dom.12331", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "3a87d9081b535fe9627919b442de8f36da599950", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
233919372
pes2o/s2orc
v3-fos-license
THE PROMOTION OF ORGANIZATIONAL CULTURE: THE CASE OF GREECE
Organizational culture constitutes a fundamental characteristic of the educational organism because, on the one hand, it contributes to the shaping of its character and the way of thinking of its members and, on the other hand, it is connected to the productivity of educators and the academic performance of students. Pivotal is the role of the educator in shaping the culture of the educational organism, a culture that moulds the imprint and identity of the school unit and constitutes a criterion for its effectiveness. Culture constitutes a tool in the hands of the leader for goading the members of the school community onto a developmental trajectory, creating organizational conditions which contribute to learning outcomes and positive change. We conducted a survey using semi-structured interviews with a number of principals of secondary education school units within Attica Prefecture, concerning the way in which they promote organizational culture in their school unit, the role of other stakeholders, the promotion of a cooperative climate, and the association of culture and learning outcomes. A lack of strategic orientation for the promotion of culture is clearly evident from the results. The ways in which the members of the educational community are involved in organizational culture, when evaluated cumulatively, indicate a strategic handling of the promotion of organizational culture; evaluated separately, however, they lack potency and confirm the inability to approach culture holistically. Practices for the consolidation of a cooperative climate and the principal's relationship with learning outcomes are confirmed.
School culture has an important impact on the outcome of school principals' improvement efforts (Deal & Peterson, 2016). School leader research (Leithwood, Harris & Hopkins, 2019; Leithwood, Sun & Pollock, 2017) shows the importance of principals building an organization which supports a professional and collaborative culture. More specifically, effective leaders seek the creation of a culture of cooperation and trust in their schools, founded on a common bond of beliefs and values (Gurr, Drysdale & Mulford, 2005; Lucas & Valentine, 2002; Southworth, 2005), which have to be accepted and respected by all the members of the school unit (Leithwood & Riehl, 2003). In previous research, cooperation has been mentioned as an aspect of a strong school culture (Aslan, Özer & Ağıroğlu, 2009; Şahin, 2010; Şahin & Fırat, 2009; Saphier & King, 1985). Furthermore, not only cooperation but also trust was mentioned as an aspect of a strong school culture in the same studies. More specifically, the promotion and support of a collective culture constitutes a distinctive trait of the successful leader (Day & Harris, 1998), who creates conditions for cooperation among the members of their staff, cultivating the belief that it is positive for them to cooperate, to share the problems they face but also their successes, and to exchange opinions and ideas regarding the teaching practices they implement (Dean, 1999). Furthermore, by developing their emotional culture, they pay attention to the creation of a positive climate among colleagues, are able to build or improve the relations between people and groups who have different ways of thinking, and abet the foundation of collegial relationships (Fullan, 2002).
Therefore, principals are expected to work based on the unique culture and values within their schools, which means there is a greater emphasis on building relationships with all school stakeholders (Sergiovanni, 2000). The influential principal is perceived as someone who embraces the power of the relationships among the students and adults in the building (O'Malley, Voight, Renshaw & Eklund, 2015). The cultivation of a common sentiment promotes cohesiveness and a climate of trust among the members of the organism (administrators, educators and students) (Çelikten, 2006; Ozdemir, 2006), and compels principals to promote the cultivation of a system of values and instruction rather than act as bureaucrats (Sisman & Turan, 2004; Turan & Bektas, 2013), supporting innovative action, boosting the allocation of work and the initiative towards the development of the school, exercising influence and instructing subordinates (Turan & Bektas, 2013). Administrators in such an environment have a clear sense of duty and purpose, develop positive relationships with the members of the organization, and transform the school, as a sustainable structure, into a learning organization with the participation of all partners (Şimşek, 2003). It has been shown that there is a relationship between support for teacher learning and the culture of the school (Jurasaite-Harbison & Rex, 2010). Collaborative leadership has been shown to have a positive correlation to teacher efficacy (Arbabi & Mehdinezhad, 2015), and researchers have persisted in framing leadership as the driver for change and performance improvement in schools (Heck & Hallinger, 2010). In conclusion, the principal has a prominent role in shaping school culture, being a conveyor of qualitative teaching who goads the teaching staff to pursue further education, shapes a future vision, contributes to effective cooperation among the staff, takes the appropriate decisions and has the ability to resolve crises in the school environment (Cavanagh & Dellar, 1997; Godfrey, 2016). Leadership and culture have been shown to correlate directly with student achievement (Cetin & Kinik, 2015; Helterbran, 2010; O'Donnell & White, 2005; Perilla, 2014; Whitaker, 2017; Wilhem, 2016; Yahaya et al, 2010), as leaders regularly reflect on their beliefs and values with regard to the purpose of education and act to create a culture and climate that supports student achievement (Darling-Hammond, 2007). The leader cultivates a social environment which supports learning and is distinguished by the existence of vision, professional culture, common decision-making structures, and involvement of the parents and the community (Leithwood & Jantzi, 2005), that is to say, by the creation of organizational conditions which contribute to positive change (Darling-Hammond et al, 2010, p. 14-16). Establishing collaborative and congenial working relationships with administrators and teachers and nurturing teacher-teacher relationships through support of professional learning communities has been found to be effective in closing the achievement gap for learners (Leithwood, 2010). The effective leader builds a culture that positively influences teachers, who, in turn, positively influence students (Marzano, Waters & McNulty, 2005, p. 47). The influence of leadership and culture on learning outcomes is also evident in the association of poverty and school culture.
Namely, while it is challenging to improve academic performance at a low-achieving, high-poverty school, research suggests that it can be done (Carter, 2000). Culture has been found to be the necessary or dominant theme in research examining high-poverty schools that were successful (Barth et al, 1999; Kannapel & Clements, 2005; Ragland et al, 2002). In conclusion, school principals are expected to support and help develop a strong school culture in which students and teachers have a high motivation to learn and teach (Karadağ & Özdemir, 2015), with sincere and honest relationships among school members and a sense of acting together (Kalkan et al., 2020). It is worth noting that the scope of the leader's action also depends upon the type of educational system, centralized or decentralized. The Greek educational system has been characterized as centralized with decentralizing tendencies (Katsaros, 2008; Lainas, 1993); essentially, though, it continues to be "centralized, bureaucratic, inflexible, wasteful, with characteristics of legislative complexity, lack of continuity, and time-consuming procedures" (Saitis, 1997, as cited in Ifanti & Vozaitis, 2005, p. 31). Within a decentralized framework the leader chooses the basic mission of the school and the way they will seek its completion together with the members of their team, something which, evidently, they cannot choose in a centralized system. The general aim of the present study was the exploration of the practices with which the secondary school principals of Attica Prefecture promote organizational culture in the school unit. Studying the notion within the frame of a centralized system like the Greek one, in which the headmaster mainly deals with the administrative operation of the school as an organization through bureaucratic procedures involving day-to-day routine and conductive administrative tasks (Pashiardis, 2001), is indeed a challenge, if headmasters may accept the central educational policy sceptically and shape, along with all members of the school unit, its "internal" policy (Leech & Fulton, 2008; Sergiovanni, 1991; Williams, 2009); it also enables a comparative validation of the Greek educational reality in relation to European or world findings. The location of the research, as well as the choice of the qualitative method, contribute to overcoming the regional validity of the conclusions and to obtaining a realistic evaluation regarding the administration of the educational organism. The qualitative approach was selected so that participants could express themselves freely, with completeness and clarity, without limiting their thinking. Thus an in-depth understanding of human action and behavior, which is determined by social processes and conditions, is achieved (Cohen, Manion, & Morrison, 2007; Iosifidis & Spyridakis, 2006). As a tool for the collection of data, the semi-structured interview was chosen, as it allows the investigation of complex social processes, behaviors, attitudes and values of the interviewees, allowing researchers to analyze the answers as well as other matters that may emerge throughout the interview (Berg, 2001; Cohen et al., 2007; Fontana & Frey, 1998; Miles & Huberman, 1994). The research was conducted between February and April 2017. The population of the study consisted of the principals of all public junior high schools and high schools within Attica Prefecture during the 2016-2017 school year.
Attica Prefecture is representative of Greece, as 34% of the country's junior high schools and 35.4% of its high schools operate there. In order to increase the internal validity of the research (Lincoln, 2001), even though it is not necessary, we used stratified sampling. Attica Prefecture schools were recorded in numerical order, by administrative education zones, noting the type of school (High School - Lyceum) and the principals' gender. A random number table was created (Kendall & Smith, 1938) and schools were selected based on two conditions, school type and principal's gender. Finally, our sample included an equal number of schools concerning the conditions of "Administration Zone" (1 from each Administration Zone and 2 from the largest ones), "school type" (High School - Lyceum) and the principals' gender (male - female). (In High Schools, study and attendance is mandatory for students aged 13 to 15 years old; by contrast, study and attendance is not mandatory for Lyceum students aged 16 to 18 years old.) We ended up with a total of twenty schools selected. In order to arrange the interviews, we contacted the principals selected, provided the necessary clarifications and set up a date and meeting place for each interview. Due to their busy schedules, six out of the twenty principals selected refused to participate in the research. The final sample consisted of ten principals out of the fourteen who accepted to participate. We tried to keep our sample balanced with reference to the condition of "gender". Before the interviews, 2 pilot interviews were conducted in order to anticipate and avoid possible misunderstandings. Also, in order to increase the validity of the research, we gave the transcripts to two participants, who confirmed the accuracy of their statements (participant validation) (Symeou, 2007). Theoretical saturation was achieved at the eighth interview, but two more interviews were conducted to enhance the validity of the research (Polit & Hungler, 1999). All principals have been serving in education for 23 to 35 years; their years of service as principals vary from 2 to 26. Two (2) possess a master's degree and one (1) also a PhD. One of the principals has had further education in administration/management. For the analysis of the data collected, the method of thematic content analysis was selected (Braun & Clarke, 2006). Lastly, we took into account the ethics and the code of conduct to ensure anonymity and confidentiality, providing information regarding the aims of the research and the right of voluntary participation, and ensuring respect for private data and protection of the right to privacy. More specifically, according to the participants, the practices and the results produced regarding the matter in question are derivatives of personal traits, like prestige or an innate talent in administration (I. 2); it is a matter of inspiration and persuasion of the stakeholders of the school unit (I. 5, 9). The centralization, however, of our educational system, the functional framework that the state defines (I. 2, 8) and personal wants (I. 10) suffocatingly limit the boundaries of its creation. As a result, the leaders' abilities emerge as developer and coordinator of the function of the school unit (I. 4, 7), as fashioner of a calm climate (I. 1), as motivator and facilitator of the organization of events (I. 3, 6, 10), as an example (I. 5, 6),
as leader of students and educators (I. 3, 9), as a paragon of industriousness who is really present, aiming at the adaptability of the stakeholders of the school unit to changes through training (I. 5), driven by their love for the child and their undertaking (I. 3), and as a good listener (I. 7).
"…defining is the role of the principal in shaping culture and they are dependent upon their prestige, the functional framework the state dictates… a matter innate and of talent… the other factors (parents, students, teachers) have little contribution and influence… as the principal exercises a main and defining role in appropriate function" (I. 2)
"…the only thing they can… is the creation of a calm climate…" (I. 1)
"if they want... to a small extent...to promote activities...cultural events..." (I. 10)
"…to suggest activities and through being a good example… and they themselves participate in the actions" (I. 6)
"… to be an example, has to be the role model…industrious, to be present, not absent… to motivate the colleagues towards training, not authoritatively… we adapt to changes… I try to convince the teachers' association, to inspire them…" (I. 5)
"It is fundamental for them to be in front… to urge… to get interactive whiteboard for there to be visits, events… with a great love for the school and the children… it is never the child's fault, there is always something else that lurks at the back… my tenure in technical education is very important, everyone should go through technical education… it is a great lesson to love these children who are not good students but they are remarkable and should be helped to find their goal…" (I. 3)
"…example… to be the coordinator… to listen…" (I. 7)
"the principal exercises a main and defining role in appropriate function" (I. 4)
"…the others follow…" (I. 9)
"it can only happen if the role of the principal is significantly strengthened… powers so that they can procure tangible results, but at the same time be evaluated very strictly… the school to be open… to have a strong and cooperative parents-teachers association… seminars clearly for educators, but also for parents… innovative actions… tasks… competitions… connection of the school to the university regarding tuition matters… programs… school culture which is both productive and able to positively graft society… self-governance of the school… in private financial standards… to be accountable… for there to be a reallocation of the teaching time within a year… ideas for good practices and not to interrupt the educational process within the year so as to make them go through seminars…" (I. 8)
Through this prism, the shaping of culture aims at a school open to the local and wider society, at a strong and cooperative parents-teachers association, at teachers continuously being trained, but also parents (ideally during the summer period), at innovative teachings and actions, at the connection of the school to the university regarding matters of tuition, and at the eradication of bureaucracy. There is, however, a paradox: while the administration of the school is of fundamental importance, the principal does not have the required authority. Were such authority granted, it would naturally entail strict evaluation as well, for a self-governed public school, a public school that would seek accountability (I. 8). Subsequently, the principals' opinions were explored on the role of the others involved in the education process, namely educators, students and parents.
The defining and significant role of the principal in the good functioning of the school is universally accepted, with a smaller (I. 2, 10) or larger contribution from the other factors of the educational process. More specifically, it is ascertained that the parents-teachers association is governed by introversion, a characteristic which is amplified in financially deprived areas, a factor that decisively affects the students' culture (I. 4), or by a lack of cooperativeness, a corollary of the wider crisis of values in the modern era (I. 5), or by participation only to the extent that it is possible (I. 1, 3, 8, 9), although their presence in the school reality and their consent in the actualization of school activities are deemed necessary (I. 1, 6). As a result, administration through information, persuasion and argumentation (I. 5, 3, 9, 8), even through diplomatic channels and without tampering with personal principles and values (I. 9), seeks to ensure a climate of cooperation and satisfaction among the members of the school unit, particularly teachers and students, and respect for its rules of operation. In conclusion, the participation of teachers and students in decision-making, to the extent they are able (I. 7), the teachers' association's freedom to take initiative (I. 4), the corroboration and support of educators, as well as the principal's flexibility of movement (I. 8, 9), constitute conditions for the maximum performance of the school unit. The principals' views are illustrated in the following excerpts:
"the other factors (parents, students, teachers' association) have little contribution and influence…, the principal exercises main and defining role to the appropriate function" (I. 2)
"…almost everything passes through the principal's hands" (I. 10)
"the parents… an introversion is prevalent… particularly in the area that is deprived… the area affects the children's culture… a teachers' association free to take initiative" (I. 4)
"… very serious… the students should be convinced to cooperate… that in the school there exist certain rules… the parents are many times more difficult to handle than the students, particularly today in a period of total crisis…" (I. 5)
"… each contributes according to their capacities" (I. 1)
"…the colleagues… administration of the school cooperative… students do not play a significant part, they follow, do not have complaints, they like the function of the school, they haven't even occupied the school… the parents-teachers association stands by us, up to the point they are able to…" (I. 3)
"… if the teachers' association do not approve of the actions, they will not come to be… objections by parents make the situation more difficult… their role has to be supportive…" (I. 6)
"no principal can, no law can be implemented if those who must implement it do not want to, so no instruction, no behest of mine will be implemented if the teachers or students themselves have not understood nor co-decided… to the point they are able to… they have not understood why this must be done or have not participated in the making of this decision" (I. 7)
"…they have to find ways together with their teachers… flexibility… to inspire the teachers… to accommodate the teachers… to corroborate… the principal escaped their fear of responsibility…" (I. 8)
"… for there to be cooperation and not only with the teachers' association but also with the children, parents, local area, mayor, bishop, with everyone and to always be able to find the right balance and to always be a diplomat too without violating your principles…" (I. 9)
To the question of how the climate of cooperation is promoted, one principal attributed the lack of a climate of cooperation to the teachers' individualism (I. 10), considering that the administration of the organism is not to be held liable. Thus, cooperation, even though considered absolutely necessary for a school organism, is not always achieved (I. 1). The means implemented to promote a climate of cooperation are dialogue, information, the submission of suggestions, successful communication (I. 1, 4, 3, 5, 6, 7), the collaborative handling of educational issues, the exchange of educational material, and the effort to make the class more accessible (I. 5). Individual rewards towards ensuring a calm climate and appeasing rivalries (I. 1), as well as the development of interpersonal relationships at a level beyond the standard (I. 7), also contribute to the creation of a cooperative climate. It is noted, though, that cooperation is promoted by the administration (I. 8, 9). The principal, by being an example, by instructing (I. 2, 8, 9), by training their colleagues in this philosophy and by giving a sense of freedom (within reason), creates a culture with which even the negatively predisposed teachers are made to agree (I. 8, 9). The principals' own words are indicative:
"…efforts are made but aren't fruitful… everyone looks after their own interests…" (I. 10)
"It is behooved that teachers cooperate with each other… Sometimes it is achieved, sometimes it is not… through dialogue, information… I try to lead them to this direction… I use awarding those teachers who work in a special way… individual, in order not for there to be conflicts within the teachers' association… it may be unfair, because the educator's work is not acknowledged, but the calm climate takes priority" (I. 1)
"…dialogue constitutes the basis of cooperation" (I. 4)
"… you are welcome to discuss about the students, this class, exchange of opinions… these notes will help you, take mine, and the other person theirs… open your class to university students…" (I. 5)
"… with communication as a tool" (I. 3)
"I make suggestions… I address people who I think will be more receptive…" (I. 6)
"We discuss about everything… we take care to have relationships beyond the standard at the school, we spend time together…friendly…" (I. 7)
"Through personal example and instruction together with the notion of assistance I instruct showing and knowing what has to be done" (I. 2)
"I have to find the way for cooperation… teach them…the teaching potential is liberated…" (I. 8)
"The climate of cooperation is promoted by the head, the principal… the principal will fundamentally take the first steps… give the rhythm, the climate of trust will gradually develop, even those who do not want to cooperate or are not cooperative or are negatively predisposed will be made in the end to cooperate by the majority of the teachers' association… there has to be a middle ground…" (I. 9)
The last question concerned the relationship between culture and learning outcomes and the principal's contribution to them.
"There is no relationship between culture and learning outcomes… no contribution by the principal… they depend on the teacher… and the educational framework of the ministry" (I. 2)
"…it is up to the teacher and whether they are willing and able who often… are neither willing nor able… to a large extent" (I. 10)
The remaining interviewees, by contrast, linked culture to learning outcomes: "…by ensuring a calm climate at the school and by cultivating student-teacher trust" (I. 1); "…advisory role… I may say some things during educative conferences, generally regarding the manner of behaving towards the students… I say these in an advisory manner, based on my experience, but no more, in case I cause reactions against me, not being responsible for this" (I. 4); "…the children who are involved in actions gradually become better students… to dictate that for the students' assessment they have to observe more sides of their personality…" (I. 6); "The learning process is bound to school culture… it is a holistic result… it is the how, why, from whom… I remind them that we are pedagogues, we have an additional role in cooperating with children, we know which children have problems and which families, how this is going to be dealt with, to recognize… learning difficulties and to refer them to the appropriate centers" (I. 5); "…the principal supports the educator's work in order for it to be conducted more conveniently and easily" (I. 3); "…you are responsible, and you prove it to them… the principal covers them, strengthens them, encourages them, rewards them… will provide them with an outlet…" (I. 8); "…contributes, because they are responsible for there to be a calm climate… the colleagues need to feel free in their classes… both the colleagues and students have to feel secure, the colleagues that the principal is going to be the one who will protect them… the principal will give no one the right to, will not allow anyone to speak ill of a colleague; even if the principal has seen something, they can discuss it with the colleague and not allow a student or parent to question the colleague's consistency or ability, just as students should also feel secure… I get in front, I am the leader of the team" (I. 7); "Contributes to the general climate in the school… calmness… composure… the teacher knows that the principal will stand by them… even when they have made a mistake… I will discuss it with them afterwards… in this way I support the culture and mentality of the school in general… everything starts from the principal's magnanimity… when they do not have the experience… feel insecurity, they will not support the teacher…" (I. 9). Two interviewees denied both the existence of a relationship between culture and learning outcomes and the principal's contribution to learning outcomes, claiming that these depend entirely on the educator within the legislative framework of the ministry (I. 2, 10), and indeed characterizing the educational work to a large extent as unreliable (I. 10). The rest of the interviewees acknowledged the relationship between culture and learning outcomes and attributed various roles to the principal in shaping them. More specifically, ensuring calmness within the school (I. 1, 7, 9), a climate of trust between educators and students (I. 1), a pleasant environment (I. 3), and an environment that supports actions which positively influence student performance (I. 6) constitute ways of supporting the educational work and learning outcomes. Considering the educational process to be closely bound to school culture and a product of collaborative procedures (I. 5), the principal's role is advisory, exercised with discretion, with regard to the way teachers treat students (I. 4), to assessment (I. 6), and to the pedagogical role of the educator, who has to view the student as a whole person and not merely in terms of performance (I. 5).
Mainly, though, in order to be left unhindered to perform their task and to succeed, the teacher has to feel a sense of freedom and the security that the principal stands by them every step of the way, as a colleague and fellow traveler rather than an opponent (I. 3, 7, 8, 9): a principal who provides material to the teacher (I. 8, 9), who urges them to train further (I. 5), who is driven by a sense of magnanimity towards the members of the school organism (I. 9), a principal who leads their team (I. 7, 8, 9). A principal, however, who is overwhelmed by inexperience, insecurity and lack of confidence may fail to perform this work effectively (I. 9).

Discussion

Organizational culture constitutes a fundamental characteristic of the educational organism and a driving force for its better functioning, since, on the one hand, it contributes to shaping the character of the organism and the way its members think and, on the other hand, it influences the educators' efficiency and the students' academic performance (Pashiardis, 2014, p. 155); it includes the values, the beliefs, the biases and, in general, the behavior that manifests itself through the procedures of school life (Theophilides, 2012, p. 155), and it influences the efficiency of school units as organisms (Chatzipanagiotou, 2008). It is an expanded notion, mainly in decentralized educational systems. The expansion of the notion within the framework of the bureaucratic Greek educational system constitutes a necessity, as the reorganization of the organization and functioning of the school organism is deemed necessary, that is to say, the reshaping of the organization and functioning of the school, or of its organizational culture (Theophilides, 2012). On a second level, the possibility is offered of comparing the Greek educational reality with European or global standards. The object of the present study was the practices of promoting organizational culture in secondary schools in Attica Prefecture. The aim was to research the principal's and the other stakeholders' roles in the shaping of culture, the ways of promoting culture, and the association between culture and learning outcomes. More specifically, with respect to the role of the principal/leader in shaping school culture, the strong bond between leadership and organizational culture was confirmed, at least at a theoretical level (Balci, 2011; Barnett & McCormick, 2004; Bass & Avolio, 1993; Celikten, 2006; Cotton, 2003; Day, Gu & Sammons, 2016; Hargreaves, Moore, Fink, Brayman & White, 2003; Hofstede, 1998; Liljenberg, 2015; Peterson & Deal, 2011; Turan & Bektas, 2013; Zmuda, Kulkis & Kline, 2004). This bond is directly connected, on the one hand, with the leader's personality and administrative skills (I. 2) (Altınay, 2015; Aydın, 2010; Inandi, Tunc & Gilic, 2013; Morton et al., 2011) and, on the other hand, with the influence and inspiration they exercise on the stakeholders of the school unit (I. 5, 9) (Calik et al., 2012; Turan & Bektas, 2013). Individual practices for promoting culture were observed. More specifically, the principal/leader promotes culture as a developer and coordinator (I. 4, 7) (Altınay, 2015; Aydın, 2010; Morton et al., 2011), as a fashioner of a calm climate (I. 1), as an instigator and facilitator of the materialization of actions (I. 3, 6, 10) (Turan & Bektas, 2013), by being an example (I. 5, 6)
to imitate in general (Tang Keow Ngang, 2011) or by providing examples of teaching excellence (Cavanagh & Dellar, 1997; Godfrey, 2016), by leading students and educators (I. 3, 9) in the spirit of instruction (Turan & Bektas, 2013) or by guiding the school community towards improvement (Heck & Hallinger, 2010), as a paragon of industriousness who is really present, by aiming at the adaptability of the stakeholders of the school organism to change (O'Malley et al., 2015) through training (I. 5) (Cavanagh & Dellar, 1997; Ghamrawi, 2011; Godfrey, 2016; Jurasaite-Harbison & Rex, 2010; Şimşek, 2003), driven by a love for the child and for their work (I. 3), and as a good listener (I. 7), as proof of their communication policy (Schein, 2010). It is concluded that practices of promoting organizational culture were observed which, taken cumulatively, confirm the research reviewed in the present study. A systematic, conscious promotion of culture, however, is not substantiated, a fact connected to the inability to approach culture holistically, in the sense of Schein (1992, 2004, 2017). The reference to the factors limiting the shaping of culture, namely the centralization of our educational system (I. 2, 8) and personal desires (I. 10), confirms, on a first level, the difficulties caused by the bureaucratic way of administering schools in Greece (Pashiardis, 2001; Saitis, 1997) and, by extension, the need to eradicate bureaucracy (I. 8). On a second level, however, the challenge for a principal to shape their internal educational policy under these conditions is pointed out (Leech & Fulton, 2008; Sergiovanni, 1991; Williams, 2009), promoting the development of a system of values and guidance against bureaucratic procedures (Sisman & Turan, 2004). Regarding the role of the others involved in the educational process, namely the educators, students and parents, the defining and prominent role of the principal in the appropriate functioning of the school was universally accepted, together with the small (I. 2, 10) or larger (I. 1, 3, 4, 5, 6, 7, 8, 9) contribution of the other factors, confirming that the leader constitutes a driving force for the success of the organism at the teacher and student level (Creemers & Kyriakides, 2010; Greenberg & Baron, 2013; Yahaya et al., 2010) and pointing out the leader's ability to manage human resources efficiently (Tschannen-Moran, 2004). More specifically, the necessary (I. 1, 6) or to-the-extent-possible (I. 1, 3, 8, 9) participation of parents constitutes a way of shaping organizational culture (Peterson & Deal, 1998) and a factor in efficient organizational culture (Chatzipanagiotou, 2008; Leithwood & Jantzi, 2005). The socioeconomic background and the crisis of values in the modern era as factors preventing the involvement of parents constitute strong observations, which are only indirectly confirmed, through the general difficulty of achieving learning outcomes in schools in deprived areas, which, however, is not deemed impossible (Barth et al., 1999; Carter, 2000; Kannapel & Clements, 2005; Ragland et al., 2002). Emphasis on the creation of a climate of cooperation is observed (I. 3, 5, 8, 9) (Leithwood et al., 2017, 2019), as well as on positive organizational conditions (Darling-Hammond et al., 2010) and positive culture (Lee & Louis, 2019). Regarding the ways of achieving it, the participation of teachers and students, to the extent they are able, in the making of decisions is praised (I. 7)
(Leithwood & Jantzi, 2005), as is the teachers' association's freedom to take initiative (I. 4) (Turan & Bektas, 2013) and the assistance and support of the educators (Chatzipanagiotou, 2008; Jurasaite-Harbison & Rex, 2010; Leithwood, 2010), on the assumption that these practices for the promotion of organizational culture contribute to the maximum performance of the organism (I. 8, 9) (Deal & Peterson, 2016). It is concluded that the ways of stimulating the members of the school community, taken cumulatively, indicate a strategic handling of the promotion of culture. Taken separately, however, they lack validity and again confirm the weakness of a holistic approach to culture.

Recommendations

The current study may form the basis for further research, as organizational culture is a field that has not been researched thoroughly in Greece. In addition, the expansion of organizational culture within the framework of the bureaucratic Greek educational system constitutes a necessity, as the results of this study show the need to reshape the functioning not only of the school organization but of the educational system as well.

Conclusion

The present study concerned the exploration of the ways in which the principals of secondary schools in Attica Prefecture promote organizational culture in school organisms. The concept of organizational culture is complex and multi-faceted; it is a lever for restructuring, improvement and change, and it indicates a philosophy of administering the educational organism, demanding as its foundation a holistic approach to educational affairs, strategic orientation, the development of a system of values and guidance, and leadership rather than administrative procedure, since leadership and organizational culture are in a constant process of mutual interaction and of being defined by exogenous factors. The leader, acting as a paragon of the transmission of common beliefs, values and behaviors, influences and stimulates the members of the school unit to adopt and materialize the goals they have set, urges the members of the school unit towards a developmental trajectory and positive change, and contributes to the success of the school organism. Our study confirms, at least at a theoretical level, the strong bond between leadership and organizational culture. It also confirms the effort to consolidate a cooperative culture and the principal's connection to learning outcomes. It distinguishes itself, though, from other studies, in that the consolidation of culture in the school organism is deprived of strategic handling. The efforts by principals to shape and promote culture or to stimulate the members of the school community are individual and sporadic, lacking systematization, organization and a systematic stratagem for the achievement of the goal, a fact that is owed to the weakness of a holistic approach to the concept, in the sense of Schein (1992, 2004, 2017). Culture has not been realized as the cornerstone for the improvement and restructuring of the organism, but as a supporting factor for the functioning of a standard organism. Also evident is the inflexibility of the bureaucratic Greek educational system, which clearly suffocates a holistic consideration of organizational culture and identifies it with the Organization and Administration of the School Organism. On the other side, though, the need to shape an "inner policy" within the educational organism against the inflexibility of bureaucracy is deemed absolutely necessary.
Limitations

The small sample and the restricted area in which the research was conducted constitute limitations of the study. Nevertheless, the results do not lose their value.
Conventionality and Reality

The debate on the conventionality of simultaneity and the debate on the dimensionality of the world have been central in the philosophy of special relativity. The link between both debates however has rarely been explored. The purpose of this paper is to gauge what implications the former debate has for the latter. I show the situation to be much more subtle than was previously argued, and explain how the ontic versus epistemic distinction in the former debate impacts the latter. Despite claims to the contrary, I conclude that special relativity leaves the debate on the dimensionality of the world underdetermined.

Introduction

Two debates have been central in the philosophy of special relativity (SR): 1. the debate on the conventionality of simultaneity; 2. the debate on the dimensionality of the world. The former debate was sparked by Einstein in 1905; the latter debate was initiated by Minkowski in 1908, a century ago. Einstein believed the notion of simultaneity to be conventional, and not factual; Minkowski considered reality to be fundamentally four-dimensional, and not three-dimensional. Both debates have lingered on to this day, without definite answers. A major contribution to the second debate, in support of Minkowski's claim, came from Rietdijk [37] and Putnam [29]. Call this the RP argument. Yet another argument for the four-dimensionality of the world came from Weingard [46] and Petkov [26, 27]. Call this the WP argument. While these arguments are responsible for the commonly held opinion that SR necessitates a four-dimensional view of reality, neither argument is without problems, as I will show in this paper. Most strikingly, though, the link between both debates has remained largely underexplored. To make matters even worse, whenever the link was explored, radically different conclusions were reached about the way the former debate impacts the latter. According to Weingard [46] and Petkov [26, 27], for example, the conventionality thesis lends further support to Minkowski's claim. Ben-Yami [1] and Cohen [5] disagree, arguing for the opposite thesis, whereas Sklar [42] remains largely uncommitted. The purpose of this paper then is to clarify the current situation by further exploring what implications (if any) the conventionality of simultaneity has for the debate on the dimensionality of the world.

Outline

Section 2 briefly reviews the debate on the conventionality of simultaneity. Section 3 provides a short introduction to the debate on the dimensionality of the world. Section 4 outlines the RP argument, and Sect. 5 raises a number of objections against it. Most important among these is the conventionality objection, according to which the conventionality thesis undermines the RP argument. Section 6 shows the situation to be much more subtle than that, and explains how the ontic-versus-epistemic distinction in the former debate impacts the latter. Section 7 summarises the WP argument, and Sect. 8 briefly mentions the transitivity objection. Section 9 concludes this paper with some final thoughts on the soundness of the RP and WP argument.

The Conventionality of Simultaneity

The claim that distant simultaneity is a conventional notion (as opposed to a factual one) originated in the writings of Poincaré and Einstein, and was further developed by Reichenbach in the 1920s and by Grünbaum in the 1950s [19].
The conventionality thesis can be summarised as follows. Consider two distant events, one at location A in space, the other at location B. To say that both events are simultaneous is to say that they occur at the same time. That is, if an A-clock and a B-clock were placed at the locations A and B respectively, both clocks should indicate the same time. This of course presumes that the clocks have been previously synchronised. Einstein proposed to synchronise them by sending a light signal from A at time t_A, reflecting it at B, and recording its return to A at time t′_A; the B-clock is then set so that the reflection event is assigned the time t_B = t_A + (1/2)(t′_A − t_A).

The Conventionality Thesis

Einstein's procedure however relies on an important assumption: the isotropy of the speed of light. In order to verify the truth of this assertion, the one-way velocity of light would have to be measured. But this requires the use of spatially separated clocks that are already synchronised. As Einstein [11, p. 27] observed: "It would thus appear as though we were moving here in a logical circle." Reichenbach called this the 'velocity-simultaneity circle argument'. Einstein avoided the circularity by assuming the isotropy of the velocity of light without further (experimental) proof. Einstein's definition of distant simultaneity is thus only a convention. Other definitions are possible, according to which t_B = t_A + ε(t′_A − t_A), with ε the Reichenbach synchronisation parameter (0 < ε < 1). The choice ε = 1/2 is called standard synchrony and leads to Einstein's definition of simultaneity. But according to Reichenbach, the choice of ε is completely arbitrary (see Fig. 2). This, in short, is the conventionality thesis of simultaneity.

The Causal Theory of Time

Reichenbach arrived at the conventionality thesis via a different route. In summary, the temporal order for any two spacelike separated events is indeterminate. It is only when a definition of distant simultaneity is introduced by hand (via a conventional choice of ε) that a temporal order between spacelike separated events can be established. But this order merely reflects our choice of ε, rather than being an objective matter of fact.

Malament

The conventionality thesis, it must be said, is not universally accepted. The most influential objection was probably voiced by Malament [20]. According to Norton [22, p. 194], …

What is Real?

One of the central questions in the philosophy of SR is the reality question: is only the present real (presentism), or are the past and future equally real (eternalism)? There are of course other metaphysical positions, such as the view that the past and present are real (possibilism). Also, presentism is an umbrella term, covering a wide range of different views. Depending on which spatiotemporal shape the present takes on, for instance, different flavours of presentism are obtained (Fig. 3). Some of these flavours will be discussed further on. But for the moment, I want to keep the discussion focussed, and will take the present to be a three-dimensional Cauchy hyperplane, spanning the entire spatial extent of the world. Call this the hyperplane present. With that in place, let me briefly unpack the standard presentist and eternalist view.

Presentism

On the presentist view, the present is singled out as a uniquely special moment we call now. Only those events that constitute the present moment are real. Past events are no longer real and future events are not yet real. According to hyperplane presentism, the world, as a consequence, is three-dimensional. Notice also that presentism is a realist thesis [38]: there is an objective, universal fact of the matter as to which events constitute the present moment, whether or not we have epistemic access to it.
That is, the presentist thesis makes an ontological claim about the nature of time, not an epistemological one. In presentism, time is usually assumed to pass: present events disappear into the past as future events come into existence, leading to a succession of presents or a moving now. This dynamic aspect of time is referred to as the passage of time or temporal becoming. Change and temporal becoming are thus taken to be fundamental aspects of reality. The passage of time, however, is not logically entailed by the belief that only the present exists (see [21]). In any case, our focus here is on the reality of events and the dimensionality of the world, not on becoming.

Eternalism

On the eternalist view, all past, present, and future events are equally real and determinate. No special status is accorded to the present moment. The world, as a consequence, is four-dimensional. The eternalist account of time finds a natural representation in the so-called block universe, where all events coexist on an equal footing. From a God's eye point of view-or what Price [28] calls the view from nowhen-every moment of the universe's history is set out, and time no longer flows. Reality, in the words of Black [2, p. 181], is "a timeless web of 'world-lines' in a four-dimensional space."

What is Real?

The difference between presentism and eternalism is thus cashed out in terms of which events are real. For the presentist, the events simultaneous with the here-and-now are real. For the eternalist, all events are real, whether or not they are simultaneous with the here-and-now. But what exactly does it mean to say that a particular event is real? This question has remained largely untouched in the philosophical literature. Two exceptions are Callender [4] and Peterson and Silberstein [25]. Callender asks us to consider a four-dimensional manifold of events, where each event carries a lightbulb that can be on or off. When a lightbulb is on, the corresponding event is real; when the lightbulb is off, the event is not real. Presentism, on this view, holds that only present lights are on, whereas eternalism maintains that all lights are on (Fig. 4).

Reality Values and Relations

Instead of associating a lightbulb with each event, Peterson and Silberstein [25] introduce a reality field R which denotes the ontological status of each event by assigning it a dimensionless reality value or R-value. Since the reality field is a scalar field, all observers agree on the value of the reality field at a particular point of spacetime. Every event, in other words, has a unique R-value, with R = 1 denoting a real event, and R = 0 an unreal event (Fig. 5). This is called the uniqueness criterion. Peterson and Silberstein next introduce a binary reality relation R which holds between any two events having the same R-value. If two events have the same R-value, then they are said to be equally real. This is written as aRb (read: 'event a and event b are equally real' or 'event a is real for event b'). Due to the uniqueness criterion, the relation R is:
1. Reflexive: aRa is true (since a has a unique R-value);
2. Symmetric: if aRb is true, then bRa is true (since a and b share the same R-value);
3. Transitive: if aRb is true, and bRc is true, then aRc is true (since a and c share the same R-value).
This turns R into an equivalence relation.
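As a rough illustration of how a reality field induces an equivalence relation and a partition of events, consider the following minimal Python sketch; the event labels and the particular R-value assignments are hypothetical and are not taken from Peterson and Silberstein.

```python
from collections import defaultdict

# Hypothetical reality field: each event label gets an R-value of 1 (real) or 0 (unreal).
R_value = {"a": 1, "b": 1, "c": 0, "d": 0, "e": 1}

def equally_real(x, y):
    """Binary reality relation: x R y iff both events share the same R-value."""
    return R_value[x] == R_value[y]

events = list(R_value)

# Reflexive, symmetric and transitive -- hence an equivalence relation:
assert all(equally_real(x, x) for x in events)
assert all(equally_real(y, x) for x in events for y in events if equally_real(x, y))
assert all(equally_real(x, z) for x in events for y in events for z in events
           if equally_real(x, y) and equally_real(y, z))

# The relation partitions the set of events into two disjoint classes:
classes = defaultdict(list)
for e in events:
    classes[R_value[e]].append(e)
print(dict(classes))   # {1: ['a', 'b', 'e'], 0: ['c', 'd']}
```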
As a consequence, R provides a partition of the underlying set M into two disjoint equivalence classes: the class of real events and the class of unreal events.

The Presentist Credo

With this in place, we can rewrite the presentist credo that all (and only) present events are real more explicitly. Let M be the set of all spacetime events a, b, . . ., and S the relation of simultaneity among the elements of M. Then aSb is shorthand for 'event a is simultaneous with event b'. If b represents the here-and-now, b is real. That is, R(b) = 1. The present for b consists of all events simultaneous with b. Hence, if aSb holds true, then a is present for b. Following the presentist credo, a is therefore real for b: aSb ⇒ aRb, with R(a) = R(b) = 1. (4) Call this hyperplane presentism.

The Rietdijk-Putnam Argument

Presentism is often said to be closest to our common-sense beliefs and intuitions about time. Putnam [29, p. 240] thus calls it the view of "the man on the street." But with the advent of SR, the presentist position has come under increasing pressure. The relativity of simultaneity, in particular, challenges our presentist intuitions and seems to imply an eternalist picture of time instead. According to Savitt [40], the eternalist account of time is now the most popular among philosophers.

The Rietdijk-Putnam Argument

One of the best-known arguments from SR in favour of eternalism and the four-dimensionality of the world is the so-called Rietdijk-Putnam (RP) argument [29, 37]. The RP argument is a reductio ad absurdum (but see Stein [44, p. 17]). Rietdijk and Putnam start from the presentist doctrine according to which all (and only) present events are real and determinate (future and past events being indeterminate) and proceed to show the untenability of this position in light of SR. The argument relies on the well-known relativity of simultaneity: for any event that is future with respect to one observer, there always is a second observer (simultaneous with the first) for whom that event is present and hence (following the presentist credo) real. But surely-the argument continues-if an event is real for one observer, it has to be real for all observers. Thus, Putnam [29, p. 242] concludes: "future things (or events) are already real." The same can of course be said for past events, implying that future and past events are real after all. This refutes presentism, and confirms eternalism. Let us go through the argument in a bit more detail. Consider the set M of spacetime events a, b, . . ., and let S and R be the relations of simultaneity and reality as defined above. Now, let a and b be two events on the worldline of an inertial observer O1 such that a chronologically precedes b (Fig. 6). Consider a second observer O2 with an event c on her worldline that is spacelike separated from both a and b, such that: (i) At a, c is present relative to O1 and is therefore real for O1; (ii) At c, b is present relative to O2 and is therefore real for O2. Due to the transitivity of the relation 'is real for', it follows from (i) and (ii) that: (iii) At a, b is real for O1. But b is in the chronological future of a. Hence, on a presentist reading: (iv) At a, b is not (yet) real for O1. [Fig. 6: The Rietdijk-Putnam argument.] A contradiction arises between (iii) and (iv), thereby refuting (hyperplane) presentism and establishing the eternalist worldview instead.
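The relativity-of-simultaneity premises (i) and (ii) can be checked numerically. The following minimal Python sketch is only an illustration: the event coordinates and the velocity of O2 are hypothetical choices (in units with c = 1, one spatial dimension, and standard synchrony), not values taken from Rietdijk or Putnam.

```python
import math

def boost(event, v):
    """Lorentz-boost a (t, x) event into the frame of an observer
    moving with velocity v in the lab frame (units with c = 1)."""
    t, x = event
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return (gamma * (t - v * x), gamma * (x - v * t))

def simultaneous(e1, e2, v, tol=1e-9):
    """Standard-synchrony (epsilon = 1/2) simultaneity relative to an
    observer moving with velocity v in the lab frame."""
    return abs(boost(e1, v)[0] - boost(e2, v)[0]) < tol

# Events on O1's worldline (O1 at rest in the lab frame):
a = (0.0, 0.0)          # the here-and-now of O1
b = (1.0, 0.0)          # lies in the chronological future of a on O1's worldline
# O2 moves with velocity v relative to O1; c lies on O2's worldline,
# spacelike separated from both a and b.
v_O2 = -0.5
c = (0.0, 2.0)

print(simultaneous(a, c, 0.0))    # True:  c is present relative to O1 at a
print(simultaneous(c, b, v_O2))   # True:  b is present relative to O2 at c
print(simultaneous(a, b, 0.0))    # False: b is not present relative to O1 at a
```

Both premises come out true while b still lies in a's chronological future, which is exactly the combination the transitivity step then exploits.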
Rewriting the RP argument in shorthand notation yields: bSc ∧ cSa ⇒ bRc ∧ cRa ⇒ bRa, whereas on a presentist reading ¬bRa, since b lies in the chronological future of a.

Against Rietdijk-Putnam

Although the RP argument claims to have settled the debate on the side of eternalism, a number of important objections have been raised against it, exposing different fallacies in the RP argument. I mention two objections, and will concentrate on the second.

The Transitivity Objection

The most common objection focusses on the transitivity of the relation 'is real for'. For-the objection runs-the present in SR is a relative (frame-dependent) notion. What is present for O1 need not be present for O2. And since the reality of events is tied up with their being present, reality itself is bound to be relativized. What is real for O1 need not be real for O2. The non-transitivity of R follows directly from the non-transitivity of S. Just as bSc and cSa in Fig. 2 do not imply that bSa, so bRc and cRa do not imply that bRa. To see this, recall that the relation of simultaneity in SR is a ternary (three-place) relation among two events and a given reference frame. Two events are only simultaneous with one another relative to some observer. When this is taken into account, the non-transitivity of S across observers is immediately obvious: bSc relative to O2 and cSa relative to O1 do not jointly imply bSa relative to either observer. The flaw in the RP argument-so the objection goes-is that R is taken to be a binary (two-place) relation among events, and not a ternary one such as S, in which case bRc relative to O2 and cRa relative to O1 no longer entail bRa. By making R observer-dependent, there no longer is one reality, but a plurality of (observer-dependent) realities [3]. Such relativisation of existence gives rise to an ontological pluralism, as exemplified in relativized presentism (Fig. 3).

The Conventionality Objection

A second objection, based on the conventionality of simultaneity, recently appeared in a paper by Ben-Yami [1]. According to the conventionality thesis, the temporal order for spacelike separated events is indeterminate (see §2). Hence, since c is spacelike separated from a in Fig. 6, it cannot be maintained that c is present relative to O1 at a. Similarly, since b is spacelike separated from c, it cannot be maintained that b is present relative to O2 at c.

Premises and Conclusion

Notice that the RP argument falls apart under both objections, but for different reasons [1]. According to the transitivity objection, the conclusion (iii) does not follow from the premises (i) and (ii). According to the conventionality objection, the argument does not even get off the ground since both premises (i) and (ii) are considered false, rendering the argument unsound. Whereas the first objection questions the validity of the RP argument, the latter objection questions its soundness.

Weingard and Sklar

The conventionality objection is certainly not new, despite Ben-Yami's claim to the contrary. Weingard [46] and Sklar [42] were among the first to apply the conventionality thesis to the RP argument. More recently, Dieks [8] and Cohen [5] have done so as well.

If we now associate the real (for an observer) with the simultaneous for him, we must, accepting the conventionality of simultaneity, accept as well a conventionalist theory of 'reality for'. It is then merely a matter of arbitrary stipulation that one distant event rather than another is taken as real for an observer. Now there is nothing inconsistent or otherwise formally objectionable about such a relativized notion of 'reality for', but it does seem to take the metaphysical heart out of the old claim that the present had genuine reality and the past and future lacked it.
For what counts as the present is only a matter of arbitrary choice, and so then is what is taken as real.

Ontic or Epistemic?

In deciding whether the conventionality objection referred to above has any strength, one first has to decide whether the conventionality of simultaneity is an issue of ontology or epistemology.

Ontic or Epistemic?

On an ontic reading of the conventionality thesis, the relation of distant simultaneity is conventional, as opposed to factual, because this relation does not exist in the objective world. "[I]t is because no relations of absolute simultaneity exist to be measured that measurement cannot disclose them", argues Grünbaum [15, p. 456]. On an epistemic reading of the conventionality thesis, on the other hand, the relation of distant simultaneity is conventional because it is unverifiable. Even if the relation of distant simultaneity really exists, we nevertheless fail to have epistemic access to it, and are thus forced to treat this notion in a conventional manner.

Agnostic or Epistemicist?

With respect to the epistemic reading of the conventionality thesis, it is worth distinguishing two further positions. The agnostic is non-committal about the possible existence of distant simultaneity. The ε-epistemicist, on the other hand, is convinced that there is "a fact of the matter as to which distant events are 'really' simultaneous with a given event", even though we cannot measure it empirically. That is, the Reichenbach ε-parameter has a determinate value, but due to the velocity-simultaneity circle argument (referred to above, see §2), there is no way for us to determine its value. I call this position ε-epistemicism, borrowing the term from debates on vagueness.

Ontic Impact

On an ontic reading of the conventionality thesis, the conventionality objection referred to above certainly applies. After all, if distant simultaneity does not belong to the ontological furniture of the world, then clearly premises (i) and (ii) are without substance. Sklar [42, p. 135], for instance, takes the simultaneity of distant events to be "irrealist." We are of course free to introduce such a notion by choosing a particular value for ε. But, argues Sklar, if every choice of ε "can explain equally well all the hard data of experience, why should we take the accounts as differing at all in the real features they attribute to the world?" There is, in other words, "no fact of the matter at all about which distant events are 'really' simultaneous with a given event". Ben-Yami [1, p. 278] agrees that the definitions of distant simultaneity "do not express any objective temporal order between [spacelike separated] events." But "if simultaneity is purely conventional and lacks metaphysical significance," Dieks [8, pp. 618-619] continues, "there is obviously no reason to suppose that simultaneous events share a special "reality-property", so that the Rietdijk/Putnam argument seems to become a non-starter." Cohen [5, p. 46], finally, concurs that "since simultaneity between spatially separated events is merely conventional and not an objective constituent of reality", the premises (i) and (ii) above are "devoid of physical import."

Point Presentism

Granting that the ontic interpretation of the conventionality thesis undermines the RP argument, where does it leave us with regard to the debate on presentism and eternalism? If there is no such thing as distant simultaneity of events, it would seem that the present gets reduced to the here-and-now of each observer.
And if we accept the presentist credo that all that exists, exists presently, then reality itself would get reduced to a single point (Fig. 3). This was called point presentism by Harrington [16]. The problem, according to Stein [44, p. 18], is that it leads to "a peculiarly extreme (but pluralistic!) form of solipsism." Not everyone has reached this conclusion though. Weingard [46], for instance, while agreeing that the conventionality thesis undermines the RP argument, offers a new argument, based on the conventionality thesis, in support of eternalism (see §7).

Epistemic Impact

Let us first turn to the epistemic interpretation of the conventionality thesis and its consequences for the RP argument. Here the situation becomes more subtle (Fig. 7). Agnostics cannot judge the soundness of the RP argument since they are undecided whether distant simultaneity really exists. The ε-epistemicists, on the other hand, can go both ways. If they assume that ε has a fixed value, different from 1/2, then the conventionality objection fails, and the RP argument nevertheless goes through. To see that, compare Figs. 6 and 8. Rietdijk and Putnam both assume standard synchrony with ε = 1/2, leading to the familiar hyperplanes of simultaneity which are orthogonal to the worldlines of the observers (Fig. 6). But suppose now that ε had a different value in reality, say ε = 1/4. In that case, spacetime would be foliated into one-sheeted hypercones of simultaneity (Fig. 8). Yet, despite such a different foliation, the relativity of simultaneity still holds true, and the RP argument goes through unaffected. One problem with the hypercones is that the notion of intrasystemic simultaneity is no longer symmetric and transitive, and thus no longer an equivalence relation. Although c is simultaneous with a in Fig. 8 (cSa), for example, a is not simultaneous with c (¬aSc). It is customary therefore to make ε direction-dependent (with a choice of ε = 1/4 to the right implying 1 − ε = 3/4 to the left, as explained by Dieks [9]). This leads to a foliation of Minkowski spacetime into hyperplanes, rather than hypercones, which are not orthogonal to the time axis. Even so, the relativity of simultaneity still holds true, and the RP argument applies (Fig. 9: the Rietdijk-Putnam argument with direction-dependent ε). However, since the choice of ε is conventional, nothing prevents the epistemicist from making ε observer-dependent as well. That way, a notion of absolute simultaneity can be reintroduced, in which case the RP argument obviously fails (Fig. 10). Neo-Lorentzian interpretations of SR, in particular, subscribe to this position (see for instance Craig [6], Craig and Smith [7]). The threat of nonlocality has also led some Bohmians to introduce a preferred foliation of spacetime [10].

The Weingard-Petkov Argument

Another argument from SR for the four-dimensionality of the world is the Weingard-Petkov (WP) argument, which was first proposed by Weingard [46] and has since been advocated by Petkov [26, 27]. Whereas the RP argument relies on the relativity of simultaneity, the WP argument relies on the conventionality of simultaneity. Weingard [46] takes issue with the RP argument for two reasons. First, RP use the relation of distant simultaneity S to partition Minkowski spacetime into past, present and future. But the relation of distant simultaneity is frame-dependent, and hence not relativistically invariant.
Two observers in relative motion will carve Minkowski spacetime differently, and so won't agree on what is past, present or future. Hence, according to Weingard, our ontology should never be based on frame-dependent concepts, but always on invariant ones. Secondly, Weingard was the first to raise the conventionality objection, as described in §5. Even for one and the same observer, what is past, present and future in the absolute elsewhere is conventional, and hence devoid of ontological significance. Despite this fact, Weingard also uses the conventionality of simultaneity constructively to give a modified argument in support of eternalism. [Fig. 10: The Rietdijk-Putnam argument with observer-dependent ε.]

Topological Simultaneity

Consider the set M of spacetime events a, b, . . ., and let b represent the here-and-now. By carefully choosing ε, any event in the absolute elsewhere of b can be considered simultaneous with b, and hence present. The present for b, in other words, is just the absolute elsewhere of b-a spatially extended bowtie-shaped region (Fig. 3). It contains all events that are causally non-connectible to b, and hence (in the words of Reichenbach and Grünbaum) topologically simultaneous with b. Contrary to the (standard ε = 1/2) hyperplane present for b, the bowtie present for b is relativistically invariant. It neatly partitions Minkowski spacetime into an absolute present (b plus the elsewhere of b), absolute future (upper lightcone of b) and absolute past (lower lightcone of b).

Bowtie Presentism

Let A be the relation among the elements of M where A stands for 'is in the absolute elsewhere of'. Then aAb is shorthand for 'event a is in the absolute elsewhere of event b'. Since b represents the here-and-now, b is real. The present for b consists of all events topologically simultaneous with b. Hence, if aAb holds true, then a is present for b. Following the presentist credo that all (and only) present events are real, a must be real for b: aAb ⇒ aRb. (7) This position was dubbed bowtie presentism by Gilmore et al. [13].

The Weingard-Petkov Argument

The WP argument, in essence, is just the RP argument, but using (7) instead of (4) to gauge what is real (Fig. 11): (i) c lies in the absolute elsewhere of a (cAa) and is therefore, by (7), real for a; (ii) b lies in the absolute elsewhere of c (bAc) and is therefore, by (7), real for c; (iii) by the transitivity of R, b is real for a; (C) yet b lies outside the absolute elsewhere of a, so that, on a bowtie-presentist reading, b is not real for a. Once again, a contradiction arises in (C), thereby refuting bowtie presentism and establishing eternalism.

The Transitivity Objection

Although the conventionality objection does not apply to the WP argument, the transitivity objection still applies. For even the bowtie present is a relative notion. The bowtie present for a in Fig. 11, after all, is different from the bowtie present for c. Hence, if the reality of events is tied up with their being present, then what is real for a need not be real for c. Here again, the non-transitivity of R follows directly from the non-transitivity of A. That is, starting from the fact that bAc ∧ cAa does not imply bAa (8), and applying (7), one obtains that bRc and cRa do not jointly entail bRa, in contradiction with premise (iii) in the above WP argument.

Reality Relations

If the transitivity objection holds true, then why do Peterson and Silberstein [25] uphold the transitiveness of the reality relation R, as we showed in §3? The reason, quite simply, is that Peterson and Silberstein force R to be transitive by requiring every spacetime event to have a unique R-value. This uniqueness criterion "seems intuitive" enough, write Peterson and Silberstein [25, p. 212], "since an event with an R-value of both 1 and 0, on our scheme, would be both real and unreal, which would be a contradiction." But intuitions are not always the most reliable guide to ontology.
Perhaps an event can have an R-value of both 1 and 0, depending on which point of view one considers. To a bowtie presentist, for instance, the event b in Fig. 11 is real for c, but unreal for a. Although a rejection of the uniqueness criterion thus leads to a relativization of existence, "there doesn't seem to be anything very objectionable a priori about this", dixit Sklar [43, p. 296]. The question whether the reality relation R is transitive or not thus remains very much open.

Conclusions

The purpose of this paper was to explore the link between two major debates in the philosophy of SR: the debate on the conventionality of simultaneity and the debate on the dimensionality of the world. The focus here was on the RP and WP arguments. Both arguments claim that SR necessitates an eternalist, four-dimensional view of reality. According to Weingard, Sklar, Ben-Yami and others, the conventionality of simultaneity undermines the RP argument. I have shown the situation to be more subtle than that and have argued that the way in which the conventionality thesis impacts the RP argument depends on whether it is an ontological or epistemological thesis. If it is an ontological thesis, the RP argument cannot be saved. But on certain epistemicist positions regarding distant simultaneity, the RP argument is unaffected by the conventionality objection (Fig. 7). Even then, both the RP and WP argument remain subject to other objections, the transitivity objection being just one example. Here, the soundness of both arguments hinges on our interpretation of reality, and in particular on the alleged transitivity of the reality relation R. Since this relation does not belong to the formalism of SR, SR alone cannot answer the reality question. Indeed, despite claims to the contrary, SR leaves the debate on the dimensionality of the world underdetermined. What is needed in order to answer the reality question are additional metaphysical assumptions and presuppositions, which fall outside the scope of SR. This conclusion beautifully resonates with Sklar [41, pp. 272-275]: "[S]pecial relativity throws novel light on the philosophical questions, but it is unable by itself to resolve fully the long-standing philosophical issues. […] The science can change the philosophy and put the dispute in a new perspective, but it cannot resolve the dispute in any ultimate sense."
Evaluation of fungicides for management of boxwood blight caused by Calonectria spp. under field conditions in Northern Germany Fungicide protection is the current approach for management of boxwood blight caused by Calonectria pseudonaviculata (Cps) and C. henricotiae (Che). However, published studies evaluating fungicides under field conditions have been focused on Cps. The objective of this study was to evaluate fungicides in Northern Germany where both Cps and Che were present. Three trials were conducted between 2006 and 2016. In 2006, plants were artificially inoculated with a conidial suspension, while infested soil and plant debris were added to a different field as inoculum for the 2012 trial and this field was used again without further addition of inoculum in 2015. Fungicides were applied one to five times and assessments were done three to six times, depending upon the trial. The highest level of disease severity occurred in 2015 (0.91), while the lowest occurred in 2012 (0.01). Among the fungicides evaluated in 2006, preventive sprays of Cercobin FL, Switch, Harvesan, Pugil 75 WG, Dithane NeoTec and Euparen M WG were most effective, with blight control above 65%. In 2012, all treatments including Askon, Cabrio Top, Malvin WG, Dithane NeoTec and Osiris showed > 75% blight control. In 2015, Bayer Rosen-Pilzfrei Baymat and Switch were the most effective (> 82%). Extended in-season blight control was also observed with some fungicides. Additionally, a few fungicides that were evaluated in more than 1 year showed reduced effectiveness over time. This study filled several major knowledge gaps especially regarding fungicide efficacy against Che under field conditions and thus provides crucial information for developing chemical control strategies. Boxwood blight caused by Cps has spread throughout Europe (Palmer and Shishkoff 2014;LeBlanc et al. 2018;Daughtrey 2019), parts of Asia (Akilli et al. 2012;Mirabolfathy et al. 2013), New Zealand (Ridley 1998) and North America (Ivors et al. 2012;Elmhirst et al. 2013), jeopardizing the supply chain of a major evergreen ornamental plant (Hall et al. 2021). In the USA alone, this disease has spread to over 30 states, putting at risk more than 90% of the nation's boxwood production (Hall et al. 2021). This pathogen threatens boxwood crops, plantings and forests globally (Barker et al. 2022). The other species, Che, has been reported for a few European countries, including Germany, Belgium, the UK, Slovenia, The Netherlands (Gehesquière 2014) and The Czech Republic (Bartíková et al. 2020). It is likely that Che could spread and establish outside Europe. This pathogen may pose a more severe threat to the global boxwood industry if it spreads to other countries, due to its greater tolerance of high temperature and reduced sensitivity to some of the most efficacious fungicide groups (Gehesquière et al. 2016). The current management approach largely depends upon the use of tolerant cultivars and fungicide applications. Although a handful of cultivars are less susceptible to boxwood blight, no cultivar is completely resistant to the disease (Brand et al. 2022;Kramer et al. 2020;LaMondia and Shishkoff 2017;Yoder et al. 2022). In order to maintain a broad spectrum of economically important Buxus species, varieties and cultivars, disease management is heavily dependent upon chemical control strategies. While fungicide evaluation studies against Cps have been ample, studies pertaining to its sister species, Che, have been limited. 
Several fungicides tested in vitro, alone or in combination, have been shown to work against different stages of the Cps life cycle (Brand 2006; Henricot et al. 2008; LaMondia 2014). Brand (2006) found that prochloraz, propiconazole, thiophanate-methyl and carbendazim + flusilazole inhibited mycelial growth, whereas tolylfluanid, mancozeb, chlorothalonil and fludioxonil + cyprodinil were effective against conidial germination. Later, Henricot et al. (2008) reported that prochloraz, carbendazim, kresoxim-methyl and epoxiconazole + pyraclostrobin + kresoxim-methyl were effective in inhibiting mycelial growth while azoxystrobin, chlorothalonil, kresoxim-methyl, mancozeb, boscalid + pyraclostrobin, epoxiconazole + pyraclostrobin and epoxiconazole + pyraclostrobin + kresoxim-methyl limited conidial germination. Similarly, LaMondia (2014) concluded that propiconazole, triflumizole, tebuconazole and fludioxonil + cyprodinil were effective against mycelial growth but not conidial germination, highlighting efficacy against Cps under in vitro conditions. One field study conducted in the UK found propiconazole + pyraclostrobin, myclobutanil + fludioxonil, kresoxim-methyl and chlorothalonil to have greater efficacy against Cps (Henricot and Wedgwood 2013). The major gap in knowledge is with regard to fungicide efficacy against Che under field conditions. Considering its lower sensitivity to tetraconazole and kresoxim-methyl compared to Cps observed in vitro and in vivo (Gehesquière et al. 2016), evaluating fungicides against Che under field conditions is long overdue. The objective of this study was to fill this knowledge gap by evaluating fungicides with 19 different active ingredients from 9 different Fungicide Resistance Action Committee (FRAC) groups in fields where Cps and Che were both present (Brand et al. 2022). Results from the study will enable growers and landscapers to better manage boxwood blight.

Materials and methods

All field trials were conducted at the Research and Teaching Institute for Horticulture Bad Zwischenahn, Germany. Buxus sempervirens 'Suffruticosa' was used as the test plant in all trials due to its high susceptibility to boxwood blight. Three trials were performed in 2006, 2012 and 2015 (Table 1). Each trial started with a new planting of about 2-year-old Buxus sempervirens 'Suffruticosa', propagated by cuttings, with a height of about 10 cm, in 0.5 l pots. Planting dates are listed in Table 2. Each treatment had three (2006) or four (2012 and 2015) replicate rows with ten plants per row (with 15 cm planting distance in the row and 50 cm between rows, resulting in a row length of 1.5 m and a plot size of 0.75 m²), and they were arranged in a randomized complete block (RCB) design within each trial.

Fungicide efficacy field trials: inoculation and treatments

Plants of the 2006 trial were inoculated on 14 September by spraying a conidial suspension of Cylindrocladium pseudonaviculatum at 10⁴-10⁵ conidia/mL until runoff (150-200 mL/m²) using a handheld pressure sprayer. After inoculation, the entire field was covered for 36 h with white polyethylene film to assure sufficient leaf wetness duration for infection. The 2006 trial evaluated nine fungicides (Table 1) compared to a nontreated control. All test fungicides were applied once before inoculation to determine their performance as protectants, while seven were also applied once 2 days after inoculation to determine their curative effects (Table 2).
All fungicide applications were made with a backpack sprayer with a water volume of 100 mL/m² at the application rates mentioned in Table 1. At the end of the trial, fallen leaves and infected plants were shredded and incorporated into the topsoil to acquire an area contaminated with the pathogen for succeeding trials (Brand et al. 2022). The 2012 trial was conducted in a nearby field, where plants were inoculated by scattering infested soil and fallen leaves collected from an infested field where blight trials had been performed since 2006 (Brand et al. 2022). Inoculum was added to the trial field on August 20, 2012, immediately after the first treatment with fungicides earlier the same day (Table 2). This trial included five fungicides (Table 1) and a nontreated control. Each fungicide was applied to the same plants twice. Due to slow disease development, the field was sprayed with water and covered with white polyethylene film from 9 to 10 September to improve infection conditions. The 2015 trial was done in the same field as the 2012 trial, evaluating six fungicides (Table 1) compared to a nontreated control. No inoculation was made, but inoculum was present in organic debris from earlier studies. Each fungicide was applied five times at 2-week intervals starting 15 July (Table 2).

Disease assessment

Disease severity in each replicate row was assessed by estimating the percentage of area with leaf spots and fallen leaves, recorded as an estimate of the percentage of total leaf area affected. Mean severity for each replicate row was then calculated and expressed as a proportion between 0 and 1. Three assessments were made in 2006 and 2012, while six were made in 2015 (Table 2).

Comparative analysis of fungicides evaluated in multiple years

To strengthen the comparative analysis of fungicide performance across several years, results from two additional trials that are presented in supplemental Figs. 4 and 5 were included in the analyses. In 2015, a separate field trial (referred to as 2015_a) was conducted evaluating three fungicide products: Delan Pro (dithianon + potassium phosphate), Geoxe (fludioxonil) and Switch (fludioxonil + cyprodinil). Planting for the 2015_a trial was done on 22 August and each fungicide was applied twice, on 21 August and 31 August; three disease assessments were made. An additional trial was done in 2016 evaluating Delan Pro (dithianon + potassium phosphate), Sunjet Flora (azoxystrobin + isopyrazam), Dagonis (fluxapyroxad + difenoconazole), Ceriax (fluxapyroxad + epoxiconazole + pyraclostrobin), Switch (fludioxonil + cyprodinil) and Geoxe (fludioxonil). Planting was done on 19 August with two treatment applications (18 August and 1 September) and two disease assessments (23 September and 10 October).

Data analysis

All disease severity data were analyzed using PROC GLIMMIX in SAS (v9.4; Statistical Analysis Software, Cary, NC). Analysis of variance was used to test for the overall effect of treatment, disease assessment time and their interaction on disease severity, with a 'beta' distribution specified in the model statement. Due to the presence of 0s (zeros) and 1s (ones) in the disease severity data, severity values were adjusted using the following equation (Smithson and Verkuilen 2006): y′ = [y(N − 1) + 0.5]/N, where y′ is the new disease severity value; y is the original disease severity; and N is the number of observations or the sample size. This resulted in the 0s becoming a very small decimal and the 1s becoming very close to 1.
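As a rough illustration of the two calculations used in this study, the short Python sketch below applies the severity adjustment described above and the percent-efficacy formula of Paul et al. (2007) referred to in the next section; it is not the authors' SAS code, and the sample size and severity values are hypothetical.

```python
def adjust_severity(y, n):
    """Shrink proportions away from exact 0 and 1 before beta-model fitting
    (Smithson & Verkuilen 2006)."""
    return (y * (n - 1) + 0.5) / n

def percent_efficacy(control_severity, treatment_severity):
    """Blight control of a treatment relative to the nontreated control
    (Paul et al. 2007)."""
    return (control_severity - treatment_severity) / control_severity * 100.0

# Hypothetical example: n observations, raw severities containing 0s and 1s.
n = 200
raw = [0.0, 0.16, 0.91, 1.0]
adjusted = [adjust_severity(y, n) for y in raw]
print(adjusted)        # 0.0 -> 0.0025 and 1.0 -> 0.9975 for n = 200

# Hypothetical control vs. treatment severities:
print(round(percent_efficacy(0.85, 0.15), 1))   # ~82.4% blight control
```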
The effect of the fungicides on disease severity for each assessment date was modeled in a similar manner as described above, with treatment as a fixed and replicate as a random effect. Treatment LSMEANS were estimated using the maximum likelihood approach, while differences among treatments were compared using t-grouping. Later, data from the model scale were converted to the original scale using an 'ilink' option. Fungicide efficacy of each treatment was calculated as described by Paul et al. (2007): %Efficacy = [(Severity of untreated control − Severity of treatment) / Severity of untreated control] × 100. The higher the percentage, the better the treatment efficacy. Treatment efficacy among assessment dates within a given year was compared using t-grouping. A nonparametric Student's t test was used to compare efficacy of fungicides tested across trials/years where applicable.

Significance of treatment and assessment time

Boxwood blight was observed in all field trials, with the highest blight severity in the nontreated control ranging from 0.16 in 2012 to 0.91 in 2015. There was a significant effect of treatment and disease assessment time on blight severity in all trials (Table 3). Similarly, the treatment × time effect was significant for all years except 2006.

Fungicide performance

Fungicide performance close to 2 weeks after the last treatment is presented here, while data are provided in supplemental materials.

The 2006 trial

A clear distinction in the proportion of blighted leaves was seen among protective and curative sprays (Fig. 1). Six fungicides applied protectively, including Cercobin FL (thiophanate-methyl), Switch (fludioxonil + cyprodinil), Harvesan (carbendazim + flusilazole), Pugil 75 WG (chlorothalonil), Dithane NeoTec (mancozeb) and Euparen M WG (tolylfluanid), were the most effective, with the proportion of blighted leaves in plants below 0.01. In contrast, the proportion of blighted leaves in the control and the corresponding curative treatments was at least twice as high. Percentage blight control achieved by those six preventive sprays remained above 65% even 18 days after the treatment application (Table 4). Curative sprays of Mirage 45 EC (prochloraz) and Ortiva (azoxystrobin) were the least effective, with blight control ranging between 17 and 19%. Tilt 250 EC (propiconazole) was not effective as a curative or a protectant.

[Figure caption fragment: the other two assessments are presented as supplemental materials. Each column represents the mean of three replicates and is topped with a standard error bar. Columns topped with a shared letter(s) did not differ according to t-grouping at α = 0.05.]

The 2015 trial

Of the six treatments tested, Bayer Rosen-Pilzfrei Baymat (tebuconazole) and Switch (fludioxonil + cyprodinil) were the most effective, with the proportion of blighted leaves below 0.20, while the proportion of blighted leaves in the nontreated control remained above 0.85 (Fig. 3). The corresponding efficacy 18 days after the last treatment was 82 and 87%, respectively (Table 4). Products that were least effective, with a blight proportion above 0.60, included Duaxo Universal Pilzfrei (difenoconazole), Pilzfrei Ectivo (myclobutanil), Polyram WG (metiram) and Ortiva (azoxystrobin).

Extended efficacy after the last treatment

While maximal fungicide protection was observed within the first 2 weeks of the last treatment, most products retained their efficacy throughout the trial.
For example, in 2006, the efficacy of preventive applications of Cercobin FL, Switch, Harvesan, Pugil 75 WG, Dithane NeoTec and Euparen M WG increased from 18 to 38 days after the last treatment, with blight control ranging between 87 and 90% (Fig. 4). While the blight control efficacy of Euparen M WG and Dithane NeoTec was reduced significantly 71 days after the last treatment application, the percentage blight control of Cercobin FL, Switch, Harvesan and Pugil 75 WG did not change (Fig. 4). Similarly, in 2012, while the percentage blight control of Osiris and Cabrio Top increased significantly from 7 to 17 days after the last treatment, their efficacy did not alter 25 days from the last treatment (Fig. 5). Other fungicides, including Askon, Malvin WG and Dithane NeoTec, provided consistent blight control throughout the trial, ranging between 47 and 84% (Fig. 5). In 2015, the performance of Bayer Rosen-Pilzfrei Baymat and Switch increased significantly from the first to the second disease assessment time, with blight control ranging between 79 and 83%. These products continued to provide blight protection until the final assessment (Fig. 6).

Fig. 2 Proportion of blighted leaves of Buxus sempervirens 'Suffruticosa' with and without fungicide treatments in 2012. Shown is the second of three assessments (10 September, 20 September, 28 September); the other two are presented as supplemental materials. Each column represents the mean of four replicates and is topped with a standard error bar. Columns topped with a shared letter did not differ according to t-grouping at α = 0.05.

Fig. 3 Proportion of blighted leaves of Buxus sempervirens 'Suffruticosa' with and without fungicide treatments in 2015. Shown is the fourth of six assessments (12 August, 25 August, 8 September, 25 September, 9 October and 30 October); the other five are presented as supplemental materials. Each column represents the mean of three replicates and is topped with a standard error bar. Columns topped with a shared letter(s) did not differ according to t-grouping at α = 0.05.

Discussion

This study provided field-based evidence that a number of fungicides were effective against both pathogens, Cps and Che, under varying levels of disease pressure in northern Germany. Among the most effective products with a single active ingredient were Malvin WG (captan), Bayer Rosen-Pilzfrei Baymat (tebuconazole) and prophylactic sprays of Pugil 75 WG (chlorothalonil), Cercobin FL (thiophanate-methyl), Dithane NeoTec (mancozeb) or Euparen M WG (tolylfluanid). Likewise, effective products with two active ingredients included Askon (difenoconazole + azoxystrobin), Cabrio Top (metiram + pyraclostrobin), Osiris (epoxiconazole + metconazole), Switch (fludioxonil + cyprodinil) and Harvesan (carbendazim + flusilazole). These active ingredients belong to FRAC groups 1, 3, 9, 11, 12, M 03, M 04, M 05 and M 06, offering a variety of options for scheduled spray programs against boxwood blight and for product rotation to slow fungicide resistance development. This study also demonstrated extended in-season blight control by some fungicide products. Additionally, it showed decreasing performance of some fungicides over time. The study confirmed the efficacy of fungicide products previously evaluated against the blight pathogens. For example, chlorothalonil and cyprodinil + fludioxonil, which were earlier shown to provide control against Cps (Cinquerrui et al. 2017; Henricot and Wedgwood 2013), were among the most effective products in our study.
Cyprodinil + fludioxonil (Switch) treatment was significantly better than controls when tested over multiple years under our field conditions. More importantly, this study expanded our understanding of product performance from Cps to Che. For example, azoxystrobin and propiconazole, reported to be effective against Cps (Henricot et al. 2008;LaMondia 2014), were ineffective against the combination of the two blight pathogens in our study, which is in good accordance with the in vitro study of Gehesquière et al. (2016). Another active substance from the azoles (FRAC Group 3), myclobutanil (Pilzfrei Ectivo), also had limited effectiveness in our study. This might be due to the dominance of Che at the research site (Brand et al. 2022). These results highlight the importance of monitoring for the presence or dominance of Che when selecting fungicides for blight mitigation. Our findings demonstrated that preventive applications were more effective than curative applications. In 2006, preventive sprays were significantly more effective than curative treatments of the same fungicides. In subsequent trials, the first spray was made prophylactically followed by succeeding sprays, making it harder to make comparisons between preventive and curative approaches. However, our results reflect that fungicides with greater activity are best applied prophylactically prior to infection. This result is in agreement with the observation of LaMondia (2015) that fungicide products containing propiconazole, thiophanate-methyl, pyraclostrobin, fludioxonil and kresoxim-methyl, applied as protectants, had the best efficacy. In practice, however, it is very difficult to apply treatments immediately prior to infection because predictive models of infection and microclimatic influences on infection-especially temperature and humidity as seen in other pathosystems (Chaulagain et al. 2020;De Wolf et al. 2003)-are not always available and applicators are not always able to make treatments on short notice. As a result, periodic treatment with fungicides currently is a common practice in nurseries and landscapes. Our preventive + curative treatment applications appear to have worked even better to limit blight spread under higher disease pressure condition. For example, in 2015, five applications of tebuconazole and fludioxonil + cyprodinil, applied at 2-week intervals, were sufficient to control blight severity by 50-87%, when maximum blight severity in that year was very high (severity proportion > 0.85). The preventive-only treatments, including Opera (pyraclostrobin + epoxiconazole) and Bravo (chlorothalonil), however, were not as effective against boxwood blight under high infection pressure as they were under low infection pressure (Henricot and Wedgwood 2013). Besides disease pressure and treatment time, other factors such as a fungicide's mode of action, application method, pathogen populations and their fungicide sensitivities, infection site, dose, active ingredient and the formulation of the fungicide, in addition to weather, may play an important role in determining the extent of fungicide efficacy. The observation of extended in-season performance of some fungicides was not expected, but it is intriguing and potentially of practical importance. It raised a set of important questions. Are the recommended treatment intervals set too short for these products? What additional studies are needed to re-evaluate current recommendations on treatment interval? 
And how might the extended performance observed be related to the number of treatment times? Any new recommendations on extending treatment intervals would reduce the number of treatments required per growing season, saving growers both chemical and labor cost and curtailing their environmental footprint. If extended treatment interval were to come with a cost of faster development of fungicide resistance in the pest population, however, the temporary savings would not be worth the loss of effective active ingredients. While not all fungicides were tested each year, a few fungicides including Ortiva, Switch, Askon, Delan Pro, Geoxe and Dithane NeoTec were evaluated for at least 2 years (Supplemental Fig. 6). We noted that performance of these fungicides was reduced significantly in later years compared to their initial trial years. This was most striking in strobilurin and azole groups: efficacy of Ortiva (azoxystrobin) decreased by 51 percentage points from 2006 to 2015_a and that of Askon (difenoconazole + azoxystrobin) reduced by 98 percentage points from 2012 to 2016. We speculate that this reduced sensitivity could be a result of a shift in pathogen population structure from Cps to Che: for the first inoculation in 2006, an isolate of Cy. pseudonaviculatum (syn. Cy. buxicola) was used, while later, in 2016, Che was detected in our field and largely predominated by 2021 (Brand et al. 2022). Our results validated the in vitro and in vivo observation on Che made by Gehesquière et al. (2016), suggesting its risk of developing fungicide resistance as noted by Maurer et al. (2017). A continuous evaluation of fungicide products involving similar active ingredients over the period since Cps and Che were first introduced to Europe could have better presented the occurrence and evolution of fungicide resistance, but our results suggest changes in fungicide susceptibility over time. To properly use and refine best management practices, it is crucial to follow research-based fungicide use recommendations, including rotation of products with different modes of action, and to monitor shifts in fungicide sensitivity. Some of the fungicides that show great efficacy in our studies are no longer available in Europe or the USA, but may remain on the market elsewhere. For example, thiophanatemethyl, carbendazim, flusilazole, chlorothalonil, mancozeb, tolylfluanid and prochloraz are no longer approved in the European Union (European Commission 2022); therefore, no corresponding fungicides are registered in Germany (Bundesamt für Verbraucherschutz und Lebensmittelsicherheit, 2022). According to our results, among the fungicides currently registered for use in nurseries, only fludioxonil + cyprodinil (Switch) is effective. Only two fungicides are registered for use in private gardens against leaf spot pathogens or boxwood blight, of which only Bayer Garten Rosen-Pilzfrei Baymat (tebuconazole) showed sufficient blight control. The same product is also approved for public greens, but application in larger boxwood plantings is hardly practical because it is available only in very small and comparatively expensive packages. Other fungicides approved for public greens (Duaxo Universal Pilzspritzmittel, Ortiva) did not suppress disease effectively in this study. From the American perspective, products such as flusilazole + carbendazim, flusilazole, tolylfluanid, metiram and metiram + pyraclostrobin are not registered for use by the United States Environmental Protection Agency (2022). 
In the absence of effective registered fungicides, alternative products with relatively low toxicity will be needed to minimize blight development, particularly if Che with its smaller spectrum of effective chemistries is introduced to North America. However, the effect of materials with lower risk potential for environment and health on the blight pathogens remains to be studied.
Soil Fertility Status of Patarghat Block in Saharsa District of Agro-Climatic Zone-II of Bihar

A soil nutrient status inventory was carried out in some villages of Patarghat block in Saharsa district. Results show that the texture of the soil under investigation was loamy. Soil pH ranged between 6.12 and 7.57, and electrical conductance remained less than 0.38 dS m⁻¹. Soil organic carbon ranged between 4.25 and 7.12 g kg⁻¹. Available nitrogen content in these soils was found to be between 185 and 292 kg ha⁻¹. Available phosphorus content varied from 20.10 to 37.26 kg ha⁻¹. Available potash content varied from 138 to 192 kg ha⁻¹. CaCl2-extractable soil sulphur varied from 0.70 to 10.25 mg kg⁻¹, rendering the soil deficient in S. Hot-water-soluble boron content ranged from 0.22 to 0.47 mg kg⁻¹. All the values in the lower range were found in upland soils, while the higher values of all the parameters were found in lowland. There was an increasing trend with respect to soil reaction, soil organic carbon, N, P, K, S and B from upland to lowland, which is due to the washing down of basic cations, organic matter and plant nutrients from the upland and their subsequent deposition in the lowland, giving rise to higher values in low-lying areas. Clay content was found to be positively correlated with all the parameters except phosphorus. Organic carbon, nitrogen, potassium and boron showed significant positive correlations with soil pH. Similarly, soil organic carbon was positively correlated with clay and with macro- and micronutrients.

INTRODUCTION

Saharsa is one of the important districts in the eastern part of the state of Bihar, India. It is located near the eastern banks of the Koshi river, with a geographical area of 1687 square kilometres. It is considered the heart of the whole Mithila region. It is the place that gave birth to many legends such as Mandan Mishra. Saharsa district is bounded on the west by the river Koshi and has an abundance of fish and makhana [1,2]. It is a major producer of high-quality corn and makhana in India. Rice, bamboo, wheat, mustard, sugarcane and sagwan trees are now grown on a large scale. Soil and water are essential resources for the sustained quality of human life and the basis of agricultural development [3]. In any agricultural operation, soil is of the utmost significance as it is the cradle for all crops and plants.
This is the reservoir of nutrients that plays an important role in supporting the growth of crops and keeping the environment clean [4]. Fertilizer at a specified dose for a certain crop is always recommended in order to increase its production. Fertilizer application by farmers in the field without knowledge of the soil nutrient status and the nutrient requirements of different crops ordinarily leads to adverse effects on the soil as well as the crop [5]. Sound knowledge of soil fertility status is highly relevant for identifying constraints in crop production and attaining sustained productivity. Indian agriculture is operating on a net negative balance of plant nutrients at the rate of 10 million tonnes per annum [6]. Long-term experiments indicate that imbalanced use of nutrients through fertilizer has a deleterious effect on soil health, leading to unsustained productivity. It is therefore important to monitor the fertility status of soil regularly with a view to sustaining soil health [7]. A soil resource inventory provides an insight into the potentialities and limitations of soil for its effective utilization [8]. Keeping the above facts in view, a soil resource inventory was undertaken to study the soil fertility status of three villages of Patarghat block in Saharsa district of Bihar. The study has generated information on the soil physico-chemical properties and their interrelationships for a better understanding of soil fertility, which would provide the basis for implementing advanced technologies for sustainable crop production with higher profitability.

MATERIALS AND METHODS

Surface (0-15 cm) soil samples were collected from the randomly selected villages, namely Bishunpur, Kishunpur and Lachhmipur, of Patarghat block of Saharsa district under agro-climatic zone-II of Bihar. As per the modern system of soil classification, the soils come under Entisols. Collected samples were air-dried, ground with a wooden pestle and mortar, sieved through a 2.00 mm sieve (0.2 mm sieve for organic carbon), labelled and stored. The samples were analysed for chemical parameters, viz. pH and electrical conductivity [9], organic carbon [10], available nitrogen [11], available P [12], available K [13], available (CaCl2-extractable) soil sulphur [14] and hot-water-soluble boron [15]; soil textural class [16] was also determined. The analytical methods followed the procedures laid down by Jackson [9].

Physico-chemical Properties

Most of the soils of these villages are waterlogged in the rainy season because of inundation by standing flood water. The soil texture of Bishunpur and Kishunpur villages was sandy loam, but in Lachhmipur the soil texture varied from loamy sand to sandy clay loam. A high clay content was found in the lowland, as clay particles are washed down from the upland and medium land during rainfall and subsequently deposited in the lowland through the pedogenic process of colluviation [3]. Surface soil pH of Bishunpur village varied from 6.45 to 7.72, i.e., slightly acidic to slightly alkaline (Table 1). In Kishunpur, soil pH varied from 6.81 to 7.78, which comes under nearly neutral to slightly alkaline. Similarly, in Lachhmipur, soil pH ranged from 6.23 to 7.60, indicating that the soils are slightly acidic to neutral. Soil pH is significantly and negatively correlated with sand but positively with silt, clay, organic carbon and nitrogen [17]. The electrical conductivity values of the soils of Bishunpur village ranged from 0.28 to 0.56 dS m⁻¹.
The lowland generally has higher electrical conductivity values in comparison to the upland and medium land, but the values remain within the safe limit for crop production [18]. In Bishunpur village, soil organic carbon content was found to vary from 0.39 to 0.48 per cent, 0.51 to 0.59 per cent and 0.67 to 0.78 per cent in upland, medium land and lowland, respectively (Table 1). Similarly, in Kishunpur village it ranged from 0.39 to 0.77 per cent across the different land types. The soils of Lachhmipur village varied from 0.46 to 0.75 per cent. The higher soil organic carbon content in the lowland soils of all the villages is because of their lower topographical position, due to which they receive runoff washed from the upland and medium land soils; this material is decomposed by microorganisms, giving rise to a higher soil organic carbon content [19]. Soil organic carbon is negatively correlated with sand but significantly and positively correlated with N and available K.

Macro Nutrients

An increasing trend of average soil nitrogen content was observed in all the villages from upland to lowland. Lowland soils contained more N in comparison to the medium land and upland. Soil available nitrogen is negatively correlated with sand but positively correlated with pH, K, S and B. The P2O5 content in the soils of Bishunpur village was found to be low to high. Soils of Kishunpur village were medium in P2O5 content. Similarly, soils of Lachhmipur were medium to high, which indicates a very good index of soil fertility. Mean available P2O5 content in the soils of all the villages increased from upland to lowland. The higher value of available phosphorus in lowland soil may be due to the higher content of soil organic carbon, as phosphorus is released slowly from organic matter. Soil available phosphorus is negatively correlated with clay but positively correlated with organic carbon, N, K, S and B (Table 4). Available soil potassium content in all the villages increased according to land situation from upland to lowland. The comparatively higher content of potassium in the lowland soils may be due to the higher clay content of the lowland surface soil. Soil available potassium was negatively correlated with sand but positively correlated with clay, organic carbon, N, P2O5, S and B [20]. The results show that the soils under investigation were low to medium in S content, which can limit crop production, especially of oilseed crops. The comparatively higher amount of available sulphur in lowland soils may be attributed to the higher amount of organic carbon in the lowland soil [21]. The lower S status of upland soils may be explained by leaching and surface runoff losses and the subsequent accumulation of sulphur in the lowland [22]. Available S was positively correlated with pH, organic carbon, N, P2O5, K and B. Similar significant positive correlations of S were also observed by Ali et al. [23] and Das et al. [21]. Mean hot-water-soluble B content was found to increase from the upland to the lowland situation. Thus, a higher level of B was found in lowland soil compared to the upland and medium land soils. This might be attributed to the higher amount of organic carbon present in lowland soil and the washing down of B from the upland and medium land. Hot-water-soluble boron was positively correlated with pH, organic carbon, N, P, K and S. Similar findings were observed by Behera et al. [24].
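As a purely illustrative aside (the numbers below are hypothetical values chosen only to span the ranges reported in this study, not the measured data from these villages), the simple pairwise correlation screen discussed above can be reproduced in a few lines of Python:

```python
import pandas as pd

# Hypothetical soil observations spanning the reported ranges; NOT the study data.
soil = pd.DataFrame({
    "pH": [6.45, 6.81, 7.12, 7.60, 7.72],
    "OC": [0.39, 0.48, 0.55, 0.67, 0.78],    # organic carbon, per cent
    "N":  [185, 210, 240, 270, 292],         # available N, kg/ha
    "K":  [138, 150, 165, 180, 192],         # available K, kg/ha
    "S":  [0.70, 2.5, 5.0, 8.0, 10.25],      # CaCl2-extractable S, mg/kg
    "B":  [0.22, 0.28, 0.35, 0.42, 0.47],    # hot-water-soluble B, mg/kg
})

# Pairwise Pearson correlation coefficients of the kind discussed above.
print(soil.corr(method="pearson").round(2))
```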
CONCLUSION

The present study helps to build up a database for the evaluation, planning and monitoring of soil fertility status, which can serve the farming community through balanced fertilizer recommendations for higher profitability in a sustainable manner, both now and in the future.
The phase problem

Given recent advances in phasing methods, those new to protein crystallography may be forgiven for asking 'what problem?'. As many of those attending the CCP4 meeting come from a biological background, struggling with expression and crystallization, this introductory paper aims to introduce some of the basics that will hopefully make the subsequent papers penetrable. What is the 'phase' in crystallography? What is 'the problem'? How can we overcome the problem? The paper will emphasize that the phase values can only be discovered through some prior knowledge of the structure. The paper will canter through direct methods, isomorphous replacement, anomalous scattering and molecular replacement. As phasing is the most acronymic realm of crystallography, MR, SIR, SIRAS, MIR, MIRAS, MAD and SAD will be expanded and explained in part. Along the way, we will meet some of the heroes of protein crystallography such as Perutz, Kendrew, Crick, Rossmann and Blow, who established many of the phasing methods in the UK. It is inevitable that some basic mathematics is encountered, but this will be done as gently as possible.

Received 1 May 2003; accepted 11 August 2003.

Introduction

There are many excellent comprehensive texts on phasing methods (Blundell & Johnson, 1976; Drenth, 1994; Rossmann & Arnold, 2001; Blow, 2002), so this introduction to the CCP4 Study Weekend attempts to give an overview of phasing for those new to the field. Many entering protein crystallography are from a biological background, unfamiliar with the details of Fourier summation and complex numbers. The routine incorporation of selenomethionine into proteins and the wide availability of synchrotrons mean that in many cases structure solution has become press-button. This is to be welcomed, but not all structure solutions are plain sailing and it is still useful to have some understanding of what phasing is. Here, we will emphasize the importance of phases and how phases are derived from some prior knowledge of structure, and look briefly at phasing methods (direct, molecular replacement and heavy-atom isomorphous replacement). In most phasing methods the aim is to preserve isomorphism, such that the only structural change upon heavy-atom substitution is local and there are no changes in unit-cell parameters or orientation of the protein in the cell. Of course, single- and multi-wavelength anomalous diffraction (SAD/MAD) experiments achieve this. Where non-isomorphism does occur, this can be used to provide phase information, and we will look at an example where non-isomorphism was used to extend phases. In the diffraction experiment (Fig. 1), we measure the intensities of waves scattered from planes (denoted by hkl) in the crystal. The amplitude of the wave |Fhkl| is proportional to the square root of the intensity measured on the detector. To calculate the electron density at a position (xyz) in the unit cell of a crystal requires us to perform the following summation over all the hkl planes, which in words we can express as: electron density at (xyz) = the sum of contributions to the point (xyz) of waves scattered from plane (hkl), whose amplitude depends on the number of electrons in the plane, added with the correct relative phase relationship, or, mathematically,

ρ(xyz) = (1/V) Σhkl |Fhkl| exp(iαhkl) exp[−2πi(hx + ky + lz)],

where V is the volume of the unit cell and αhkl is the phase associated with the structure-factor amplitude |Fhkl|.
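As a purely illustrative aside (not part of the original paper), the summation above can be evaluated directly for a handful of reflections; the amplitudes and phases below are invented toy values:

```python
import numpy as np

def electron_density(point, reflections, cell_volume=1.0):
    """rho(xyz) = (1/V) * sum |F| * exp(i*alpha) * exp(-2*pi*i*(hx + ky + lz))."""
    x, y, z = point
    rho = 0.0 + 0.0j
    for (h, k, l), amplitude, alpha in reflections:
        rho += amplitude * np.exp(1j * alpha) * np.exp(-2j * np.pi * (h * x + k * y + l * z))
    return (rho / cell_volume).real          # real when Friedel mates are included

# Toy reflections: (hkl), |F|, phase in radians; each Friedel mate is listed explicitly.
refl = [((1, 0, 0), 10.0, 0.0), ((-1, 0, 0), 10.0, 0.0),
        ((0, 1, 0), 5.0, np.pi / 2), ((0, -1, 0), 5.0, -np.pi / 2)]
print(electron_density((0.25, 0.1, 0.0), refl))
```

In practice the summation is carried out by a fast Fourier transform over a grid covering the whole unit cell rather than point by point, but the principle is the same.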
We can measure the amplitudes, but the phases are lost in the experiment. This is the phase problem.

The importance of phases

The importance of phases in producing the correct structure is illustrated in Figs. 2 and 3. In Fig. 2 three 'electron-density waves' are added in a unit cell, which shows the dramatically different electron density resulting from adding the third wave with a different phase angle. In Fig. 3, from Kevin Cowtan's Book of Fourier (http://www.ysbl.york.ac.uk/~cowtan/fourier/fourier.html), the importance of phases in carrying structural information is beautifully illustrated. The calculation of an 'electron-density map' using amplitudes from the diffraction of a duck and phases from the diffraction of a cat results in a cat: a warning of model-bias problems in molecular replacement!

Recovering the phases

There is no formal relationship between the amplitudes and phases; the only relationship is via the molecular structure or electron density. Therefore, if we can assume some prior knowledge of the electron density or structure, this can lead to values for the phases. This is the basis for all phasing methods (Table 1).

Direct methods

Direct methods are based on the positivity and atomicity of electron density, which lead to phase relationships between the (normalized) structure factors, e.g. the triplet relationship αh ≈ αk + αh−k for reflections with large E values, where E represents the normalized structure-factor amplitude, that is, the amplitude that would arise from point atoms at rest. Such relationships imply that once the phases of some reflections are known, or can be given a variety of starting values, then the phases of other reflections can be deduced, leading to a bootstrapping of phase values for all reflections. The requirement of what is, for proteins, very high resolution data (<1.2 Å) has limited the usefulness of ab initio phase determination in protein crystallography, although direct methods have been used to phase proteins of up to ~1000 atoms. This so-called Sheldrick's rule (Sheldrick, 1990) has recently been given a structural basis with respect to proteins (Morris & Bricogne, 2003). However, direct methods are used routinely to find the heavy-atom substructure, such as in Shake-and-Bake (SnB; Miller et al., 1994), SHELXD (Schneider & Sheldrick, 2002) and SHARP (de La Fortelle & Bricogne, 1997), and even for subsequent phase determination from the substructure with programs such as SHELXE (Debreczeni et al., 2003) and ACORN (Foadi et al., 2000).

Molecular replacement (MR)

When a homology model is available, molecular replacement can be successful, using methods first described by Michael Rossmann and David Blow (Rossmann & Blow, 1962). As a rule of thumb, a sequence identity >25% is normally required and an r.m.s. deviation of <2.0 Å between the Cα atoms of the model and the final new structure, although there are exceptions to this. Patterson methods are usually used to obtain first the orientation of the model in the new unit cell and then the translation of the correctly oriented model relative to the origin of the new unit cell (Fig. 4).

Isomorphous replacement

The use of heavy-atom substitution was invented very early on by small-molecule crystallographers to solve the phase problem; for example, the isomorphous crystals (same unit cells) of CuSO4 and CuSeO4 (Groth, 1908). The changes in intensities of some classes of reflections were used by Beevers & Lipson (1934) to locate the Cu and S atoms.
It was Max Perutz and John Kendrew who first applied the methods to proteins (Perutz, 1956; Kendrew et al., 1958) by soaking protein crystals in heavy-atom solutions to create isomorphous heavy-atom derivatives (same unit cell, same orientation of protein in cell), which gave rise to measurable intensity changes that could be used to deduce the positions of the heavy atoms (Fig. 5). In the case of a single isomorphous replacement (SIR) experiment, the contribution of the heavy-atom replacement to the structure-factor amplitude and phases is best illustrated on an Argand diagram (Fig. 6). The amplitudes of a reflection are measured for the native crystal, |FP|, and for the derivative crystal, |FPH|. The isomorphous difference, |FPH| − |FP| ≈ |FH|, can be used as an estimate of the heavy-atom structure-factor amplitude to determine the heavy-atom positions using Patterson or direct methods. Once located, the heavy-atom parameters (xyz positions, occupancies and Debye-Waller thermal factors B) can be refined and used to calculate a more accurate |FH| and its corresponding phase αH. The native protein phase, αP, can be estimated using the cosine rule (Fig. 7), leading to two possible solutions symmetrically distributed about the heavy-atom phase. This phase ambiguity is better illustrated in the Harker construction (Fig. 8). The two possible phase values occur where the circles intersect. The problem then arises as to which phase to choose. This requires a consideration of phase probabilities.

Figure 3 The importance of phases in carrying information. Top, the diffraction pattern, or Fourier transform (FT), of a duck and of a cat. Bottom left, a diffraction pattern derived by combining the amplitudes from the duck diffraction pattern with the phases from the cat diffraction pattern. Bottom right, the image that would give rise to this hybrid diffraction pattern. In the diffraction pattern, different colours show different phases and the brightness of the colour indicates the amplitude. Reproduced courtesy of Kevin Cowtan.
Figure 4 The process of molecular replacement.
Figure 5 Two protein diffraction patterns superimposed and shifted vertically relative to one another. One is from native bovine β-lactoglobulin, one from a crystal soaked in a mercury salt solution. Note the intensity changes for certain reflections and the identical unit cells, suggesting isomorphism. (Photo courtesy of Dr Lindsay Sawyer.)
Figure 6 Argand diagram for SIR. |FP| is the amplitude of a reflection for the native crystal and |FPH| for the derivative crystal.
Figure 7 Estimation of the native protein phase for SIR.
Figure 8 Harker construction for SIR.
Figure 9 Phase probability (Blow & Crick, 1959): the lack of closure.
Figure 10 Phase probability for one reflection in a SIR experiment. Fbest is the centroid of the distribution. The map calculated with |Fbest| exp(iαbest) [or m|FP| exp(iαbest), where m is the figure of merit, ⟨cos Δα⟩] has least error. m = 0.23 implies a 76° error.

Phase probability

In reality, there are errors associated with the measurements of the structure factors and in the heavy-atom positions and their occupancies, such that the vector triangle seldom closes. David Blow and Francis Crick introduced the concept of lack of closure (ε) and its use in defining a phase probability (Blow & Crick, 1959) (Fig. 9).
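Before the probability treatment is made formal, the two-fold ambiguity can be illustrated numerically; the sketch below (toy amplitudes, not values from the paper) simply applies the cosine rule to recover the two candidate protein phases:

```python
import math

def sir_phase_candidates(f_p, f_ph, f_h, phi_h):
    """Two possible protein phases (radians) from SIR amplitudes and the heavy-atom phase."""
    # |F_PH|^2 = |F_P|^2 + |F_H|^2 + 2|F_P||F_H| cos(phi_P - phi_H)
    cos_term = (f_ph ** 2 - f_p ** 2 - f_h ** 2) / (2.0 * f_p * f_h)
    cos_term = max(-1.0, min(1.0, cos_term))     # clamp against measurement error
    delta = math.acos(cos_term)
    return phi_h + delta, phi_h - delta

phi1, phi2 = sir_phase_candidates(f_p=100.0, f_ph=118.0, f_h=30.0, phi_h=math.radians(40))
print(math.degrees(phi1), math.degrees(phi2))    # two candidates symmetric about phi_H
```

SIR alone gives no reason to prefer one candidate over the other, which is exactly why the probabilistic treatment below is needed.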
Making the assumption that all the errors reside in FPH(calc) and that the errors follow a Gaussian distribution, the probability of a phase having a certain value is then P(α) ∝ exp[−ε²(α)/2E²], where ε(α) is the lack of closure at phase angle α and E is the estimated error. Most phasing programs calculate such a probability from 0 to 360° in, say, 10° intervals to produce a phase probability distribution whose shape can be represented by the four coefficients of a polynomial, the so-called Hendrickson-Lattman coefficients HLA, HLB, HLC and HLD (Hendrickson & Lattman, 1970). Blow and Crick also showed that an electron-density map calculated with a weighted amplitude representing the centroid of the phase distribution gave the least error. Fig. 10 shows the phase probability distribution for one reflection from an SIR experiment. The centroid of the distribution is denoted by Fbest, whose amplitude is the native amplitude |FP| weighted by the figure of merit, m, which represents the cosine of the phase error. Modern phasing programs now use maximum-likelihood methods to derive phase probability distributions, as described in Read (2003). Fig. 11 shows the electron density of part of the unit cell of the sialidase from Salmonella typhimurium (Crennell et al., 1993) phased on a single mercury derivative. Although the protein-solvent boundary is partly evident, the electron density remains uninterpretable. The use of more than one heavy-atom derivative in multiple isomorphous replacement (MIR) can break the phase ambiguity, as shown in Fig. 12. The phase probability is obtained by multiplying the individual phase probabilities, as shown in Fig. 13 for the same reflection as in Fig. 10, but this time three heavy-atom derivatives have resulted in a sharp unimodal distribution with a concomitantly high figure of merit.

Figure 12 Harker diagram for MIR with two heavy-atom derivatives.
Figure 13 Phase probability for one reflection in a MIR experiment. (a) One derivative. (b) Three derivatives. P(αP) ∝ Π over the derivatives i of exp[−εi²/2Ei²].

Phase improvement

It is rare that experimentally determined phases are sufficiently accurate to give a completely interpretable electron-density map. Experimental phases are often only the starting point for phase improvement using a variety of methods of density modification, which are also based on some prior knowledge of structure. Solvent flattening, histogram matching and non-crystallographic averaging are the main techniques used to modify electron density and improve phases (Fig. 14). Solvent flattening is a powerful technique that removes negative electron density and sets the value of the electron density in the solvent regions to a typical value of 0.33 e Å⁻³, in contrast to a typical protein electron density of 0.43 e Å⁻³. Automatic methods are used to define the protein-solvent boundary, first developed by Wang (1985) and then extended into reciprocal space by Leslie (1988). Histogram matching alters the values of electron-density points to concur with an expected distribution of electron-density values. Non-crystallographic symmetry averaging imposes equivalence on electron-density values when more than one copy of a molecule is present in the asymmetric unit. These methods are encoded into programs such as DM (Cowtan & Zhang, 1999), RESOLVE (Terwilliger, 2002) and CNS (Brünger et al., 1998). Density-modification techniques will not turn a bad map into a good one, but they will certainly improve promising maps that show some interpretable features.
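A single cycle of the solvent flattening just described can also be sketched in a few lines; the snippet below is schematic only (a toy grid, solvent flattening alone, and no recombination with the experimental, Hendrickson-Lattman-weighted phases discussed in the next paragraph) and is not taken from DM, RESOLVE or CNS:

```python
import numpy as np

def solvent_flatten_cycle(density_map, protein_mask, obs_amplitudes):
    """One schematic cycle: flatten solvent, back-transform, keep |F|obs with new phases."""
    modified = density_map.copy()
    solvent = ~protein_mask
    modified[solvent] = density_map[solvent].mean()   # flatten the solvent region

    f_mod = np.fft.fftn(modified)                     # back-transform the modified map
    new_phases = np.angle(f_mod)
    f_new = obs_amplitudes * np.exp(1j * new_phases)  # experimental amplitudes, modified phases
    return np.fft.ifftn(f_new).real                   # next map, ready for another cycle

# Toy example: an 8x8x8 grid with a hypothetical spherical "protein" region.
grid = np.indices((8, 8, 8))
protein = ((grid - 4) ** 2).sum(axis=0) < 9           # True inside the protein blob
start_map = np.random.default_rng(0).normal(size=(8, 8, 8))
amps = np.abs(np.fft.fftn(start_map))                 # stand-in for observed amplitudes
next_map = solvent_flatten_cycle(start_map, protein, amps)
```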
Density modification is often a cyclic procedure, involving back-transformation of the modified electron-density map to give modified phases, recombination of these phases with the experimental phases (so as not to throw away experimental reality) and calculation of a new map, which is then modified, and so the cycle continues until convergence. Such methods can also be used to provide phases beyond the resolution for which experimental phase information is available, assuming higher resolution native data have been collected. In such cases, the modified map is back-transformed to a slightly higher resolution on each cycle to provide new phases for higher resolution reflections. The process is illustrated in Fig. 15. An example of the application of solvent flattening and histogram matching using DM is shown in Fig. 16 for the S. typhimurium sialidase phased using three derivatives.

Anomalous scattering

The atomic scattering factor has three components: a normal scattering term that is dependent on the Bragg angle and two terms that are not dependent on scattering angle but on wavelength. These latter two terms represent the anomalous scattering that occurs at the absorption edge when the X-ray photon energy is sufficient to promote an electron from an inner shell. The dispersive term reduces the normal scattering factor, whereas the absorption term is 90° advanced in phase. This leads to a breakdown in Friedel's law, giving rise to anomalous differences that can be used to locate the anomalous scatterers. Fig. 17 shows the variation in anomalous scattering at the K edge of selenium and Fig. 18 the breakdown of Friedel's law. The anomalous or Bijvoet difference can be used in the same way as the isomorphous difference in Patterson or direct methods to locate the anomalous scatterers. Phases for the native structure factors can then be derived in a similar way to the SIR or MIR case. Anomalous scattering can be used to break the phase ambiguity in a single isomorphous replacement experiment, leading to SIRAS (single isomorphous replacement with anomalous scattering). Note that because of the 90° phase advance of the f″ term, anomalous scattering provides phase information orthogonal to the isomorphous term. In Fig. 19, there are two possible phase values symmetrically located about the f″ contribution and two possible phase values symmetrically located about FH. For completeness, the use of multiple isomorphous heavy-atom replacement with anomalous scattering is termed MIRAS.

Figure 17 Variation in anomalous scattering at the K edge of selenium.
Figure 19 Harker construction for SIRAS.

MAD

Isomorphous replacement has several problems: non-isomorphism between crystals (unit-cell changes, reorientation of the protein, conformational changes, changes in salt and solvent ions), problems in locating all the heavy atoms, problems in refining heavy-atom positions, occupancies and thermal parameters, and errors in intensity measurements. The use of the multiwavelength anomalous diffraction (MAD) method overcomes the non-isomorphism problems. Data are collected at several wavelengths, typically three, in order to maximize the absorption and dispersive effects. Typically, wavelengths are chosen at the absorption (f″) peak (λ1), at the point of inflection on the absorption curve (λ2), where the dispersive term (which is the derivative of the f″ curve) has its minimum, and at a remote wavelength (λ3 and/or λ4).
Fig. 20 shows a typical absorption curve for an anomalous scatterer, together with the phase and Harker diagrams. The changes in structure-factor amplitudes arising from anomalous scattering are generally small and require accurate measurement of intensities. The actual shape of the absorption curve must be determined experimentally by a fluorescence scan on the crystal at the synchrotron, as the environment of the anomalous scatterers can affect the details of the absorption. There is a need for excellent optics for accurate wavelength setting with minimum wavelength dispersion. Generally, all data are collected from a single frozen crystal with high redundancy in order to increase the statistical significance of the measurements, and data are collected with as high a completeness as possible. The signal size can be estimated using equations similar to those derived by Crick and Magdoff for isomorphous changes (Fig. 21), which also shows a predicted signal for the case of two Se atoms in 200 amino acids, calculated using Ethan Merritt's web-based calculator (http://www.bmsc.washington.edu/scatter/AS_index.html). Note that the signal increases with resolution owing to the fall-off of normal scattering with resolution.

Figure 21 (caption fragment) NA is the number of anomalous scatterers, NT the total number of atoms in the structure and Zeff is the normal scattering power for all atoms (6.7 e⁻ at 2θ = 0).

An example of MAD phasing is shown in Fig. 22. In this example of an archaeal chromatin modelling protein, Alba (Wardleworth et al., 2002), the protein was expressed in a Met− strain of Escherichia coli and the single methionine was replaced with selenomethionine. Data were collected at three wavelengths around the Se K edge with a 12-fold redundancy to 3.0 Å on the ESRF beamline ID14-4. There were two monomers of 10 kDa in the asymmetric unit, and SOLVE was used to determine the Se-atom positions and derive phases. RESOLVE was used to apply density modification to improve the phases.

SAD

It is becoming increasingly possible to collect data at just a single wavelength, typically at the absorption peak, and use density-modification protocols to break the phase ambiguity and provide interpretable maps (Fig. 23). This so-called SAD (single-wavelength anomalous diffraction) method is described in Dodson (2003).

Cross-crystal averaging

Protein crystallography is not a black-box technique for every protein; there are still challenges to be had in cases where MAD or SAD techniques cannot be used to derive a high-resolution map. On occasion, two or more crystal forms of a protein are available: low-resolution phases may be known for one crystal form, but high-resolution data may be available for another crystal form. Cross-crystal averaging involves mapping the electron density from the one unit cell into the other; phases can then be derived for the new crystal form and, through averaging of density between crystal forms and possibly phase extension as part of a density-modification procedure, one can bootstrap the phases to high resolution. The procedure is outlined in Fig. 24. One example of the power of cross-crystal averaging is that of Newcastle disease virus haemagglutinin-neuraminidase (HN), whose structure solution was plagued with non-isomorphism problems (Crennell et al., 2000). Native crystals from the same crystallization drop could have significantly different unit-cell parameters. The protein was derived from virus grown in embryonated chickens' eggs, so SeMet methods were out of the question.
Most heavy-atom derivatives were non-isomorphous with native crystals and with one another. A platinum derivative was found that gave a clear peak in an anomalous Patterson, which resulted in an attempt at MAD phasing, but the signal was just too small with one, possibly not fully occupied, Pt atom in 100 kDa. The P212121 unit cell had dimensions that varied as follows: a = 70.7-74.5, b = 71.8-87.0, c = 194.6-205.4 Å. In the end, cross-crystal averaging was used to bootstrap from a poor, uninterpretable 6.0 Å MIR map out to a clearly interpretable 2.0 Å map (Fig. 25). Four data sets were chosen for cross-crystal averaging in DMMULTI, selected on the following criteria: (i) they were as non-isomorphous as possible to one another and (ii) they were to as high a resolution as possible. These were a pH 7.0 room-temperature data set to 2.8 Å (a = 73.3, b = 78.0, c = 202.6 Å), for which MIR phases were available to 6.0 Å, a pH 6 room-temperature data set to 3.0 Å (a = 72.0, b = 83.9, c = 201.6 Å), a pH 4.6 frozen data set to 2.5 Å (a = 71.7, b = 77.9, c = 198.2 Å) and a pH 4.6 frozen data set to 2.0 Å (a = 72.3, b = 78.1, c = 199.4 Å). The power of the method lies in the fact that the different unit cells are sampling the molecular transform in different places. Like most things, the idea is not new, and was indeed used by Bragg and Perutz in the early days of haemoglobin (Bragg & Perutz, 1952), where they altered the unit cell of the crystals by controlled dehydration in order to sample the one-dimensional transform of the molecules in the unit cell. This paper is worth a read, if only for the wonderful inclusion of random test data in the form of train times between London and Cambridge!

Figure 23 Harker construction for SAD. ΔF± is used to find the substructure of anomalous scatterers, followed by phasing and phase improvement.
Figure 24 Cross-crystal averaging. Two crystal forms of the same protein for which phase information to low resolution is known for one form (left) and high-resolution data but no phase information are known for another form (right).

Conclusion

The phase problem is fundamental and will never go away; however, its solution is now fairly routine thanks to SAD and MAD. The major problems in protein crystallography are now in the molecular biology, protein expression and crystallization, but perhaps most of all in interpreting the biological implications of structure, which, after all, is where the fun starts. I have been privileged to have received any understanding of phasing I possess from some excellent teachers. In particular, I would like to thank Stephen Neidle, Tom Blundell and Ian Tickle. I would like to thank Ethan Merritt for allowing me to reproduce graphs from his web site in Figs. 17, 20 and 21.
Distinct multilevel misregulations of Parkin and PINK1 revealed in cell and animal models of TDP-43 proteinopathy Parkin and PINK1 play an important role in mitochondrial quality control, whose malfunction may also be involved in the pathogenesis of amyotrophic lateral sclerosis (ALS). Excessive TDP-43 accumulation is a pathological hallmark of ALS and is associated with Parkin protein reduction in spinal cord neurons from sporadic ALS patients. In this study, we reveal that Parkin and PINK1 are differentially misregulated in TDP-43 proteinopathy at RNA and protein levels. Using knock-in flies, mouse primary neurons, and TDP-43Q331K transgenic mice, we further unveil that TDP-43 downregulates Parkin mRNA, which involves an unidentified, intron-independent mechanism and requires the RNA-binding and the protein–protein interaction functions of TDP-43. Unlike Parkin, TDP-43 does not regulate PINK1 at an RNA level. Instead, excess of TDP-43 causes cytosolic accumulation of cleaved PINK1 due to impaired proteasomal activity, leading to compromised mitochondrial functions. Consistent with the alterations at the molecular and cellular levels, we show that transgenic upregulation of Parkin but downregulation of PINK1 suppresses TDP-43-induced degenerative phenotypes in a Drosophila model of ALS. Together, these findings highlight the challenge associated with the heterogeneity and complexity of ALS pathogenesis, while pointing to Parkin–PINK1 as a common pathway that may be differentially misregulated in TDP-43 proteinopathy. Introduction Amyotrophic lateral sclerosis (ALS) is an adult-onset neurodegenerative disease characterized by progressive motor neuron loss, leading to muscle weakness and wasting. ALS is incurable and the patients usually die within 2-5 years after diagnosis. The majority (>90%) of ALS cases are sporadic (sALS) with unknown causes, whereas mutations in genes such as SOD1, C9orf72, FUS, and TARDBP are reported to cause familial ALS that accounts for the remaining 10% 1,2 . Ubiquitin-positive cytoplasmic inclusions containing transactive response DNA-binding protein 43 kDa (TDP-43, encoded by TARDBP) in the brain and the spinal cord of patients are a main pathological hallmark of ALS 3 . Moreover, TDP-43positive protein inclusions are found in a large spectrum of neurodegenerative disorders, including frontotemporal lobar degeneration, Alzheimer's disease, dementia with Lewy bodies, polyglutamine diseases, and others [4][5][6][7][8] , which are collectively known as TDP-43 proteinopathy. In physiological conditions, TDP-43 protein is predominantly localized to the nucleus. It belongs to the heterogeneous ribonucleoprotein family and plays an important role in regulating gene transcription, RNA processing, transport, and stability, as well as the formation of stress granules. In disease conditions, ubiquitinated TDP-43 accumulates in the cytoplasm. The mislocalization and aberrant aggregation of TDP-43 cause dysfunction of various aspects of RNA metabolism as well as protein homeostasis, eventually leading to motor neuron degeneration [9][10][11] . With the recent advancement of the next-generation sequencing, the RNA targets as well as the common principles of TDP-43mediated RNA regulations are emerging [12][13][14][15] . The long intron-containing pre-mRNA of Parkin is one of the reported targets of TDP-43 13 . 
Parkin is an E3 ubiquitin ligase involved in the clearance of damaged mitochondria via autophagy (termed "mitophagy"), which partners with PTEN-induced putative kinase 1 (PINK1) to execute the mitochondrial quality control function. PINK1 is a serine/threonine kinase, which after translation is continuously transported into mitochondria, cleaved, and released to the cytosol for proteasomemediated degradation 16,17 . When mitochondria are damaged, PINK1 cannot be effectively cleaved and is subsequently anchored on the mitochondrial outer membrane, which in turn recruits Parkin to mitochondria and induces mitophagy 18 . Mutations in Parkin and PINK1 genes are linked to autosomal recessive early-onset Parkinson's disease (PD) 19,20 . Increasing evidence points to mitochondrial dysfunction as a common pathogenic factor in ALS 21,22 . Interestingly, in the spinal cord autopsy samples from sALS patients, neurons with TDP-43 protein inclusions have reduced Parkin protein levels 12 . In this study, we investigate whether Parkin and PINK1 are involved in TDP-43-induced neurodegeneration. We find that Parkin and PINK1 are differentially misregulated by TDP-43 at the RNA and protein levels. Consistently, genetic manipulations of Parkin or PINK1 exhibit the opposing modifying effects in a Drosophila model of TDP-43 proteinopathy. Collectively, we propose that distinct multilevel misregulations of Parkin and PINK1 contribute to the pathogenesis of TDP-43 proteinopathy. TDP-43 overexpression selectively decreases Parkin but not PINK1 mRNA levels Previous studies showed that TDP-43 regulated Parkin pre-mRNA and TDP-43 loss of function (LOF) reduced Parkin mRNA levels in mouse brains 12,13 . Since TDP-43induced neurodegeneration involves both LOF and gainof-function (GOF) mechanisms 23 , in this study we investigated whether TDP-43 GOF affected Parkin and PINK1, starting with overexpressing human TDP-43 (hTDP-43) in mammalian systems. We examined the mRNA levels of Parkin and PINK1 in human 293T cells transfected with hTDP-43-HA and mouse primary neurons infected with lentivirus to express hTDP-43-HA (Fig. S1A). In both 293T cells (Fig. 1a) and primary mouse neurons (Fig. 1b), TDP-43 overexpression (OE) caused a significant reduction of Parkin mRNA levels compared to each of the control groups. However, in neither 293T cells (Fig. 1a) nor mouse neurons (Fig. 1b) did TDP-43 OE significantly alter PINK1 mRNA levels. Further, we found that in neurons derived from the transgenic mice expressing mutant hTDP-43 Q331K24 (Fig. S1B), Parkin mRNA and protein levels were also significantly decreased (Fig. 1c-e). Of note, the endogenous mouse TDP-43 protein level was decreased in hTDP-43 Q331K -derived neurons, which was likely due to the inhibition of its own transcription by hTDP-43 OE as reported previously 24 . The total expression levels of both endogenous and exogenous TDP-43 proteins were examined in Fig. S1C-D. To confirm this effect in an in vivo system, we examined mRNA levels of Parkin and PINK1 in the fly head of a previously established Drosophila ALS model of TDP-43 25,26 . Consistently, we detected an almost 50% reduction of the Parkin mRNA levels in the hTDP-43 fly heads (Fig. 2a). In contrast, TDP-43 OE did not significantly affect PINK1 mRNA abundance (p = 0.565, Fig. 2a). Hence, the mRNA levels of Parkin but not PINK1 were selectively reduced in multiple human cell, primary mouse neuron and in vivo fly models of TDP-43 proteinopathy. 
Generation of the dParkin-(HA) knock-in fly and examination of TDP-43 OE on Parkin protein levels in vivo Next, we sought to confirm the effect of TDP-43 OE on Parkin protein levels in the in vivo fly models. Unfortunately, none of the commercially available Parkin antibodies we have examined in this study worked in western blots with Drosophila Parkin protein (data not shown). To overcome this problem, we utilized the CRISPR-based gene editing and the transgenic Cas9-gRNA system 27,28 to generate an HA knockin (KI) fly in which an HA-tag was inserted to the C terminus of the endogenous Drosophila Parkin (dParkin) gene ( Fig S2A). Briefly, we constructed an HA donor vector flanked with dParkin sequences (pCR2-TOPO-dParkin-2xHA-PBac-3xp3-eGFP) and two dParkin-stop codon guide RNA (gRNA) vectors (pBFv-U6.3-dParkin-stop gRNA-1 and gRNA-2). The HA donor and the dParkin-stop gRNA plasmids were mixed for micro-injection using a nanos-Cas9 founder line. The desired transformants were selected for subsequent balancing and background clearing, and were eventually established as a stable dParkin-(HA) KI line ( Fig. 2b and Methods). The 2xHA tag inserted to the C terminus of the endogenous dParkin gene allowed us to measure the endogenous Parkin protein levels by probing the HA-tag. Next, we crossed the dParkin-(HA) KI flies to hTDP-43 flies and examined the dParkin-HA protein levels by western blotting. Compared to the control group (GMR driver only), expression of hTDP-43 in the fly eyes caused a significant reduction of the endogenous Parkin protein levels ( Fig. 2c, d). Together, both the mRNA and protein levels of Parkin were decreased in the Drosophila model of TDP-43 proteinopathy. TDP-43 can decrease the mRNA and protein levels of Parkin in the absence of intron/untranslated regions Previous RNA-seq studies indicated that TDP-43 preferentially regulated pre-mRNAs with exceptionally long introns that are often more than 100 kb [12][13][14][15] . Human and mouse Parkin pre-mRNAs contained very long introns (Fig. S2B) and TDP-43 regulated their mRNA levels 12,13 . Intriguingly, we noticed that the dParkin gene contains only short introns (the longest <200 bp, Fig. S2A). This raised the possibility that in addition to the reported intron-mediated regulation, TDP-43 might also regulate Parkin mRNA levels by an intronindependent mechanism. To test this hypothesis, we generated the transient expression plasmid that contains only the coding region of the human Parkin gene (Flag-Parkin, Fig. S2C). We then examined whether TDP-43 could downregulate the mRNA levels of the plasmid-expressed, intron-free human Parkin, in mammalian cells. An intron-free PINK1 (PINK1-V5) constructed in the same expression plasmid was included as a control. The reverse transcription-PCR (RT-PCR) primers were designed to distinguish the Flag-tagged Parkin and V5-tagged PINK1 from the endogenously expressed Parkin and PINK1 mRNA in 293T cells (see Methods). As shown in Fig. 3a, b, the mRNA levels of Flag-Parkin were reduced by about 50% in TDP-43-overexpressing cells, whereas PINK1-V5 mRNA was not significantly changed (Fig. 3c, d). Consistently, the protein levels of the exogenous, intron-free Flag-Parkin were also drastically decreased by TDP-43 OE (Fig. 3e, f). In contrast, the protein levels of full-length (FL) PINK1 were not decreased, whereas cleaved PINK1 was increased (Fig. 3g, h; to be addressed later in this study). 
These data indicated that TDP-43 could regulate Parkin mRNA and protein levels in the absence of introns in mammalian cells, suggesting that TDP-43-mediated reduction of Parkin levels in sALS patients might involve both intron-based and intron-independent regulation of Parkin mRNA. Furthermore, since Parkin and PINK1 were subcloned into the same vector with the same backbone and promoter (see Methods), it was unlikely that the selective downregulation of the plasmid-expressed, intron-free Parkin was due to a general effect of TDP-43 on transcription or on the untranslated regions (UTRs) of the expression plasmid.

Downregulation of intron-free Parkin requires both the RRM1 and glycine-rich domains of TDP-43

TDP-43 is an RNA-binding protein that contains a nuclear localization signal and a nuclear export signal, two RNA recognition motifs (RRM1 and RRM2) involved in DNA and RNA binding, and a C-terminal glycine-rich domain (GRD) that mediates protein-protein interactions with other members of the heterogeneous nuclear ribonucleoprotein family 9,10. To understand which motif(s) are required and how TDP-43 regulates Parkin in the absence of any intron or gene-specific UTR, we generated plasmids expressing truncated TDP-43 proteins: hTDP-43-HA-ΔRRM1, -ΔRRM2, and -ΔGRD (Fig. 4a). We found that hTDP-43-ΔRRM1 and hTDP-43-ΔGRD were unable to significantly downregulate the intron-free Parkin mRNA levels, whereas hTDP-43-ΔRRM2 showed a reduction of Parkin mRNA similar to that of the FL TDP-43 (Fig. 4b, c). Further, we examined the effects of FL and truncated TDP-43 on the protein levels of intron-free Parkin. Consistently, hTDP-43-ΔRRM1 and hTDP-43-ΔGRD could not downregulate the protein levels of Flag-Parkin, whereas hTDP-43-ΔRRM2 significantly decreased Flag-Parkin protein levels, despite the fact that hTDP-43-ΔRRM2 was expressed at a lower level (Fig. 4d, e). These results suggest that the post-transcriptional, intron/UTR-independent downregulation of Parkin requires both the RNA-binding and protein-protein interaction functions of TDP-43.

Increase of TDP-43 causes cytosolic accumulation of cleaved PINK1 that forms insoluble aggregates in the cell

We noticed that although TDP-43 OE did not significantly change PINK1 mRNA or FL PINK1 protein levels (~64 kDa), there was a remarkable increase of cleaved PINK1 (~52 kDa) 16,29 in the western blots (Fig. 3g, h). Damaged mitochondria accumulate PINK1 on the outer membrane, which in turn recruits Parkin and induces mitophagy 18.

[Fig. 2 legend: a The mRNA levels of Parkin and PINK1 are normalized to actin and shown as average percentages of the control group. b A simplified overview of the process used to generate the dParkin-(HA) knock-in flies using the transgenic Cas9-gRNA system (graph adapted from the flyCRISPR website). A pCR2-TOPO-dParkin-2xHA-PBac-3xp3-eGFP HR donor vector is generated to insert a 2xHA tag into the fly genome at the C terminus of the dParkin gene by homology-directed repair (HDR). Successful knock-ins are identified by the GFP selection marker, which is subsequently removed by PBac transposase, leaving only the HA-tag and the stop codon in the target locus. c, d Western blot analysis of the endogenous Parkin protein levels measured via the knock-in HA-tag in the hTDP-43 flies or the driver-only control (GMR) flies; the quantification is shown in d. Data are means ± SEM; n = 5; *p < 0.05, **p < 0.01; ns, not significant; Student's t-test.]

To determine if the TDP-43-induced increase of cleaved PINK1 was due to mitochondrial accumulation
of PINK1, we examined the subcellular localization of PINK1 by immunocytochemistry. Since the expression of endogenous PINK1 protein was too low to be reproducibly detected in 293T cells, we transfected the cells with PINK1-V5 and immunostained for the V5 tag. In the control cells, PINK1-V5 was predominantly co-localized with mito-DsRed (Fig. 5a), indicating that PINK1 was mostly localized to mitochondria under normal conditions. With TDP-43 OE, PINK1-V5 was no longer specifically localized to mitochondria but spread out in the cytoplasm (Fig. 5b). The reduced mitochondrial localization was further demonstrated by the co-localization analysis of PINK1-V5 and mito-DsRed (Fig. 5c, d). In addition to the altered subcellular distribution of PINK1, another striking phenomenon we observed was that PINK1 formed massive puncta in the cells co-transfected with hTDP-43 (Fig. 5b, e). To determine if the PINK1 puncta were protein aggregates, we assessed the solubility of PINK1 by fractionation and western blotting. In control cells, FL PINK1 was mainly in the soluble fraction and cleaved PINK1 was in the insoluble fraction. With TDP-43 OE, there was a robust increase of both the FL and cleaved PINK1 levels in the insoluble fraction, of which the cleaved PINK1 was more dramatically increased (Fig. 5f, g). In contrast, TDP-43 OE did not significantly alter the subcellular distribution of Parkin or cause it to form insoluble aggregates (Fig. S3). Together, we concluded that TDP-43 OE led to a cytosolic accumulation of cleaved PINK1 that formed insoluble protein aggregates.

PINK1 proteostasis is sensitive to TDP-43-induced proteasomal activity impairment

In normal cells, cleaved PINK1 is released from mitochondria and undergoes rapid degradation in the cytoplasm via the ubiquitin proteasome system (UPS) 17. As such, PINK1 protein is usually undetectable or present at very low levels in healthy cells. The observation of TDP-43-induced cytosolic accumulation of cleaved PINK1 (Fig. 5) strongly suggested that the function of the UPS might be impaired in TDP-43 proteinopathy. To test this possibility, we measured the proteasomal activity of the cells using an in vitro fluorogenic peptide cleavage assay. Compared to the control cells, there was a small but significant reduction of proteasomal activity in the cells transfected with hTDP-43 (Fig. S4A). Considering that TDP-43-mediated neurodegeneration in disease usually develops over years, we think a small decrease, rather than an abrupt inhibition of proteasomal function, may be more pathophysiologically relevant to the disease condition. Since the reduction of proteasomal activity by TDP-43 OE indicated by the in vitro assay was rather moderate, we asked if it was sufficient to significantly disturb the proteostasis of PINK1. To address this question, we treated 293T cells with the proteasome inhibitor MG-132 at a series of concentrations ranging from 5 nM to 5 μM for 3 h (Fig. S4B). A concentration of 50 nM generated a mild inhibition similar to the effect of hTDP-43 OE in 293T cells (Fig. S4A). We then treated the cells with MG-132 under this condition. Similar to hTDP-43 OE, no increase in Parkin intensity or Parkin protein aggregates was observed under this condition (Fig. S4C-D, G). In contrast, the same MG-132 treatment caused a robust increase of overall PINK1 intensity and a massive accumulation of PINK1 aggregates in the cytoplasm (Fig. S4E-F, H).
This result was consistent with the effects of TDP-43 OE on Parkin (Fig. S3) and PINK1 (Fig. 5), indicating that the turnover of PINK1 protein was extremely sensitive to alterations of proteasomal activity. Hence, TDP-43-induced mild impairment of proteasomal activity selectively hindered the turnover of proteasome function-sensitive proteins such as PINK1, but not less labile proteins such as Parkin. This gave rise to the distinct effects whereby TDP-43 OE downregulated Parkin levels but led to a cytosolic accumulation of PINK1. Thus, in addition to intron-based regulation of Parkin pre-mRNA 12,13 and intron-independent regulation of Parkin mRNA (Figs. 3 and 4), a third level of misregulation acted on PINK1 protein turnover due to impaired proteasomal activity in TDP-43 proteinopathy.

Endogenous PINK1 protein accumulates in TDP-43 Q331K mice

To confirm whether endogenous PINK1 was similarly misregulated by TDP-43 in an in vivo mammalian model, we examined the PINK1 protein levels in the motor cortex of 8-month-old TDP-43 Q331K mice. Consistently, we observed a remarkable increase of both FL and cleaved PINK1 proteins compared to the non-transgenic sibling control mice (Fig. 6a, b). As mentioned earlier, PINK1 protein is usually rapidly turned over and has a very low basal level in normal cells 17. The accumulation of PINK1 protein in TDP-43 neurons is therefore likely to be deleterious. [Figure legend: Data are means ± SEM; n = 3-5; *p < 0.05, **p < 0.01, ***p < 0.001; ns, not significant; one-way ANOVA.] Consistent with this idea, we observed that neuronal upregulation of PINK1 reduced the Drosophila lifespan by ~6.9% (Fig. S5A-B).

Increase of cleaved PINK1 reduces mitochondrial functions

To further elucidate the cellular consequences of PINK1 accumulation, especially the increase of cleaved PINK1, we generated a plasmid expressing truncated PINK1 (PINK1 Δ1-104-V5; Fig. 6c, d) that mimics the protein product generated after FL PINK1 is cleaved 17. We used a Seahorse XFe96 Analyzer together with the Cell Mito Stress Test to measure the oxygen consumption rate (OCR) of 293T cells transfected with PINK1 Δ1-104-V5 or the empty vector. Basal mitochondrial respiration was similar between groups, whereas spare respiratory capacity was significantly reduced in PINK1 Δ1-104-V5 cells (Fig. 6e); maximal respiration also showed a decreasing trend, although it was not statistically significant (p = 0.066; Fig. 6e). These data suggest that under normal conditions, cells with accumulation of cleaved PINK1 may function normally; under stress, however, these cells can exhibit reduced mitochondrial activity, which, over years of aging, might promote the pathogenesis of neurodegeneration in disease.

Upregulation of Parkin, whereas downregulation of PINK1, improves the degenerative phenotypes of TDP-43 flies

After revealing the multilevel misregulation of Parkin and PINK1 by TDP-43 OE at the cellular level, we were keen to know whether their misregulations contribute to ALS pathogenesis and whether restoring normal Parkin and PINK1 expression levels offers a means to ameliorate TDP-43-mediated neurodegeneration at the animal level. To answer these questions, we utilized the Drosophila model of TDP-43 proteinopathy and examined age-dependent climbing ability (Fig. 7a, b). RNAi knockdown (KD) of PINK1 but not Parkin significantly delayed the age-dependent climbing deficits of hTDP-43 flies (D20 and D30, Fig. 7a), in spite of a more efficient KD of Parkin than of PINK1 by the RNAi transgenes (Fig. S5C-D). On the contrary, Parkin OE provided remarkable suppression (D20 and D30, Fig.
7b), whereas PINK1 OE enhanced the age-dependent climbing deficits of hTDP-43 flies (D30, Fig. 7b). The KD efficiency and OE levels of the UAS transgenes are confirmed in Fig. S5. Moreover, KD or OE of Parkin in fly neurons did not significantly change the protein levels or subcellular distribution of hTDP-43; nor did the genetic manipulations of PINK1 (Fig. S6). These data confirmed that the modifying effects of Parkin and PINK1 in the hTDP-43 flies were not due to an alteration of the TDP-43 protein per se. Next, we evaluated the effects of manipulating Parkin and PINK1 expression levels on the longevity of hTDP-43 flies. As shown in the lifespan experiments in Fig. 7c-e, the median lifespan of hTDP-43 flies was significantly increased, by 11.2% with downregulation of PINK1 and by 14.8% with upregulation of Parkin in adult fly neurons. Considering that TDP-43 is a DNA/RNA-binding protein and that excessive TDP-43 impairs proteasomal activity (Fig. S4), it is likely that there are other targets whose mRNA abundance and protein turnover are also affected. Thus, it is unsurprising that correcting the misregulations of Parkin or PINK1 only partially rescued the phenotypes of hTDP-43 flies. In an earlier study, we showed that reduction of PINK1 levels in adult neurons of wild-type flies did not have a dramatic effect on aging 30. And although ubiquitous Parkin OE extended Drosophila lifespan 31, neuronal OE of Parkin showed minimal 31 or no statistically significant increase in longevity (Fig. S5A-B). Thus, the improvement of TDP-43 flies' survival was unlikely to be a generic effect of Parkin OE or PINK1 KD on aging. Rather, the results suggested a specific involvement of Parkin and PINK1 in TDP-43 proteinopathy.

Discussion

Mitochondrial dysfunction has been linked to the pathogenesis of ALS 32-36. Parkin and PINK1 are both important genes involved in mitochondrial quality control, and previous sequencing studies identified Parkin pre-mRNA as an RNA target of TDP-43 12,13. Interestingly, Parkin mRNA levels were decreased in the brain of TDP-43 KD mice, whereas in cells from sALS patients, decreased Parkin protein levels were associated with excessive TDP-43 accumulation 12. This is likely because TDP-43-mediated neurodegeneration involves both LOF and GOF mechanisms 23. In this study, we focus on how TDP-43 GOF affects Parkin and PINK1, and use multiple cell and animal models of TDP-43 proteinopathy to demonstrate that they are differentially misregulated at the RNA and protein levels by TDP-43 OE. In all models tested in this study, we observe a significant reduction of Parkin mRNA and protein levels (Figs. 1 and 2). Consistently, an increase of Parkin suppresses the degenerative phenotypes of hTDP-43 flies (Fig. 7) and reduces neuronal cell death in the motor cortex of a rat TDP-43 model 37. We speculate that the TDP-43 KD-induced Parkin mRNA reduction may be due to loss of TDP-43 binding and protection of the Parkin pre-mRNA 12,13, whereas the TDP-43 OE-induced Parkin decrease may result from abnormal binding and misregulation of mature Parkin mRNA (Figs. 3 and 4) or other mechanisms to be discovered. Nevertheless, such seemingly contradictory effects are also observed with TDP-43 in the regulation of alternative splicing. Taking the TDP-43 targets Kcnd3 and Ahi1 as examples, both KD and OE of TDP-43 cause aberrant alternative splicing of these transcripts in the same direction 24.
As an RNA-binding protein, TDP-43 regulates various aspects of RNA metabolism, including the sustenance of long intron-containing pre-mRNAs, alternative splicing, 3′UTR-mediated mRNA transport or stability, and the regulation of long noncoding RNAs 2,12,13,15. All of these regulations appear to require noncoding sequences such as introns and 3′UTRs. One intriguing finding of this study is that TDP-43 not only regulates the mRNA of the long intron-containing human and mouse Parkin but also reduces the mRNA levels of Parkin transcribed from the dParkin gene, which has only very short introns. We further show that, in the absence of any intron or UTR, plasmid-expressed hParkin can still be downregulated by TDP-43 (Fig. 3), which suggests an additional, intron-independent regulation of mRNA by TDP-43. Furthermore, we find that downregulation of intron-free hParkin requires the RRM1 and GRD domains of TDP-43 (Fig. 4), indicating that this regulation depends on both the RNA-binding and the protein-protein interaction functions of TDP-43. It is reasonable to speculate that TDP-43 may bind to the coding region of Parkin via the RRM1 motif and recruit other proteins that regulate mRNA stability via the GRD. Alternatively, TDP-43 might indirectly decrease Parkin mRNA levels by regulating other RNA-binding proteins. Lending support to this possibility, TDP-43 regulates the RNA splicing of the PUF-domain-containing protein PUM1, which regulates mRNA stability 38. Also, we notice that the coding region of Parkin contains a TGTAAAGA sequence, whose mRNA (UGUAAAGA) differs by only one nucleotide from the PUF-binding consensus (UGUANAUA) and might be recognized by PUM1 39. However, as the PUF recognition sequence is usually located within 3′UTRs, the exact mechanism of this enigmatic intron/UTR-independent regulation of Parkin mRNA remains to be unraveled by additional research.

Although TDP-43 does not directly regulate PINK1 mRNA or protein levels, we find that it selectively impedes PINK1 protein turnover and causes cleaved PINK1 to accumulate in the cytoplasm due to the impairment of proteasomal degradation (Fig. 5 and Fig. S4). This alteration may cause cytotoxicity at two levels: on one side, it may reduce PINK1 interaction with its normal mitochondrial substrates such as NdufA10 40, leading to mitochondrial dysfunction; on the other side, the cytosolic accumulation of PINK1 may cause a gain of toxicity due to ectopic or increased phosphorylation of cytosolic substrates such as Parkin, ubiquitin, and others yet to be identified 41,42. Along this line, although we did not observe substantial induction of mitophagy by TDP-43 OE in our systems (Fig. S7), it has been reported that cytosolic accumulation of cleaved PINK1 induces non-selective mitophagy 43 and that constitutive activation of PINK1 triggers non-apoptotic cell death that is independent of mitophagy 44. In this study, we show that the increase of cleaved PINK1 reduces the spare respiratory capacity of mitochondria (Fig. 6), which may contribute to disease progression in TDP-43 proteinopathy. The misregulation of the Parkin-PINK1 pathway is expected to have profound detrimental consequences. LOF mutations of the Parkin and PINK1 genes are associated with juvenile PD 19,20. In flies, Parkin and PINK1 LOF mutants exhibit mitophagy defects and cell death in muscles, which subsequently cause locomotion defects and reduce longevity 45-48.
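As a quick, purely illustrative check of the single-mismatch comparison made earlier in this section, the two motif strings quoted above can be compared directly, treating N in the consensus as a wildcard (nothing beyond the quoted sequences is assumed):

```python
# Compare the Parkin coding-region motif (RNA form of TGTAAAGA) with the
# PUF-binding consensus, counting mismatches with N treated as a wildcard.
motif = "UGUAAAGA"       # RNA form of the TGTAAAGA sequence noted above
consensus = "UGUANAUA"   # PUF-binding consensus
mismatches = sum(c != "N" and m != c for m, c in zip(motif, consensus))
print(mismatches)  # -> 1, i.e., "only one nucleotide different"
```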
Moreover, mitochondrial fragmentation has been observed in transgenic mouse and fly models of TDP-43 33,34,49 as well as in fibroblasts obtained from patients carrying TDP-43 mutations 50. Since the Parkin-PINK1 pathway also regulates mitochondrial fission-fusion proteins 51,52, its misregulation may contribute to TDP-43-induced mitochondrial over-fission. Similarly, the substrates of Parkin range from proteins regulating mitochondrial dynamics and transport and components of the electron transport chain to proteins functioning in the proteasome and the nucleus 53. Therefore, the TDP-43-mediated downregulation of Parkin is likely to have comprehensive deleterious effects. In summary, in this study we find that Parkin and PINK1 are differentially misregulated in TDP-43 proteinopathy (Fig. 8). TDP-43 OE reduces Parkin mRNA levels, which involves both intron-based and intron-independent mechanisms. In the meantime, excessive TDP-43 impairs proteasomal activity, which hinders PINK1 turnover and causes cytosolic accumulation of cleaved PINK1. Consistently, upregulation of Parkin, as well as downregulation of PINK1, suppresses TDP-43-induced neurodegeneration in a Drosophila model of ALS. Together, we propose that Parkin and PINK1 are differentially misregulated at the RNA and protein levels, which may contribute to the pathogenesis of TDP-43 proteinopathy. As functions of Parkin and PINK1 beyond regulating mitophagy or mitochondria are increasingly revealed, our findings strongly suggest that differential therapeutic strategies need to be developed when considering the Parkin-PINK1 pathway as a common target for treating ALS.

[Fig. 8 legend: In this study, we reveal that TDP-43 OE misregulates the mitochondrial quality control genes Parkin and PINK1 differently and at multiple levels. In addition to (1) the previously reported TDP-43 function in regulating the long intron-containing pre-mRNA of Parkin, we find that (2) Parkin mRNA without any intron can also be robustly decreased by TDP-43, suggesting an unidentified, intron-independent mechanism. The intron-based and intron-independent regulations together lead to a significant reduction of Parkin protein abundance. Meanwhile, (3) excessive TDP-43 impairs proteasomal activity, which impedes the turnover of cleaved PINK1, resulting in cytosolic accumulation of insoluble PINK1 aggregates. Consistent with the changes at the cellular level, Parkin OE or PINK1 KD in a Drosophila model of TDP-43 significantly suppresses the degenerative phenotypes.]

Plasmid construction

The expression constructs of truncated TDP-43 were generated by homologous recombination. Briefly, the DNA fragments of truncated hTDP-43-HA were amplified by PCR and inserted into the cloning vector using the ClonExpress MultiS One Step Cloning Kit (Vazyme). The constructs were then subcloned into the pCAG expression vector as above. The PCR primers used are listed below:

To generate the pCAG-PINK1 Δ1-104-V5 plasmid, the DNA fragments of truncated PINK1 were amplified from FL PINK1 by PCR and inserted into the pCAG vector as above. The PCR primers used were:

PINK1 Δ1-104-F: 5′-CATCATTTTGGCAAAGAATTCCACCATGGGGCTAGGGCTGGGCCTC-3′

PINK1 Δ1-104-V5-R: 5′-GCTCCCCGGGGGTACCTCGAGTTACGTAGAATCGAGACCGAGGAGAGGGTTAGGGATAGGCTTACCCAGGGCTGCCCTCCATGAG-3′

All constructs were verified by sequencing to ensure the integrity of the cloned open reading frames.
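As a small sanity check on the long V5 reverse primer listed above, one can translate its reverse complement and confirm that it encodes an in-frame V5 epitope followed by a stop codon. The sketch below is illustrative (not from the paper) and assumes Biopython is installed; the 19-nt offset corresponds to the PINK1-annealing portion of the primer:

```python
from Bio.Seq import Seq

# The PINK1 Δ1-104-V5 reverse primer, exactly as listed above (5'->3').
primer = Seq(
    "GCTCCCCGGGGGTACCTCGAGTTACGTAGAATCGAGACCGAGGAGAGGGTTAGGGATAGG"
    "CTTACCCAGGGCTGCCCTCCATGAG"
)

coding = primer.reverse_complement()   # sense-strand view of the construct's 3' end
v5 = coding[19:61]                     # 42 nt immediately after the 19-nt PINK1-annealing region
assert str(v5.translate()) == "GKPIPNPLLGLDST"  # the V5 epitope, in frame
assert str(coding[61:64]) == "TAA"              # stop codon directly after the tag
print(v5.translate())
```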
Cell cultures and transfection

293T cells were cultured in Dulbecco's modified Eagle medium (Invitrogen) supplemented with 10% (v/v) fetal bovine serum (BioWest) and GlutaMAX™ (Invitrogen) at 37°C in 5% CO2. Transient transfection was performed using Lipofectamine™ 2000 (Invitrogen) in Opti-MEM (Invitrogen). Cells were transfected for 48 h before harvest. For proteasome inhibition assays, MG-132 was added to the medium 3 h before harvest at a final concentration of 50 nM.

Lentivirus production and primary neuron culture

Lentivirus was prepared according to established protocols. Briefly, lentivirus packaging was performed by co-transfecting FHsynPW-TDP-43-HA with VSVG and delta 8.9 at a ratio of 1:1.5:2 into 293FT cells cultured in Opti-MEM I medium using PolyJet™ reagent (SignaGen). Culture supernatant was collected 48 h after transfection and passed through a 0.45-μm filter. Viral particles were concentrated from culture supernatants using Lenti-X™ Concentrator (Clontech). Viral pellets used for neuronal infection were resuspended in Neurobasal medium (Invitrogen). Primary hippocampal neurons were isolated from the hippocampi of C57BL/6 mice at embryonic day 14 (E14) and cultured in serum-free Neurobasal medium (Invitrogen) supplemented with 2% B27 (Life Technologies), GlutaMax, and penicillin-streptomycin (Invitrogen). At 14 days in vitro (DIV), neurons were infected with FHsynPW-TDP-43-HA lentivirus for 5 days before extraction of RNA or protein. For TDP-43 Q331K mouse-derived cultures, cortical neurons of the transgenic mice or their littermates were isolated at E16-E18 and cultured as above. The RNA and protein levels of TDP-43 Q331K neurons were examined at 15 DIV.

Flies tested in this study were raised on standard cornmeal media and maintained at 25°C and 60% relative humidity. For adult-onset, neuronal expression of the UAS or RNAi transgenes using the elavGS driver 59, flies were raised on regular fly food supplemented with 80 μg/ml RU486 (TCI).

To construct the HR donor, a 352-bp PCR product was generated and cloned into the MluI/NdeI sites of the transition vector using the In-Fusion HD Cloning Kit to obtain the pCR2-TOPO-2xHA-PBac-3xp3-eGFP vector. A 1149-bp PCR product was then generated and cloned into the BsmBI/XhoI sites of the dParkin-2xHA transition vector using the In-Fusion HD Cloning Kit to obtain the complete pCR2-TOPO-dParkin-2xHA-PBac-3xp3-eGFP HR donor vector.

Embryo injection and transformant selection

The two pBFv-U6.3-dParkin-stop gRNA plasmids were mixed with the pCR2-TOPO-dParkin-2xHA-PBac-3xp3-eGFP HR donor vector for injection into the nanos-Cas9 founder line. Following green fluorescent protein (GFP)-mediated identification of successful KI flies, the GFP marker was removed through a single cross to PBac transposase. The embryo injection and selection of correct transformants were performed by BestGene Inc. and confirmed by PCR in the lab.

RNA extraction and real-time quantitative PCR

For quantitative PCR (qPCR), total RNA was isolated from fly heads, cell cultures, or mouse brain tissues using TRIzol (Invitrogen) according to the manufacturer's instructions. After DNase (Promega) treatment to remove genomic DNA, RT reactions were performed using the All-in-One cDNA Synthesis SuperMix kit (Bimake). The cDNA was then used either for semiquantitative RT-PCR experiments by PCR amplification using the Taq Plus Master Mix (Vazyme) or for real-time qPCR using the SYBR Green qPCR Master Mix (Bimake) with the QuantStudio™ 6 Flex Real-Time PCR system (Life Technologies). The mRNA levels of actin or GAPDH were used as internal controls to normalize the mRNA levels of the genes of interest.
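The relative quantification implied here is the standard 2^-ΔΔCt method. The following minimal sketch (not the authors' code; all Ct values are hypothetical) shows how target Ct values are normalized to the reference gene and then expressed relative to the control group:

```python
import numpy as np

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene versus the control group by 2^-ΔΔCt."""
    d_ct = np.asarray(ct_target) - np.asarray(ct_ref)                  # ΔCt per sample
    d_ct_ctrl = np.mean(np.asarray(ct_target_ctrl) - np.asarray(ct_ref_ctrl))
    return 2.0 ** -(d_ct - d_ct_ctrl)                                  # 2^-ΔΔCt

# Hypothetical Ct values (target gene, actin reference) for TDP-43 OE vs. control:
fold = relative_expression([25.1, 25.4, 25.0], [18.2, 18.3, 18.1],
                           [24.0, 24.2, 24.1], [18.1, 18.2, 18.0])
print(fold.mean())  # ~0.5, i.e., on the scale of the ~50% Parkin mRNA reduction reported
```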
The qPCR primers used in this study are listed below:

hβ-actin forward: 5′-GTTACAGGAAGTCCCTTGCCATCC-3′

hβ-actin reverse: 5′-CACCTCCCCTGTGTGGACTTGGG-3′

Protein extraction

Fly heads or cultured cells were lysed in 2% SDS lysis buffer (100 mM Tris-HCl at pH 6.8, 2% SDS, 40% glycerol, 10% β-mercaptoethanol, and 0.04% bromophenol blue) containing protease and phosphatase inhibitor cocktails (Roche). For separation of soluble and insoluble proteins, cells were lysed on ice using RIPA buffer (50 mM Tris at pH 8.0, 150 mM NaCl, 1% NP-40, 5 mM EDTA, 0.5% sodium deoxycholate, and 0.1% SDS) supplemented with protease and phosphatase inhibitors (Roche). Samples were sonicated and then centrifuged at 13,000 × g for 20 min at 4°C. The resulting supernatant was used as the soluble fraction, and the pellets containing the insoluble fractions were dissolved in 9 M urea buffer (9 M urea, 50 mM Tris buffer, pH 8.0) after washing. Fresh brains of deeply anesthetized mice were rapidly excised, and the motor cortices were isolated and collected in sterile 1.5-ml microcentrifuge tubes. The samples were quickly plunged into liquid nitrogen and stored at −80°C until analysis. The brain tissues were then lysed and homogenized in ice-cold RIPA buffer supplemented with protease and phosphatase inhibitors (Roche).

Western blotting

Equivalent amounts of lysates were resolved by electrophoresis on a 10% Bis-Tris SDS-polyacrylamide gel (Invitrogen) and probed with the primary and secondary antibodies listed above. Detection was performed using the High-sig ECL Western Blotting Substrate (Tanon). Images were captured using an Amersham Imager 600 (GE Healthcare), and densitometry was measured using ImageQuant TL software (GE Healthcare). Contrast and brightness were adjusted equally using Adobe Photoshop CS6 (Adobe Systems Inc.). All experiments were normalized to GAPDH, tubulin, or actin levels as indicated in the blots, and the values are plotted relative to the control (set to a value of 1) in cross-assay comparisons.

Nuclear and cytoplasmic extraction

For nuclear-cytoplasmic fractionation, 20 flies per genotype were homogenized in lysis buffer [50 mM Tris at pH 7.4, 10 mM NaCl, 0.5% NP-40, 0.25% Triton X-100, 1 mM EDTA, and protease inhibitors (Roche)] and incubated for 5 min on ice as reported 15, followed by centrifugation at 3,000 × g for 5 min at 4°C. The supernatant was collected as the cytoplasmic fraction, and the pellet was dissolved in tissue lysis buffer (Invitrogen) as the nuclear fraction.

Immunocytochemistry and confocal imaging

293T cells or primary neurons were grown on Nunc chambered coverglasses (Lab-Tek) and transfected with the indicated plasmids for 24 h. Thereafter, cells were fixed in 4% paraformaldehyde in phosphate-buffered saline (PBS) for 15 min at room temperature, permeabilized with 0.5% Triton X-100 in PBS for 15 min, and blocked with 3% goat serum in PBST (PBS + 0.1% Triton X-100) for 1 h at room temperature. Cells were then incubated with the primary and secondary antibodies (see above) in blocking buffer at 4°C overnight or at room temperature for 1 h. After three washes with PBST, cells were mounted on glass slides using Vectashield Antifade Mounting Medium with DAPI (Vector Laboratories). Fluorescent images were taken with a Leica TCS SP8 confocal microscope using a 63× oil objective (numerical aperture = 1.4).
Co-localization of Parkin or PINK1 with mitochondria was evaluated by line-scanning analysis in LAS X, and protein puncta were counted using the "Analyze Particles" function of ImageJ. Images were assembled into figures using Adobe Photoshop CS6.

Proteasomal activity assay

Proteasomal activity was measured using a Proteasome Activity Assay kit (Abcam, ab107921). 293T cells were lysed in PBS with 0.5% NP-40 on ice for 10 min. The supernatant was obtained by centrifugation at 13,000 × g for 10 min at 4°C. The proteasomal activity of the supernatant was determined by assaying the cleavage of the fluorogenic peptide substrate Suc-LLVY-AMC according to the manufacturer's instructions. The fluorescence intensity was measured at the end of the assay (60 min), after the substrate peptide had been incubated with the cell lysates at 37°C, using a microplate reader (BioTek; Ex/Em = 350/440 nm).

OCR measurement

Mitochondrial respiratory functions were evaluated by measuring OCRs using the Seahorse XFe96 Analyzer (Agilent) per the manufacturer's instructions. Briefly, 24 h after transfection, 4 × 10^4 293T cells were seeded onto 96-well microplates pre-coated with poly-L-lysine. On the next day, the OCRs (pmol/min) of the cells in XF base medium containing 1 mM pyruvate, 2 mM glutamine, and 10 mM glucose (Sigma-Aldrich) were assayed with the Seahorse XF Cell Mito Stress Test following sequential additions of 1 μM oligomycin, 0.5 μM carbonyl cyanide-4-(trifluoromethoxy)phenylhydrazone (FCCP), and a combination of 0.5 μM antimycin A and 0.5 μM rotenone.

Fly lifespan and climbing assays

For the lifespan experiments, 20 flies/vial and 5-8 vials/group were tested. Flies were transferred to fresh fly food every 3 days, and the number of dead flies in each vial was recorded. Flies lost through escape or accidental death were excluded from the final analysis. The median lifespan was calculated as the mean of the medians of each vial belonging to the same group, whereas the "50% survival" shown on the survival curves was derived from compilation of all vials of the group. For the climbing assay, 20 flies were transferred into an empty polystyrene vial and gently tapped down to the bottom of the vial. The number of flies that climbed over a distance of 3 cm within 10 s was recorded. The test was repeated three times for each vial, and 5 vials of each genotype were tested 57,58.

Statistical analysis

Unless otherwise noted, statistical significance in this study was determined by unpaired, two-tailed Student's t-test at *p < 0.05, **p < 0.01, and ***p < 0.001. Error bars represent the standard error of the mean.
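To make the distinction between the two survival statistics defined above concrete, here is a minimal sketch (not the authors' code; the death-day records are hypothetical) computing the median lifespan as the mean of per-vial medians versus the pooled "50% survival":

```python
import numpy as np

# Hypothetical death days, one list per vial (20 flies/vial):
vials = [
    [38, 40, 41, 43, 45, 45, 46, 48, 50, 51, 52, 53, 54, 55, 56, 57, 58, 60, 62, 64],
    [35, 37, 39, 42, 44, 44, 45, 47, 49, 50, 51, 52, 53, 54, 55, 56, 58, 59, 61, 63],
]

median_lifespan = np.mean([np.median(v) for v in vials])  # mean of per-vial medians
pooled = np.concatenate(vials)
fifty_percent_survival = np.median(pooled)                # from the compiled data of all vials
print(median_lifespan, fifty_percent_survival)
```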
Structurally Modified Plant Viruses and Bacteriophages with Helical Structure. Properties and Applications

Structurally modified virus particles can be obtained from the rod-shaped or filamentous virions of plant viruses and bacteriophages by thermal or chemical treatment. They have recently attracted the attention of researchers as promising biogenic platforms for the development of new biotechnologies. This review presents data on the preparation, structure, and properties of structurally modified virus particles. In addition, their biosafety for animals is considered, as well as the areas of application of such particles in biomedicine. A separate section is devoted to one of the most relevant and promising areas for the use of structurally modified plant viruses: the design of vaccine candidates based on them. Studies of the structural rearrangement of virions, the properties of the resulting particles, and their interaction with mammalian cells/organisms are important research tasks. This is of fundamental importance and may also facilitate the creation of new biotechnological platforms for antigen presentation, drug delivery, and biomarker production.

However, the detailed study of the process occurred much later at the Department of Virology, Faculty of Biology, Moscow State University [1,10]. These studies served as an impetus for scientific experiments and biomedical developments in Russia and other countries [2-4, 7, 14-22]. In particular, American researchers used structurally modified TMV particles to develop contrast agents for magnetic resonance imaging (MRI) [2], as well as to deliver drugs to tumor tissues during chemotherapy [3]. Most of the results related to structurally modified viruses have been demonstrated for TMV. Nevertheless, the possibility of structural modification has also been shown for other plant viruses [11,12,23] and for bacteriophages [9,24]. The conditions for structural modification may differ for viruses with the same morphology; therefore, the study of structural modification of virions provides new fundamental knowledge about the structure and stability of viruses. This review is devoted to the analysis of the accumulated data on the structure, properties, safety, and practical application of structurally modified plant and bacterial viruses.

STRUCTURALLY MODIFIED PLANT VIRUSES

Structurally modified tobacco mosaic virus particles. Characteristics and properties. Tobacco mosaic virus is a rod-shaped virus, 300 nm long and 18 nm in diameter. TMV has a capsid consisting of 2130 identical coat protein (CP) subunits, which are assembled around the viral single-stranded RNA. In this way, a helical structure with a 2-nm diameter central cavity is formed [25]. TMV occupies a unique place in the history of virology and is one of the most studied viruses. TMV research has largely shaped the fundamental ideas of modern molecular virology. The accumulated knowledge has made it possible to develop new TMV-based platforms for application in medicine and biotechnology [26]. In 1956, Hart [13] used electron microscopy to study heat-treated TMV. He reported that a 10-second incubation at 80-98°C resulted in swelling of TMV particles at one or both ends, followed by transformation into "ball particles". The volume of these particles was comparable to the volume of the initial viral "rods". These studies were not continued and, one might say, were "forgotten" until 2011. In that year, the article by Atabekov et al.
[10] was published, in which such particles were again obtained and studied using modern experimental methods and approaches. This publication gave rise to a whole series of studies devoted to the particles obtained by thermal rearrangement of TMV. In these works, the formation conditions, physicochemical properties, and structure of these particles were studied in detail. These structures have been called "spherical particles" (SPs) [1,10,27-31]. The conducted studies have shown that it is possible to obtain TMV SPs of a given size by thermal treatment of native virus particles. The structurally modified TMV particles were studied using various methods (electron microscopy, nanoparticle tracking analysis, dynamic light scattering). These methods complemented each other, which made it possible to thoroughly describe the produced SPs. The structural transition of TMV virions into SPs occurs through intermediate forms. In the first stage, structures with swelling at one or both ends are formed, which are further transformed into spherical particles. TMV SPs are highly stable and uniform in shape. They do not form aggregates and do not change shape or size when stored for at least 6 months. The same has been observed for SPs subjected to sedimentation by centrifugation at 10,000g, reheating to 98°C followed by cooling, or repeated freezing to -20°C followed by thawing [10]. A number of studies have demonstrated the possibility of obtaining TMV SPs in preparative amounts [19,32]. An important feature of TMV SPs is that their size depends on the concentration of the original virus in the sample, which makes it possible to obtain SPs of a given diameter [10,32] (Fig. 1). Unlike TMV virions, TMV SPs do not contain RNA, and the protein isolated from them cannot be assembled into a regular structure. This indicates that thermal denaturation is irreversible in this case [10]. The protein of TMV SPs was studied by circular dichroism, Raman spectroscopy, and fluorescence spectroscopy. In terms of structural characteristics, it turned out to be significantly different from the CP in native TMV. The rearrangement of TMV virions into SPs is accompanied by an increase in particle density. At the same time, the CP subunits transition into a structure with a low content of α-helices and a significant proportion of β-structures. The appearance of cross-β-structures is evidenced, in particular, by the strong reaction of TMV SPs with thioflavin T [28]. Using the tritium planigraphy technique, the composition of amino acids on the surface of SPs was compared with that of TMV virions. It was found that the surface of SPs is much more hydrophobic, owing to the fact that the SPs are assembled from thermally denatured protein subunits. Thus, under conditions of thermal denaturation, the CP subunits of TMV acquire a specific conformation favorable for the assembly of stable SPs. The amino acid composition of the surface of SPs differs significantly from that of TMV virions [31].

[Table 1. Vaccine candidates based on TMV SPs (pathogen; valency; antigen):
Avian influenza virus; polyvalent; epitopes of HA and M2 proteins [41].
Rabies virus; monovalent; inactivated virion [42].
Rotavirus; monovalent; epitope of VP6 protein [40].
Puumala virus (hantavirus); monovalent; inactivated virion [38].
Coronavirus SARS-CoV-2; polyvalent; RBD domain, conserved fragments of the S2 subunit [7].
Bacillus anthracis bacterium; monovalent; protective antigen (PA) [35].]

As a result of thermal rearrangement, TMV SPs acquire properties different from those of virions.
In particular, they have unique adsorption capabilities. Protein molecules of various sizes and amino acid profiles (including antigens of human pathogens) can be adsorbed on the surface of TMV SPs. Moreover, SPs can adsorb polymers and native virus particles of small size and spherical shape [1,4,7,27,29,30,33-35]. The process of obtaining complexes of SPs with target agents is extremely simple: it involves a short incubation (10-15 min), during which the molecules or virions bind to the surface of SPs via non-covalent bonds. In a number of studies, our group demonstrated another feature of TMV SPs: an increase in the immune response to antigens of various natures and molecular weights adsorbed on the surface of SPs [1,4,7,34]. All these properties made it possible to consider TMV SPs as a biogenic protein platform and to develop approaches for their practical application.

Application of TMV SPs in the development of vaccines. Modern vaccinology is associated with the use of recombinant bacterial and viral antigens. The search for novel safe adjuvants is an important task. They are needed to improve the immunogenicity of antigens and to reduce the dose of active vaccine substances and the cost of production. Adjuvants not only enhance the immune response and its duration, but can also influence the type of response (humoral and/or cellular). Adjuvants stimulate the immune response to different antigens with different efficacy: for example, adjuvants based on aluminum compounds are ineffective in some cases [36-39]. The resulting compositions of TMV SPs in complex with antigens (vaccine candidates) are at various stages of development. In all cases, the adsorption of recombinant antigens and the antigenic specificity of the proteins in the SP compositions have already been demonstrated by fluorescence microscopy, immunoelectron microscopy, and/or immunofluorescence microscopy. In a recently published article, TMV SPs were used as a platform (depot) and adjuvant in the development of a vaccine candidate against COVID-19. It has been shown that TMV SPs significantly enhance the immunogenicity (total IgG titers) of the recombinant antigens and contribute to the stimulation of a balanced Th1/Th2 immune response. When immunizing Syrian hamsters, the candidate vaccine induced production of antibodies that neutralize SARS-CoV-2, suggesting that this approach is promising [7]. The adjuvant properties of TMV SPs have also been shown in other studies. One example is a rubella vaccine candidate. It is a composition consisting of the antigen (tetraepitope A of glycoprotein E1) in complex with TMV SPs. This complex provides a significant increase in antibody titer (~10-fold) compared to immunization with the antigen alone or with a mixture of the antigen and aluminum hydroxide adjuvant [4]. Currently, preclinical trials of the rubella vaccine candidate based on TMV SPs have been successfully completed. When immunizing animals, SPs significantly enhance the humoral immune response to the inactivated Puumala virus vaccine and have adjuvant activity in terms of the production of the cytokines IL-12 and IFN-γ [38]. In addition, it has been shown that SPs enhance the protective properties of the widely used non-adjuvanted inactivated rabies vaccine produced in Russia, Rabikan. The result was an increase in the protective activity of the Rabikan vaccine when used together with TMV SPs, comparable to the effect of incomplete Freund's adjuvant [42] (Table 1).
A number of studies have shown that when animals are immunized with vaccine candidates based on TMV SPs, most of the antibodies are produced against the target antigen, and not against the protein particle used as an adjuvant [4,7]. Another application of TMV SPs in vaccine development is the stabilization of antigens during storage. It is known that proteins, as components of drugs, can undergo spontaneous degradation during production and storage. Our study has shown the possibility of stabilizing the protective antigen, the main antigen of the anthrax pathogen, by adsorbing it on the surface of TMV SPs. The recombinant protective antigen (rPA) is the main component of almost all currently developed vaccines. The instability of this protein is mainly associated with the presence of proteolytic sites, as well as with spontaneous deamidation of asparagine residues, leading to degradation of rPA. The rate of deamidation is greatly increased when aluminum hydroxide is used as an adjuvant, which considerably reduces the protective properties of the vaccines during storage and clinical application [43,44]. Our studies have shown that a good way to achieve high storage stability is to introduce amino acid substitutions into rPA at the sites responsible for protein destabilization and then adsorb it on the surface of TMV SPs [8,35].

Application of TMV SPs as carriers for functionally active molecules and in antitumor therapy. Methods for covalent binding of functionally active molecules to TMV SPs, as well as methods for encapsulation of molecules during the thermal rearrangement that forms SPs, have been developed in a number of studies. A study by Dr. N. F. Steinmetz's research group showed that TMV SPs could be effectively used for bioconjugation via the functional side chains of the amino acid residues of lysine (amino group), aspartic and glutamic acids (carboxyl group), and cysteine (thiol group) on their surface [3]. Chemical modification of TMV SPs expands the possibilities of their use in the formation of complexes with functionally active compounds for various biomedical applications. In particular, the possibility of TMV SP conjugation was shown in an experiment in which the carboxyl groups of their glutamic acid residues were covalently bound to the chemotherapeutic drug doxorubicin. In the same study, the authors tested the possibility of non-covalent encapsulation of doxorubicin by simply adding it to the TMV virions during thermal rearrangement. Using two breast cancer cell lines, both approaches demonstrated effective drug delivery (cell uptake) and cancer cell killing. In another study by the same group of scientists, the properties of TMV virions and TMV SPs in stimulating an antitumor response were compared. B16F10 melanoma cells, a highly aggressive and poorly immunogenic tumor model widely used to study various drugs in immunotherapy, were used for this purpose [22]. Previously, the antitumor activity of various plant viruses, from icosahedral cowpea mosaic virus (CPMV) to filamentous potato virus X (PVX) and papaya mosaic virus [45-47], was shown using this model. An impressive result was obtained in an experimental mouse model with induced melanoma: intratumoral administration of a suspension of TMV or TMV SPs led to a decrease in the rate of tumor growth and an increase in the survival time of the animals. However, analysis of these TMV-based preparations showed lower efficacy in the experiments compared to CPMV.
The authors note that TMV SPs with an overly large diameter (about 250 nm) were used in that work. They suggest that even more effective protection can be achieved by using particles of smaller diameter [22]. Wu et al. [20] used β-cyclodextrin (β-CD) to chemically modify the surface of TMV virions. Upon thermal rearrangement of such particles, SPs with an average diameter of 88 nm were formed. SPs modified with β-CD were able to form a complex with adamantane (a chemical compound whose analogs are used in the treatment of various diseases) [20]. Previously, the same authors studied the possibility of assembling such complexes based on native TMV virions modified with β-CD. In addition to adamantane, these complexes could include folic acid, rhodamine B, doxorubicin, or polyethylene glycol [48]. Based on these data, the scientists suggested that the described approach for the assembly of supramolecular complexes is promising. Some other studies have demonstrated the possibility of immobilization of gold and silver nanoparticles on the surface of TMV SPs [21,49]. These results may have potential applications, for example, in the development of biosensors, as well as in antitumor and antibacterial therapy. Unfortunately, no new studies have been published by these scientific teams. Promising results on the immobilization of metal ions on the surface of TMV SPs were obtained by Dr. N. F. Steinmetz's research group. The scientists modified the inner surface of TMV virions with gadolinium(III) (Gd) chelate complexes, which are widely used in clinical practice as contrast agents for MRI. The TMV SPs-Gd complexes (170 nm in diameter) provided higher relaxivity compared to the free Gd chelate complexes or TMV-Gd. The relaxivity was comparable to that of such highly contrasting MRI agents as synthetic dendrimers conjugated with a Gd chelate complex [2,14]. To prolong the action of the complexes in the body, the scientists coated the TMV SPs-Gd complexes (75 nm in diameter) with biologically inert and stable silica (silicon dioxide). Mineralization of TMV SPs-Gd led to an almost threefold increase in relaxivity. In addition, such complexes were more rapidly absorbed by macrophages and were protected from antibody recognition. The authors concluded that silica-coated TMV SPs-Gd can be used in MRI diagnostics of diseases associated with inflammatory processes [18]. A large arsenal of approaches, previously tested and studied using virions of various plant viruses, is being used to explore possible applications of TMV SPs in biotechnology and medicine. These particles are able to enter cells and are also capable of chemical bioconjugation and non-covalent encapsulation of various therapeutically significant compounds. Due to these properties, TMV SPs could be an attractive platform for the delivery of drugs or contrast agents and could find application in medicine. The principal possible areas of application of TMV SPs are shown in Fig. 2.

Structurally modified particles from plant viruses with different morphology. Currently, TMV is represented in biotechnological and biomedical research more widely than other phytoviruses. However, some other plant viruses with morphology/structure different from TMV can also be structurally modified by physical treatment. Structurally modified spherical particles were obtained by thermal treatment of virions of plant viruses belonging to various taxonomic groups.
These include the rod-shaped virions of a representative of the tobamovirus genus, Dolichos enation mosaic virus (DEMV) (Sunn-hemp mosaic virus), and of the hordeiviruses, barley stripe mosaic virus (BSMV) [23], as well as the filamentous virions of a potyvirus, potato virus A (PVA) [50], and of the potexviruses alternanthera mosaic virus (AltMV) [12] and potato virus X (PVX) [11]. It should be noted that attempts at temperature-induced rearrangement of virions of plant viruses with icosahedral symmetry were unsuccessful. In particular, heat treatment of spherical cauliflower mosaic virus (CaMV) and bean mild mosaic virus (BMMV) virions did not result in structural modification, and no changes in virion morphology or size were detected [23]. Thermally induced rearrangement of virions of other rod-shaped or filamentous plant viruses into spherical particles occurs in the same way as in TMV: in two stages, through the formation of intermediate forms. The complete structural rearrangement of BSMV, DEMV, and AltMV virions occurs at 94°C, and of PVX at 90°C (Table 2). Differences in the temperatures and conditions of thermal rearrangement of the morphologically similar plant viruses (PVX and AltMV) indicate differences in the structure and stability of their virions. It was shown that AltMV SPs and PVX SPs do not contain RNA. Studies of the protein structure revealed differences in the secondary and tertiary structures of the CP in SPs and in native virions [11,12]. Similarly to TMV SPs, the structural transition of AltMV and PVX virions is accompanied by the appearance of a larger fraction of β-structures in the protein. The adsorption properties of the SPs and their ability to bind model antigens on their surface were analyzed. The obtained characteristics of PVX SPs and AltMV SPs indicate the potential for their use as platforms for the presentation of target antigens and the formation of functionally active complexes. Moreover, the presence of chemically reactive amino acids on the surface has been shown for AltMV SPs [12]. Studies of structurally modified spherical particles obtained from various representatives of plant viruses should be continued. It is very likely that the properties and features that distinguish them from TMV SPs will provide additional information about the structure of virions and will also be useful for creating biogenic platforms.

Safety of SPs application. Plant viruses are safe for mammals: they are not pathogenic for them and cannot replicate in human cells [51]. The possibility of biodegradation of the TMV SP protein was studied in vitro: TMV SPs undergo complete proteolysis in the presence of proteinase K, while native virions remain resistant to the enzyme [27]. The biodistribution profile of TMV SPs with a diameter of about 50 nm was studied in mice: the SPs predominantly accumulated in the spleen and liver 4 h after injection and were cleared from circulation by macrophages. Twenty-four hours after administration, SPs were not found in animal tissues. This gives them an advantage over synthetic materials of non-biological origin, which can persist in tissues for a long period and whose biodegradation is accompanied by the formation of by-products that induce oxidative stress as well as apoptosis [52]. TMV SPs show good compatibility with blood (they do not cause hemolysis or clotting) and tissues (no signs of inflammation, apoptosis, degeneration, or necrosis) [15].
Further studies were carried out on three animal models (mice, rats, and rabbits): intramuscular or intraperitoneal introduction of 300-nm diameter TMV SPs did not cause any toxic effects. The evaluation included preclinical studies of local tolerability after a single administration, as well as of local and systemic effects after repeated administration of SPs, such as physiological, histological, and hematological changes. Reproductive toxicity has also been studied [53,54]. Furthermore, the safety of SPs in the composition of the rubella vaccine candidate has been demonstrated in preclinical studies in three animal species (mice, rats, rabbits) [4]. Thus, TMV SPs are biologically compatible with animal cells, biodegradable, and safe.

[Table 2 fragment (column order not recoverable from the extracted text): rows for "Presence of cross-β-structures in SPs" (yes/n/a per virus), "Change in the surface amino acid composition" (yes/n/a per virus), and "Adsorption properties" (yes/n/a per virus). Note: n/a, not assessed. *Conditions for PVA SP formation differ from the conditions for obtaining SPs from other plant viruses.]

STRUCTURALLY MODIFIED BACTERIOPHAGE PARTICLES AND THEIR APPLICATIONS

The possibility of producing structurally modified spherical particles was demonstrated for the bacteriophage M13 (family Inoviridae, genus Inovirus). M13 virions have a filamentous shape with a length of 880 nm and a diameter of 6.5 nm [55]. The virion of the Inovirus M13 bacteriophage contains a single-stranded circular DNA (6407 nt), which is packaged within approximately 2700 copies of the p8 major coat protein [5,24,56]. The p8 protein consists of almost 100% α-helices [9]. The capsid also includes minor coat proteins: p3, p6, p7, and p9 [5]. It is interesting to note that in this case the structural transition occurs not under the effect of high temperature, but upon short-term treatment of a suspension of phage particles with an equal volume of chloroform at 24°C [9]. The possibility of structural modification has also been demonstrated for other representatives of the Inovirus genus: bacteriophages fd and f1 [24,57]. It has been shown that treatment with chloroform results in changes in the hydrophobic interactions between the p8 protein subunits, and 2/3 of the viral DNA is released through a pore formed by five subunits of the p3 minor coat protein. Thus, treatment of the virions of filamentous bacteriophages with chloroform leads to the formation of spherical particles with an asymmetric arrangement of the p8 and p3 proteins [5,9,57]. The size of the spherical particles formed as a result of the structural rearrangement of bacteriophage M13 (M13 SPs), according to transmission electron microscopy, is about 39 nm, and it remains stable for 12 h when the particles are incubated on ice or at room temperature. However, with longer incubation, the size of M13 SPs becomes more variable and may increase [24]. Upon the structural modification of M13, as in the case of the TMV coat protein, a decrease in the content of α-helices in the p8 protein is observed [9]. Contrasting M13 SPs with phosphotungstic acid revealed the presence of a cavity in these nanoparticles [24]. It should be noted that TMV SPs, unlike M13 SPs, are not hollow [32]. Lowering the chloroform treatment temperature to 2°C made it possible to observe the formation of an intermediate form during the structural transition of filamentous virions into M13 SPs.
The intermediate form is a rod-shaped particle, 250 nm long and 15 nm wide, which has a hollow central channel and a broadening at one end; growth of this broadening subsequently leads to the formation of the spherical particle [9]. The possibility of structural modification of filamentous bacteriophages under the effect of chloroform was demonstrated in the early 1980s [9]. However, only in 2007 did Olsen et al. [58] highlight the possibility of using spheroids from filamentous bacteriophages of the Inoviridae family as components of biosensors for Salmonella typhimurium. For this purpose, filamentous bacteriophages with affinity for S. typhimurium were obtained by phage display. After treatment with chloroform, these phages turned into spherical particles. The resulting SPs were subsequently used to create an affinity monolayer for the biosensor. The work described above, as well as a number of publications on TMV SPs and the demonstration of their biotechnological potential, aroused interest in SPs based on filamentous bacteriophages [1,2,10,58]. A group of scientists from the University of California has published a number of papers on the production and application of M13 SPs with affinity for gold [6,55,59]. The intermediate forms and the M13 SPs themselves were covered with gold, and in both cases the ability of the particles to act as a photothermally induced antibacterial agent was shown using Escherichia coli cells as an example. Interestingly, the sizes of the intermediate forms (161 ± 33 nm) and SPs (60 nm) carrying the gold-binding peptide in the p8 protein differed from those of the SPs and intermediate forms of native M13. It should be noted that the minor capsid protein p3 in the composition of the intermediate forms and M13 SPs retains its ability to bind to E. coli receptors. Due to this, these particles can be considered a targeted antibacterial agent capable of inducing photothermal lysis of the target bacterial cells. The authors of this study suggest that, by modifying the p3 protein, it is possible to expand the spectrum of bacteria to which the gold-coated, structurally modified M13-based particles can bind [6]. In another work, the same scientific group simultaneously modified the p8 protein (with a gold-binding peptide) and the p3 protein (with a zinc-binding peptide) of bacteriophage M13. Subsequent treatment with chloroform led to the formation of bifunctional M13 SPs. Next, Au and ZnS were synthesized on the surface of the bifunctional M13 SPs, and thus hybrid nanostructures asymmetrically coated with gold and zinc sulfide were obtained [5]. It was shown previously that Au/ZnS nanoparticles can be considered regenerable catalysts for photocatalytic reactions [60]. Taking into account the above results, it can be assumed that SPs obtained by treating filamentous bacteriophages of the Inoviridae family can be used for the development of bactericidal agents and biosensors for diagnostics, and also as a scaffold for the design of hybrid nanostructures (Fig. 3).

CONCLUSIONS

Half a century after the discovery of the structural rearrangement of rod-shaped TMV virions into SPs via thermal treatment, the intensive development of biomedicine aroused interest in a more detailed study of this phenomenon. This has led to numerous publications on the properties and application of structurally modified viral particles.
Studying the conditions and features of structural modification of morphologically similar virus particles could provide important fundamental knowledge about the organization, structure, and stability of virions. Most of the studies devoted to the characterization of structurally modified plant virus particles, and to the search for their potential areas of application, were carried out with TMV SPs. The unique adsorption properties of TMV SPs and their effective adjuvant properties, as well as the wide possibilities for surface modification of TMV SPs, make them a promising platform for various biotechnologies. They are biocompatible, safe, and biodegradable, which is a clear advantage for possible use of SPs in medicine. TMV SPs have antitumor activity, could become a platform/adjuvant for developing vaccine candidates against viral and bacterial infections, and could also find application in diagnostics and microelectronics. However, studies on the possible applications of structurally modified viral particles should not be limited only to TMV SPs. It is possible that similar particles obtained from the virions of other plant viruses could be used to obtain biogenic platforms with new properties that differ from those of TMV SPs. The ease of isolation and purification of plant viruses, the high potential for scaling up their commercial production, and their safety for humans and animals are important advantages of plant virus SPs over other biopolymers and synthetic nanomaterials. In addition to structurally modified particles of plant viruses, attention should be paid to the work on generating SPs from filamentous bacteriophages. A number of areas of their possible application have been suggested. Thus, SPs based on M13 are of particular interest as photothermal bactericidal agents and as a scaffold/carrier for forming complexes with various metal compounds. Efficient methods have already been reported for obtaining bacteriophage M13 in the quantities necessary for practical use and with a high degree of purity [61]. This also makes it possible to consider it a promising object for future developments. It should be noted that at present the range of practical applications of structurally modified viral particles correlates with the approaches proposed earlier for virions and virus-like particles of plant viruses [62][63][64]. Studies of the native virions of plant viruses and bacteriophages played a significant role in the work with the structurally modified particles. It may be expected that, due to their unique characteristics, these particles could also find application in new areas of biotechnology and medicine. In our opinion, the presented data allow an unambiguous conclusion that the study of structurally modified viral particles is a very promising area of research. The results of such studies may be of practical importance for modern virology and biotechnology.
2022-06-18T15:23:18.415Z
2022-06-01T00:00:00.000
{ "year": 2022, "sha1": "b07d3479c09d5666945a83ed8186fc937ac43092", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "0a782215735acbce959369d92500de886fb8c377", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
237099921
pes2o/s2orc
v3-fos-license
The rate of ecosystem acclimation is the dominant uncertainty in long-term projections of an ecosystem service Rapid climate change may exceed ecosystems’ capacity to respond through processes including phenotypic plasticity, compositional turnover and evolutionary adaption. However, research predicting impacts of climate change on ecosystem services rarely consider this rate of “ecosystem acclimation.” Combining statistical models fit to historical climate data and remotely-sensed estimates of herbaceous productivity with an ensemble of climate models, we demonstrate that assumptions concerning acclimation rates are a dominant source of uncertainty: models assuming minimal acclimation project widespread decreases in forage production in the western US by 2100, while models assuming that acclimation keeps pace with climate change project widespread forage increases. Uncertainty related to ecosystem acclimation is larger than uncertainties from variation among climate models or emissions pathways. A better understanding of ecosystem acclimation is essential to improve long-term forecasts of ecosystem services, and shows that management to facilitate ecosystem acclimation may be necessary to maintain ecosystem services at historical baselines. Introduction Anthropogenic climate change is disrupting relationships between climate and ecosystem structure and function that have developed over millennia (Whittaker 1975;Svenning & Sandel 2013). Ecosystems are responding to this disruption through mechanisms spanning a wide range of timescales (Smith et al. 2009). Individual physiological rates can change on scales of minutes, population sizes typically change over years to decades, and species turnover, evolutionary adaptation, and alteration of disturbance regimes and biogeochemical cycles may play out over decades or centuries (Fig. 1). While the term "acclimation" has traditionally referred to phenotypic plasticity, here we consider "ecosystem acclimation" as the net effect of all the processes by which ecosystems respond to climate change. Given the long timescales of many of these processes, ecosystem acclimation will likely lag behind the pace of climate change, limiting the rate of ecosystem acclimation and creating disequilibria between ecosystem structure and climate (Webb 1986;Svenning & Sandel 2013;Blonder et al. 2015;Williams et al. 2021). The consequences of such disequilibria for ecosystem functioning remain unknown. Our lack of understanding about the timescales and potential impacts of acclimation lags ( Fig. 1) creates great uncertainty in long-term projections of climate change impacts on ecosystem services (Carpenter 2002;Luo et al. 2011;Charney et al. 2016;Runting et al. 2017). While the fast components of ecosystem acclimation, such as physiology, can be studied using experiments and observations, we know much less about the slow components. Experiments typically lack the spatial or temporal extent to observe species turnover or evolutionary adaptation, and may only capture transient behavior (Collins et al. 2012;Reich et al. 2018). Paleoecological observations provide much of what we know about long-term ecological responses to environmental change (Jackson & Overpeck 2000), but the data have critical gaps and current rates and magnitudes of warming exceed those observed in the past (Kaufman et al. 2020). 
Without relevant empirical data about the slow components of acclimation, most ecological models assume either minimal ecosystem acclimation or rates of acclimation that perfectly track climate change (Blonder et al. 2017). Either way, the sensitivity of results to these assumptions is rarely tested. Although uncertainty about the rate of ecosystem acclimation has been recognized (Adler et al. 2020;Luo & Schuur 2020;Rollinson et al. 2021), to our knowledge previous studies have not estimated its magnitude or consequences for ecosystem services. Here, we quantify how varying assumptions about the rate of ecosystem acclimation translates into uncertainty in long-term projections of an economically important ecosystem service, forage production. We compared this uncertainty to those from more commonly studied sources, such as variation in greenhouse gas emission scenarios (referred to as relative concentration pathways or RCPs), general circulation models (GCMs), spatial variation, and model parameter error. Our approach exploits differences in how forage production responds to spatial and temporal variation in climate (Lauenroth & Sala 1992). Spatial relationships between forage production and climate capture an advanced stage of ecosystem acclimation: species composition, disturbance regimes, and resource pools have had centuries to equilibrate to a relatively stationary climate. When we project future production based on spatial gradients, we assume that even the slow components of ecosystem acclimation instantaneously track the rate of climate change, representing a scenario of rapid ecosystem acclimation (Fig. 1). In contrast, when we study the relationship between historical, interannual variation in weather and interannual variation in production at one location, we hold slow processes such as changes in species composition and resource supply relatively constant (Briggs & Knapp 1995). Projections of future forage production based on this time-series approach represent a scenario of minimal ecosystem acclimation driven only by mechanisms operating on fine timescales (Fig. 1). Together, the spatial gradient, "rapid acclimation" scenario and the time-series, "minimal acclimation" scenario provide objective bounds on the range of uncertainty in future forage production resulting from different assumptions about the rate of ecosystem acclimation. We implemented this approach by fitting statistical models to 30 years of remotelysensed estimates of forage production (Robinson et al. 2019) for six rangeland ecoregions covering the western U.S. (Fig. S1). These data allowed us to simultaneously describe how productivity responds to spatial gradients in climate and temporal variation in weather by decomposing precipitation and temperature observations into spatial and temporal components. The spatial component is represented by variation in mean precipitation and temperature across sites (pixels), while the temporal component is represented by annual anomalies from the mean precipitation and temperature at each site. Because our statistical model contained both spatial and temporal relationships between climate and productivity, we were able to use either the timeseries (minimal acclimation) or spatial gradient (rapid acclimation) approach to project effects of climate change on forage production (Fig. S2). 
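To make the contrast between the two projection approaches concrete, the short sketch below shows, for a single hypothetical pixel, how precipitation can be decomposed into a spatial mean and annual anomalies, and how the choice of reference mean for the future period implements either the time-series (minimal acclimation) or the spatial gradient (rapid acclimation) projection. The series, coefficients, and function names are invented placeholders for illustration, not the study's data or fitted model.

import numpy as np

# Hypothetical 30-year historical and 40-year future precipitation series (cm) for one pixel.
rng = np.random.default_rng(0)
hist_precip = 30 + 5 * rng.standard_normal(30)
fut_precip = 36 + 5 * rng.standard_normal(40)   # assume a wetter future

# Decompose: a spatial mean (varies across pixels only) and annual anomalies (vary in time only).
hist_mean = hist_precip.mean()
hist_anom = hist_precip - hist_mean

# Illustrative response with different sensitivities to means vs. anomalies,
# loosely mimicking the pattern reported below (means matter more than anomalies).
def forage(mean_p, anom_p, b_mean=2.0, b_anom=0.5, intercept=20.0):
    return intercept + b_mean * mean_p + b_anom * anom_p

# Minimal acclimation (time-series): anomalies referenced to the HISTORICAL mean,
# so the climate shift is carried entirely by the anomaly term.
minimal = forage(hist_mean, fut_precip - hist_mean).mean()

# Rapid acclimation (spatial gradient): anomalies referenced to the FUTURE mean,
# so the climate shift is carried by the mean term.
rapid = forage(fut_precip.mean(), fut_precip - fut_precip.mean()).mean()

baseline = forage(hist_mean, hist_anom).mean()
print(f"historical {baseline:.1f}, minimal acclimation {minimal:.1f}, rapid acclimation {rapid:.1f}")

Because the toy coefficient on the mean is larger than the coefficient on the anomaly, the same future climate yields a higher projection under the rapid acclimation convention, which is the qualitative behavior the Results describe for precipitation.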
Rangeland ecoregions We studied five expansive rangeland ecoregions of the western United States: the Northern Mixed Prairies (NM), the Shortgrass Steppe (SGS), the Cold Deserts (CD), the California Annual grasslands (CA), and the Hot Deserts (a combination of the Mojave, Sonoran, and Chihuahuan Deserts). These ecoregions were defined by potential natural vegetation (Kuchler 1964). A previous assessment of functional type cover in each of these ecoregions was consistent with expectations based on potential natural vegetation (Felton et al. 2021). We split the Hot Deserts ecoregion into two to account for differences in precipitation seasonality. We assigned sites receiving > 30% of total annual precipitation during the summer months (July-September) to a Hot Desert summer precipitation (HDS) ecoregion, and assigned sites receiving less than 30% of annual precipitation during summer to a Hot Desert winter precipitation ecoregion (HDW) (Fig. S1). Forage production data We leveraged a remotely-sensed dataset of annual net primary productivity data developed specifically for rangeland ecosystems of the western US, spanning the years 1986-2015 (Robinson et al. 2019). This data product partitions annual primary production (defined as net carbon inputs in gm -2 ) into functional groups of shrubs, trees, annual forbs/grasses, and perennial forbs and grasses. We defined annual forage production, our response variable, as the sum of productivity from the herbaceous functional groups: annual and perennial grasses and forbs. We aggregated these data from their native spatial resolution (30 m) to 1/16 th degree (approximately 6.9 km) to match the spatial resolution of our climate data. We applied a multi-step filtering process to remove non-rangeland pixels, such as irrigated agriculture (Felton et al. 2021). We removed 6.9 km pixels in which less than 50% of the 30 m LANDSAT subpixels were rangeland. We also removed pixels where the average net primary productivity was below 10 gm -2 . We then sequentially removed outlier pixels for each ecoregion based on distributions of multiple metrics of plant-water relationships across each ecoregion. These metrics, presented in order, were 1) mean precipitation use efficiency (mean net primary productivity/mean annual precipitation), the slope of annual productivity regressed on annual precipitation, and 3) annual precipitation use efficiency. Outliers were defined as beyond three standard deviations from the mean. After filtering, we had 774 pixels in California Annual grasslands, 13,869 in Cold Desert, 6,446 in Hot Deserts summer precipitation, 1,573 in Hot Deserts winter precipitation, 13,070 in the Northern Mixed Prairie, and 4,925 in the Shortgrass Steppe. Historical climate data and model selection For historical climate data, we used a gridded hydrometeorological data product with daily time steps of temperature and precipitation (Livneh et al. 2013(Livneh et al. , 2015. The native resolution of these data are ~6 km, which we aggregated to 6.9 km to match the resolution of soil moisture data used in our pre-model fitting. These data encompass years 1915-2015, but we focused on years 1986-2015 to correspond with the temporal coverage of the forage production data. Explaining an annual ecological response, like production, using daily weather data poses a difficult model selection problem because of the virtually limitless choices about how to aggregate the weather data (Tredennick et al. 2021). 
Therefore, we constrained our covariate selection approach in two ways: First, in each candidate model we considered only one variable representing moisture stress and one variable representing heat stress, along with their interactions as explained below. This constraint guarded against collinearity and overfitting. In addition to precipitation (moisture stress) and temperature (heat stress), we considered additional covariates simulated by the SOILWAT ecohydrological model (Bradford et al. 2014; & Andrews, C. A. 2018), using the daily weather data (Livneh et al. 2013) as input for each location (pixel). These included moisture-related variables such as soil water availability, volumetric water content, and transpiration, and variables related to heat stress such as potential evapotranspiration (PET) and the ratio of soil water availability to PET. Preliminary analyses using linear mixed effects models (not shown) indicated that precipitation and temperature explained more variation in observed productivity than pairs of variables derived from SOILWAT. For all subsequent models reported here, the raw covariates were precipitation and temperature. Second, we constrained variable selection using our a priori knowledge of each ecoregion to define the most relevant climate window, rather than exhaustively searching for the optimal temporal window over which to aggregate the moisture stress and heat stress variables. For the California annual grasslands, the cold deserts, and the hot deserts with winter precipitation, we summed precipitation and averaged temperature over fall, winter, and spring (October-June). For the northern mixed prairies, shortgrass steppe, and hot deserts receiving summer precipitation, we aggregated over spring and summer (April-September). While the raw covariates for each ecoregion were total precipitation and mean temperature aggregated over the relevant seasons, we decomposed each of these raw covariates into spatial means and temporal (annual) anomalies and included interactions among them (Kleinhesselink & Adler 2018; Felton et al. 2021). Specifically, we decomposed P_{i,t}, the precipitation received in pixel i and year t, into the mean precipitation in pixel i, \bar{P}_i, and the annual precipitation anomaly in pixel i and year t, \delta P_{i,t}. Note that \bar{P} varies only across space while \delta P varies only in time. Similarly, we define \bar{T}_i as the mean temperature in pixel i and \delta T_{i,t} as the annual temperature anomaly. We then designed interactions among these derived covariates to allow the effects of annual anomalies to vary in space, as shown in Table S1. (Note that in our computer code, we refer to the precipitation and temperature anomalies as deviations.) Model fitting We developed spatiotemporal hierarchical models for each ecoregion to infer the response of forage production (herbaceous NPP) to past climate conditions, and then used this model to predict changes in forage in response to future climate change. Observed forage production, y, in each pixel, i, and year, t, was modeled as normally distributed with mean \mu_{i,t} and process variance (\sigma^{proc})^2: y_{i,t} \sim \mathrm{Normal}(\mu_{i,t}, (\sigma^{proc})^2). The process model for the mean is a linear model of covariates x_{i,t} and ecoregion-specific parameters b, along with additive, pixel-specific spatial (\eta_i) and year-specific temporal (\gamma_t) random effects: \mu_{i,t} = x_{i,t} b + \eta_i + \gamma_t. Year random effects were fit with a non-centered parameterization (more computationally efficient in Stan) using a standard normal distribution, rescaled with standard deviation \sigma^{\gamma}: \gamma_t = \sigma^{\gamma} \gamma'_t, with \gamma'_t \sim \mathrm{Normal}(0, 1).
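The pieces of the hierarchical model introduced so far (the observation model, the linear process model, and the non-centered year random effects; the knot-based spatial random effects are described next) can be illustrated with a small generative simulation. The dimensions and parameter values below are arbitrary placeholders, not estimates from the paper.

import numpy as np

rng = np.random.default_rng(1)
n_pix, n_yr, n_cov = 50, 30, 4                   # arbitrary sizes for illustration

x = rng.standard_normal((n_pix, n_yr, n_cov))    # covariates (means, anomalies, interactions)
b = np.array([2.0, 0.5, -0.2, -1.0])             # ecoregion-specific coefficients (placeholders)

sigma_gamma, sigma_proc = 0.8, 1.5

# Non-centered year random effects: draw standard-normal gamma', then rescale by sigma_gamma.
gamma_raw = rng.standard_normal(n_yr)
gamma = sigma_gamma * gamma_raw

eta = np.zeros(n_pix)                            # pixel-level spatial random effects (knot-based; described next)

mu = x @ b + eta[:, None] + gamma[None, :]       # process model for the mean
y = rng.normal(mu, sigma_proc)                   # observation model

print(y.shape)                                   # (50, 30): simulated forage production per pixel and year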
Spatial random effects were fit using a spatial dimension reduction approach following the general methods of Tredennick et al. (2016). Dimension reduction approaches help overcome the computational burden of modeling residual autocorrelation structures by reducing the number of pixels to a smaller number of knots that summarize the landscape of spatially autocorrelated processes not accounted for by covariates. Spatial random effects were modeled as \eta = K \alpha, where \eta is a vector of spatial random effects specific to each pixel i (i = 1...I), \alpha is a vector of spatial random effects specific to each knot s (s = 1...S), and K is an I by S matrix that describes how the knot-based random effects are translated into random effects for each pixel. Each element of K, K_{i,s}, is a normalized, distance-weighted function of d_{i,s}, the distance from pixel i to knot s, with distance-decay parameter \phi. Knot-based random effects were fit with a non-centered parameterization using a standard normal distribution, rescaled with standard deviation \sigma^{\alpha}: \alpha_s = \sigma^{\alpha} \alpha'_s, with \alpha'_s \sim \mathrm{Normal}(0, 1). Following the approach of Tredennick et al. (2016), \phi and the knot spacing were determined empirically prior to model fitting to improve computation, based on the observed scale of spatial autocorrelation. To do this, we fit a version of our model including only fixed effects, using the maximum likelihood solution for b. We then calculated model residuals between our fixed-effects-only model and observed NPP values, and visualized the scale of autocorrelation using empirical semi-variograms (Fig. S3). The spacing of knots was determined visually from the approximate sill distance of each ecoregion, and the value of \phi is one-third the sill distance. Knots were placed in a regular grid across each ecoregion with spacing based on sill distance (Fig. S4). To limit overfitting, we regularized covariates with Bayesian ridge regression. This was done by fitting seven different models in each ecoregion to a subset of the data (randomly withholding 5 years, ~17% of the data) with increasingly wider zero-centered priors for the covariate parameters, b \sim \mathrm{Normal}(0, \sigma_b^2), and then testing the predictive ability of each prior specification with the withheld data (Hooten & Hefley 2019). The prior width \sigma_b that minimized the average mean squared predictive error (MSPE) in each ecoregion was then used to fit the final model to the full dataset (Fig. S5). For each ecoregion, this final model was used to project future forage production. Parameter estimates are shown in Table S2, and goodness-of-fit measures are shown in Table S3. Future climate data We focused our projections on the late-century time period, defined as 2061-2100. To represent the range of uncertainty in future climate data, daily climate data from eleven different general circulation models (GCMs) under two different representative concentration pathways (RCPs) were used. Forage projections and uncertainty and sensitivity analyses We combined the parameter estimates of the statistical model with GCM simulations of late-century precipitation and temperature to project future forage production. The first step was decomposing future precipitation and temperature into means and annual anomalies for each location (pixel) (Fig. S2). One option is to calculate future annual anomalies in precipitation and temperature based on the historical means at each location.
This implements a time-series approach: we assume that climate means will not change, but there will be many hotter than normal years in the future, and the effects of these anomalous years are captured by the model's temporal anomaly terms. The second option is to calculate future annual anomalies based on the future means at each location. This implements the spatial gradient approach: we assume relatively small changes in the annual anomalies but potentially large changes in mean precipitation and temperature, and the effects of these shifts in the means are captured in the model's spatial terms. Although most of our analyses focus on the time-series (historical means as the reference) and spatial gradient (future means as the reference) approaches, we also explored a third option in which we calculate annual anomalies based on GCM projections for the mid-century period . This approach represents an intermediate rate of ecosystem acclimation. After calculating the climate covariates, we projected future productivity. For each location and each year in the late century period, we predicted productivity for 22 climate scenarios (11 GCMs × 2 RCPs), 500 sets of model parameters drawn from the MCMC chains, and our two methods (time-series and spatial gradient). We then averaged predicted productivity across years within each pixel, preserving variation among all combinations of locations, parameter draws, GCMs, RCPs, and projection method. Because we extrapolated linear relationships, for some locations our models project negative values of forage production, resulting in declines greater than 100%. We chose to report these biologically impossible values, rather than truncating them at zero, to avoid confusion about our approach. The uncertainty analysis featured in the main text methods focuses on the future change in mean forage production relative to the historical period. We calculated change in a given pixel as the difference between the mean annual production in the future period and mean annual production from the historical period, divided by the historical mean. Note that the historical mean is not based on observed data, but rather on predictions from the statistical model, ensuring that biases in the model do not influence the estimated change in productivity. Dividing by the historical mean led to some very large values in pixels where the historical mean was close to zero. Our uncertainty analysis was sensitive to these outliers, so we removed sites with predicted historical mean production less than 10 g m -2 (26 out of 13,839 pixels in the Cold Deserts, 18 of 6,446 pixels in the Hot Deserts with summer precipitation, and 14 of 1,573 pixels in the Hot Deserts with winter precipitation). Because our predictions for the two periods assume the same model process error, process error cannot contribute to change in productivity and we can ignore it. Similarly, we can ignore random year effects, which we assumed have the same variance in the future as in the historical period. To perform the calculations, we loaded all the projected changes in forage production into an array with the following dimensions: 1) sites, 2) parameters draws from the posterior distribution (500), 3) GCMs (11), 4) RCPs (2), 5) rate of acclimation (2). We then analyzed variation in this array using two approaches. 
First, we took a sums-of-squares approach, quantifying the proportion of the total sum-of-squares contributed by each of the five main effects (the dimensions of the array), as well as the four two-way interactions involving rate of acclimation. Second, we quantified the standard deviation of the marginal means of each main effect. For example, we calculated the mean change in productivity at each site, averaging over parameters, GCMs, RCPs, and rate of acclimation, then computed the standard deviation among site means as a measure of the variation among sites in projected change in forage production. We repeated this analysis of standard deviation for variation in absolute forage production, rather than variation in the change in forage production. For this case, we also included variation due to process error and random year effects, as estimated by the statistical models. Note that the results of both the sum-of-squares partitioning and the analysis of standard deviation are affected by the number of levels of each main factor (e.g., sites, parameters, GCMs, RCPs, projection methods). Because we consider only the two extremes for the rate of ecosystem acclimation, our estimate of the relative contribution from this source represents an upper bound. New information on rates of acclimation could reduce this uncertainty either by moderating the extremes, or by working with objective distributions of rates of acclimation (as we currently do with parameter error). We also conducted a sensitivity analysis to help us understand the projections. For each ecoregion, we used the point estimates of model parameters to quantify the expected change in production at each pixel caused by 1) adding 1 cm to all mean precipitation values, 2) adding 1 cm to all annual precipitation anomalies, 3) adding 1°C to all mean temperature values, and 4) adding 1°C to all temperature anomalies. These perturbations fall well within the range of expected future changes in precipitation and temperature (Fig. S6). We recalculated all interaction terms after each perturbation. We report the mean change across locations for each of these four perturbations. We performed a final set of projections to represent an intermediate rate of ecosystem acclimation, using GCM output for mid-century as the reference for calculating each pixel's climate means and annual anomalies for the late-century period. We did not repeat the full uncertainty partitioning, but simply compared the averaged projected change in forage production for each pixel to the average change projected by the time-series (historical climate reference period) and spatial gradient (late-century climate reference period) approaches. Results The minimal acclimation and rapid acclimation scenarios led to dramatically different projections of forage production by late century (2061-2100) (Fig. 2). Models assuming minimal acclimation projected widespread declines in forage production across western US rangelands. The projected declines were greatest in the shortgrass steppe and the hot desert ecoregions. In contrast, models assuming rapid acclimation projected widespread increases in forage production or, at worst, modest decreases. Across 84% of the Cold Deserts and 89% of the Northern Mixed Prairies, the direction of projected change switched from declines under minimal acclimation to increases under rapid acclimation. A sensitivity analysis clarified why differences emerged between the minimal and rapid acclimation scenarios: positive annual temperature anomalies decreased forage production, but spatial variation in mean annual temperature had weak effects (Fig. 3).
Essentially, production decreases during hot years but not necessarily in hot locations in all regions except California, which has a cool, winter-spring growing season. Because the minimal acclimation approach represents future temperature increases as temporal anomalies, it leads to strong negative effects of increasing temperatures, while the rapid acclimation approach represents future temperature increase as shifts in mean temperature, which have little impact on production. Temperature sensitivities estimated from field-based estimates of production at five sites indicate that these results are unlikely to be an artifact of the algorithm used to generate the herbaceous production product we used (SI Supporting Methods and Table S4). Variation in mean annual precipitation had strong positive effects on production in all ecoregions, while precipitation anomalies had much weaker effects (Fig. 3). For example, predicted increases in precipitation (Fig. S6) in the Cold Deserts and Northern Mixed Prairies had strong positive effects on the rapid acclimation projections, whereas increasingly positive precipitation anomalies had little effect on projections assuming minimal acclimation. The rate of ecosystem acclimation was the dominant source of uncertainty in projections for five of six ecoregions, contributing more than any other single source of uncertainty we considered (Fig. 4). The exception was the Hot Desert summer precipitation ecoregion, where spatial variation contributed most to uncertainty. The main effect of acclimation rate plus its twoway interactions with the other sources of uncertainty contributed from 44% (Northern Mixed Prairies) to 54% of the total variation (Hot Deserts with summer precipitation) (Table S5). Figs. S7 and S8 further emphasize that variation in projected change among GCMs and RCPs is small relative to the difference between the minimal and rapid acclimation scenarios. Could ecosystem acclimation help maintain historical levels of forage production? In the Hot Deserts and Shortgrass Steppe, where GCMs predict decreasing precipitation, our models project production declines even under rapid acclimation. In California, they predict production increases even under minimal ecosystem acclimation. For many locations in the Cold Deserts and Northern Mixed Prairies, however, the models predict a switch from decreased production under minimal acclimation to increased production under rapid acclimation. We also considered an intermediate rate of acclimation by using mid-century GCM projections (2021-2060) as the reference for calculating late-century annual anomalies. Median changes in production under this intermediate acclimation scenario were close to zero in the Cold Deserts but were still negative in the Northern Mixed Prairie (Fig. S9). Discussion Our most important result is that much of the uncertainty in future forage production comes from the unknown rate of ecosystem acclimation. In fact, uncertainty about acclimation was much larger than uncertainties about future climate conditions. While studies of ecological impacts of climate change often include projections from an ensemble of climate models, or consider multiple emission scenarios, they rarely consider ecosystem acclimation as a potential source of uncertainty because assumptions about rates of acclimation are typically implicit in ecological models (Blonder et al. 2017). As a result, this source of uncertainty is often overlooked entirely. 
A better understanding of ecosystem acclimation could greatly reduce uncertainty in long-term projections of ecosystem services, independent of progress in climate science. We do not need precision; even estimates of acclimation rates and their ecosystem impacts (Fig, 1) to the nearest order of magnitude could greatly reduce the uncertainty of projections such as ours by identifying the appropriate climate reference period, or by informing a model weighting approach (Adler et al. 2020). Comparing simple models fit to time-series and spatial gradient data (Wilcox et al. 2016;Amburgey et al. 2018;Kleinhesselink & Adler 2018;Klesse et al. 2020) is a valuable first step in quantifying uncertainty related to ecosystem acclimation. Our sensitivity analysis (Fig. 3) suggests fundamentally different patterns of acclimation to future changes in temperature and precipitation. In most ecoregions, production was far more sensitive to annual temperature anomalies than to spatial variation in mean temperatures. Because the minimal acclimation scenario estimates temperature effects based on annual anomalies, it projected much stronger negative impacts of future temperature increases compared to the rapid acclimation scenario, in which temperature effects reflect spatial gradients in climate and production. The differing effects of temperature means and anomalies are consistent with a recent study of tree growth (Klesse et al. 2020) and suggest that at any given location an unusually hot year can limit production either by creating heat stress (Breshears et al. 2021) or exacerbating water stress (Hoover et al. 2014;Williams et al. 2020). In contrast, ecosystems have adapted to spatial variation in mean temperatures through a variety of mechanisms, including shifts in phenology (Rice et al. 1992) or species composition (Edwards & Still 2008). We found the opposite pattern for precipitation: production was much more sensitive to spatial variation in mean precipitation than to temporal variation in annual precipitation anomalies, consistent with previous work (Huxman et al. 2004;Felton et al. 2021). Thus, the positive effects of future increases in precipitation, predicted for the Northern Mixed Prairies, have a much stronger influence under the rapid acclimation approach than for the minimal acclimation scenario. Overall, these results suggest that acclimation processes may help dryland ecosystems maintain production in the face of long-term temperature increases (e.g., Liu et al. 2018), but have less potential to limit impacts of changes in mean precipitation. Could process-based models, an alternative to the phenomenological statistical models we built, accurately capture the mechanisms driving ecosystem acclimation? While possible in principle, we are skeptical that current process-based approaches could solve this problem. Process-based ecosystem models are often forced to make the same choice between time-series and spatial gradient approaches: studies focused on reproducing ecosystem dynamics typically tune their parameters using longitudinal data (e.g., Rollinson et al. 2017) while studies seeking to reproduce the broad-scale distribution of biomes tune their parameters with spatial data (e.g., Fisher et al. 2015). This choice represents an unrecognized assumption about the rate of acclimation. 
Furthermore, it may be computationally infeasible to simulate these models over the long temporal and broad spatial scales of our analysis, especially with the taxonomic or functional resolution necessary to capture the compositional changes central to ecosystem acclimation. The second important result from our analysis is that rapid acclimation may be required to maintain ecosystem services at historical levels in most western US rangelands. Unfortunately, such rapid acclimation may be unlikely in dryland ecosystems. A decade or more of experimental precipitation reduction or addition appears necessary to significantly alter abundances of dominant plant species (Evans et al. 2011;Collins et al. 2012;Estiarte et al. 2016). Paleoecological studies document dramatic compositional changes in the recent past, but cannot provide sub-century resolution (Mottl et al. 2021). Changes in soil structure and soil resources will also be important, and likely even slower to change (Burke et al. 1995). Furthermore, our rapid acclimation projections may be unrealistically optimistic because they ignore the potential for transient dynamics involving dispersal limitation (Urban et al. 2012) or novel, alternative stable states (Carpenter 2002;Blonder et al. 2017) to prevent ecosystem functioning from perfectly tracking a rapidly changing climate. An improved understanding of ecosystem acclimation rates and processes would be extremely valuable for climate adaptation management. For example, the "Resist-Accept-Direct" (RAD) framework Thompson et al. 2021;Schuurman et al. 2022) provides managers with three management pathways: Resist the trajectory of change, Accept it, or Direct it by actively steering the ecosystem toward a preferred, new configuration. However, managers often lack robust scientific guidance to inform these choices. Projections like ours could help fill this gap. For the California grasslands, both the minimal and rapid acclimation scenarios projected forage increase for most locations, making Accept an easy choice. In every other ecoregion, a Direct strategy to facilitate ecosystem acclimation would help limit production declines, as in the Shortgrass Steppe and Hot Deserts, or even prevent production declines, as in much of the Cold Desert and Northern Mixed Prairie Ecosystems. Of course, forage production is just one ecosystem service, and RAD decisions would consider many services, including biodiversity. In practice, tension may emerge between conserving historical species composition and maintaining ecosystem functioning by promoting compositional change (Fisichelli et al. 2016). Currently, the science is not mature enough to guide climate adaptation management. New research to better understand the rates and impacts of acclimation processes (Fig. 1) is needed to help managers effectively facilitate ecosystem acclimation. Such research would also reduce uncertainty in long-term projections of ecosystem services and advance our understanding of processes linking rapid environmental change, ecosystem structure, and ecosystem function. trade, firm, or product name is for descriptive purposes only and does not imply endorsement by the U.S. Government. We thank Brady Allred for helping us access the productivity data, and Jonathan Levine, Janneke HilleRisLambers, and Alan Knapp for comments that improved an earlier version of the manuscript. Figure 1. "Ecosystem acclimation" refers to the many processes by which ecosystems respond to climate change. 
The fastest processes, such as physiological adjustment or phenotypic plasticity, may keep pace with the rapidly changing climate. Slower processes, such as selection for drought-tolerant genotypes or species, species colonization and extinction, or changes in disturbance regimes and soil structure in terrestrial systems, may lag behind the changing climate, leading to loss of ecosystem services. We assume that the slowest processes will have the greatest impacts on ecosystem functioning in the long term (a grassland cannot turn into a desert without species turnover and changes in disturbance regimes and soils), but that the uncertainty surrounding these impacts (size of circles) is also greater because the dynamics of slow processes are harder to study. Models to project ecological impacts of climate change often make extreme, but implicit, assumptions about which processes will influence the focal ecological response (red and blue horizontal bars). Figure 3. Forage production is most sensitive to temporal anomalies in temperature and spatial variation in mean precipitation. Bars show the change in production in late-century projections, averaged across each ecoregion, in response to: 1) a 1 cm increase in mean annual precipitation, 2) a 1 cm increase in annual precipitation anomalies, 3) a 1°C increase in mean annual temperature, and 4) a 1°C increase in annual temperature anomalies. The temporal anomalies drive changes projected by the minimal acclimation scenario, while spatial variation drives changes projected by the rapid acclimation scenario. Figure 4. In five of six ecoregions, the rate of ecosystem acclimation is the largest source of uncertainty in projections of future changes in forage production. Uncertainty was partitioned as the proportion of the total sum-of-squares of projected changes in production by late century. Site: spatial variation among pixels within each ecoregion. Parameter: variation among model parameter estimates in the posterior distribution. GCM: variation among general circulation models. RCP: variation among greenhouse gas emission scenarios. Acclimation: variation between the minimal and rapid ecosystem acclimation scenarios.
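As a rough illustration of the sum-of-squares partitioning summarized in the Figure 4 caption, the sketch below computes the share of total variation in a toy projection array attributable to each main effect. The array contents and dimensions are invented for illustration and are much smaller than the study's actual site x parameter x GCM x RCP x acclimation array.

import numpy as np

rng = np.random.default_rng(2)
# Toy array of projected changes with dimensions (site, parameter draw, GCM, RCP, acclimation).
change = rng.standard_normal((100, 50, 11, 2, 2))
change[..., 1] += 1.0          # make the acclimation scenario matter, as in the paper
labels = ["site", "parameter", "GCM", "RCP", "acclimation"]

grand_mean = change.mean()
total_ss = np.sum((change - grand_mean) ** 2)

for axis, name in enumerate(labels):
    other = tuple(i for i in range(change.ndim) if i != axis)
    marginal = change.mean(axis=other)                       # marginal means of this main effect
    n_per_level = change.size // change.shape[axis]
    main_ss = n_per_level * np.sum((marginal - grand_mean) ** 2)
    print(f"{name:11s} {main_ss / total_ss:6.1%} of total sum-of-squares")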
2021-08-17T13:22:16.734Z
2021-08-12T00:00:00.000
{ "year": 2022, "sha1": "29daa6707b4762d2b5157cfcc5e58e5729184ee8", "oa_license": "CCBY", "oa_url": "https://www.biorxiv.org/content/biorxiv/early/2022/03/30/2021.08.11.455579.full.pdf", "oa_status": "GREEN", "pdf_src": "BioRxiv", "pdf_hash": "32ede9951c5a81836d3c1b20071b8ee21ebc8113", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Biology" ] }
225461071
pes2o/s2orc
v3-fos-license
The Characteristics of Acacia mangium Stands at Site 23 B, RPH Maribaya, BKPH Parung Panjang, KPH Bogor Which Are Attacked by Pests and Diseases RPH Maribaya, BKPH Parung Panjang, KPH Bogor is an Industrial Forest Plantation dominated by A. mangium stands. The characterization of the A. mangium stands was done by calculating the frequency and intensity of pest and disease attacks on the stands. The method used in this research refers to the Criteria and Scores for Plant Pests/Diseases [1]. The research was conducted on 85 sample trees in 7 plots within plot 23b of RPH Maribaya, selected by Simple Random Sampling. The results of this research indicate that the highest attack frequency on A. mangium is in plot 4, at 80%, and the lowest is in plot 7, at 0%. The highest attack intensity is found in plot 4, at 50%, and the lowest is in plot 7, at 0%. The percentage of attack frequency ranges from 20% to 80% (low category) with an average attack frequency of 41%. The percentage of attack intensity ranged from 20% to 50% (mild category) with an average attack intensity of 23%. The pests found in the A. mangium stands are sac caterpillars (Pteroma plagiophleps) and the larva of the white grub (Lepidiota stigma). Introduction Perum Perhutani (BUMN) is a State-Owned Enterprise that manages forests specifically in Java. It plays an important role in ensuring the conservation of forest areas in order to support the carrying capacity of the social and economic development of the community; one of its units is the Forest Management Unit (KPH) Bogor, West Java. KPH Bogor is a forest management unit included in the Regional Division of West Java and Banten with a total forest area of 49,337.06 hectares. The management of these forest areas in KPH Bogor is carried out by 5 BKPH (Forest Stake Unit Sections) and 16 RPH (Forest Stakeholder Resorts). One of the BKPH units operating under the auspices of KPH Bogor is BKPH Parung Panjang, which is located in the villages of Gintung Cilejet and Jagabaya, Parung Panjang District, Bogor Regency, West Java, and is bordered by Banten Province. BKPH Parung Panjang has 3 RPH (Forest Pemangkuan Resort) areas: the Maribaya RPH, Tenjo RPH, and Jagabaya RPH [2]. BKPH Parung Panjang has a forest area of 5,342.90 hectares with a relatively flat topography and manages plantations dominated by Acacia mangium, a forestry species planted for land rehabilitation and HTI (Industrial Plantation Forest) development. Timber products from A. mangium have a high economic value for pulp, paper, and carpentry wood. A. mangium plantations in the forest area of BKPH Parung Panjang use the monoculture method. Monoculture plantations are prone to disturbance by pests and diseases [3]. Such disturbance degrades stand conditions and ultimately affects the quantity, quality, sale value, and production of the crop. Based on data from Perum Perhutani on A. mangium wood production from 2014 to 2018, A. mangium plantations in BKPH Parung Panjang had a significant decline in wood productivity in 2015, by a staggering 88%. To address this problem, this study was conducted to obtain information about the characteristics of A. mangium stands attacked by pests and diseases, in support of better management of A. mangium stands. The purpose of this study is to determine the frequency and intensity of the attacks on A. mangium stands. This study also identifies the types of pest and disease attacks on A.
mangium stands in the BKPH Parung Panjang forest area, together with their symptoms and forms of control. Materials Pest or disease samples, pest and disease reference books, tally sheets, notebooks, and infraboards. System Random Sampling Field data collection was carried out in the A. mangium forest area of BKPH Parung Panjang, Maribaya RPH, plot 23b, by taking an IS (Intensity of Sampling) sample of 1%, consisting of 7 plots each measuring 20 m x 20 m. Method of Criteria and Scores of Plant Pests/Diseases A. mangium stands in each plot were observed, both those showing symptoms of pests/diseases and healthy ones. Observations were made by determining the value of the pest and disease attack scores based on the symptoms of the attacks and the criteria shown in Figure 1. Frequency of Attacks The frequency of attacks (FS) is determined by comparing the number of plants attacked by pests/diseases with the total number of trees observed, expressed in percent (%) with the following formula: F = (Y/X) x 100%, where F is the frequency of attack, Y is the number of trees attacked, and X is the number of trees observed. Attack rates were assessed based on the percentage of plants attacked [4]. The assessment of stand conditions based on the percentage values of the intensity of attacks can be seen in Table 2. Determine Type of Pest or Disease Determination of the kinds of pests and plant diseases was done by observing macroscopic symptoms. Pests and diseases were identified by comparing the morphological characteristics of the insects or diseases found with an identification book of pests and diseases of forestry plants, along with the existing literature on pests and diseases of Acacia mangium. Determine the Pest/Disease Control Techniques for correctly controlling and preventing the types of pests or diseases found, and methods of maintaining the plants, were determined from various literature sources and books. The Assessment of Criteria and Scores of Acacia mangium Pests/Diseases Based on the results of field data collection using the criteria assessment method and pest/disease scores, observations of pests and diseases in A. mangium were obtained according to the criteria for each level of damage in Figure 2. This pest eats the upper part of the leaves, so that the bite marks dry out and holes are formed. Other symptoms of pest attack found are root damage and dry, blackened, and decaying stems (Figure 4); the root collar area is stripped, caused by pests that eat roots or stems. These pests are generally attracted to unhealthy or diseased trees. Attack Frequency (FS) and Attack Intensity (IS) of Acacia mangium Based on the calculations from the assessment criteria and pest and disease scores for the A. mangium plants, the percentage of attack frequency (FS) is shown in Figure 5 and the percentage of attack intensity (IS) in Figure 6. The highest attack frequency is in plot 4, with a total attack frequency of 80%, and the lowest is in plot 7, at 0% (Figure 5). An attack frequency of > 75% is included in the high category [4]. The greatest intensity of pest and disease attack is in plot 4, at 50%, and the lowest is in plot 7, at 0% (Figure 6). An attack intensity of 50% is included in the moderately damaged category [5]. Plot 4 has the highest attack frequency and intensity because it has high tree cover, the plant spacing is no longer discernible, and moreover there are several types of vegetation other than A. mangium present, which become weeds for A.
mangium. In plot 7, the A. mangium plants are evenly distributed, the plant spacing is still regular, and no weeds grow around the A. mangium plants [6]. Weeds act as alternative hosts for pests and diseases, and they compete strongly with the main crop for growing space, water, nutrients, and sunlight. This causes physiological stress in A. mangium, making it easily attacked by pests or diseases [3]. Based on Figures 5 and 6, the attack frequencies obtained, between 20% and 80%, fall into the low category, with an average attack frequency of 41%, and the attack intensities, between 20% and 50%, fall into the mild category, with an average attack intensity of 23%. The development of pest and pathogen populations is dynamic, in accordance with environmental conditions that affect the activity of pests and diseases on their hosts [7]. The average frequency and intensity of attack in plot 7 are both 0%. This is because plot 7 is located beside the RPH Maribaya office, as shown in Figure 7, so plantation managers can easily handle and maintain the plants with a variety of easily accessible facilities. Types of Pests Found on A. mangium 3.3.1. Sac Caterpillar (Pteroma plagiophleps) Pteroma plagiophleps (Lepidoptera: Psychidae) is a leaf pest that commonly damages A. mangium plants [3]. In the A. mangium forest area of BKPH Parung Panjang, RPH Maribaya, Bogor Regency, West Java, the leaf-damaging pest found on the A. mangium plants was the sac caterpillar Pteroma plagiophleps, shown in Figure 8 [6]. The type of sac caterpillar that generally causes large-scale attacks on A. mangium is Pteroma plagiophleps [3]. This pest is distributed in Sri Lanka, Bangladesh, India, and Southeast Asia, especially in several regions of Indonesia [8]. It has several host plants, including Paraserianthes falcataria, Acacia mangium, A. auriculiformis, Rhizophora spp., Eucalyptus spp., Pinus merkusii, Hevea brasiliensis, Shorea spp., Palmae spp., and others [9]. Pteroma plagiophleps has several characteristic features: it always lives in cone-shaped bags about 16 mm long and brown in color, made of pieces of host leaf substrate glued together with silk produced by the larvae, and these bags hang on branches or leaves [10]. The females do not have wings, while the males have small wings [3]. The chronology of a Pteroma plagiophleps attack on plant leaves is as follows: at first the pests eat young leaves, especially the undersides, resulting in holed and dried leaves. The pests damage the lower epidermal layer and the leaf mesophyll tissue while leaving the upper epidermal tissue, and the remaining upper epidermis then dries out before the leaves grow back [10]. This pest has a life cycle that starts with an egg phase lasting 10 days, followed by a larval phase lasting 49-62 days and a pupal phase lasting 14 days. Adult White Grub (Lepidiota stigma) This pest belongs to the superfamily Scarabaeoidea (Lamellicornia) and the order Coleoptera. It has various local names: gayas in Central Java and East Java, kuuk in West Java, and guridap in the Tapanuli area. Some white grub species that commonly attack forestry plants are Leucopholis rorida F., Holotrichia helleri Brsk., Lepidiota stigma F., and H. constricta Bur. Some plant species on which attacks have been reported are teak, Paraserianthes falcataria, Altingia excelsa, Anthocephalus cadamba, and pine [12]. In the A.
mangium forest area of BKPH Parung Panjang, RPH Maribaya, Bogor Regency, West Java, the white grub (uret) found on the A. mangium plants was identified as Lepidiota stigma. L. stigma belongs to the superfamily Scarabaeoidea and is a pest with a large, plump body. The L. stigma specimens found in the A. mangium forest area of BKPH Parung Panjang are about 5 cm long with a relatively plump body, as can be seen in Figure 9. A distinctive feature of L. stigma is that the adults, when they emerge, move along the ground surface. This pest has a grayish-brown body covered by yellow or yellowish-white scales (if the scales are detached, the pest's body is shiny brown) and a plump body. The body length of L. stigma differs depending on the sex, but in adult urets the length can reach 7.5 cm. The length of the female uret is around 4.3-5.3 cm and the width is 2.2-2.7 cm, while the length of the male uret is around 4.2-5.3 cm and the width is 2.0-2.6 cm. L. stigma has white patches, about 1.5 mm across and consisting of tiny micro-scales, on the tips of its elytra. This pest is very deadly to plants and very detrimental to the quality of industrial forest plantations. The symptoms of an attack are that at first the leaves wilt, turn yellow, and dry, followed by drying and rotting of the stems; the plant dies quickly, and this is typically seen within a period of 2 months after the beetles' flight. If the roots of the plant are examined, uret bite marks can be seen on the still-young lateral roots, along with some of the multiplying urets [13]. Pteroma plagiophleps Control Control of P. plagiophleps can be carried out by adjusting plant spacing. Plant spacings generally used in monoculture forests are 2.5 x 2.5 m, 3 x 1 m, 2 x 3 m, and 3 x 3 m [14]. In addition, there are several other forms of control, among others installing light traps and using botanical insecticides in the form of mahogany rind extract (200 g/L), mahogany seed extract (150 g/L), and squeezed yam tuber extract (125 g/L). Among these, one of the most effective insecticides is the yam tuber extract. The yam tuber extract had a significant effect on the mortality of sac caterpillars [15]. The higher the concentration of yam tuber extract, the higher the larval mortality rate; at the highest concentration, yam tuber extract produced a larval mortality of 97.78%. Apart from using plant-based insecticides, eradication of bagworms can be carried out with chemical systemic insecticides, using Trichlorfos (95% WP, 15 g) insecticide sprayed onto the leaves [16]. In addition to the use of pesticides, bag caterpillars can be controlled using biopesticides based on the fungus Beauveria bassiana, because this fungus is pathogenic to bag caterpillars [17]. Lepidiota stigma Control Control of L. stigma is carried out by collecting the white grub larvae during periods of active larval attack, at 5-9 months of plant age. Chemical eradication of L. stigma is done by using an insecticide mixed into the soil in the form of a solution, granules, or dust. Mixing of insecticides with the soil is done before or during planting so that eradication of white grubs can proceed effectively before damaged plants appear. The type of insecticide that is effective for eradicating white grubs is 3G Carbofuran [12]. Control of uret pests with 3G carbofuran insecticide was reported to be effective at a dose of 10 grams per planting hole, reducing the percentage of white grub attack from 70% to 10% [18].
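For reference, the attack frequency values discussed above follow directly from the formula given in the Methods (F = Y/X x 100%); a minimal sketch with invented per-plot counts is shown below. The attack intensity scores depend on the damage criteria in Table 2 and are not reproduced here.

# Minimal sketch of the attack frequency calculation; the plot counts are hypothetical
# illustrations, not the study's field records.

def attack_frequency(n_attacked, n_observed):
    """Percentage of observed trees showing pest/disease symptoms."""
    return 100.0 * n_attacked / n_observed

plots = {4: (8, 10), 7: (0, 12)}   # plot id -> (trees attacked, trees observed), hypothetical
for plot_id, (attacked, observed) in plots.items():
    print(f"plot {plot_id}: frequency = {attack_frequency(attacked, observed):.0f}%")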
Plant Maintenance and Treatments In addition to eradicating pests, plants need to be maintained and treated intensively. This needs to be done to reduce and prevent pest and disease attacks on A. mangium plants. The first step in plant maintenance is weeding [19]. Weeding is the removal or pruning of disturbing vegetation to avoid competition for nutrients, water, and light. Weeding activities are carried out in the first year until the canopy closes or reaches a certain size, so that the crop can suppress the growth of weeds and the growth of weeds, which are nesting sites for pests, is controlled. The weeding process includes control of weeds on the soil surface, which are hosts for pests, including grasses, herbs, and shrubs that compete directly with the new crop in the first year of planting, and the cleanup and disposal of disturbing plants by removing weeds from the surface. After weeding, another maintenance activity that is a key aspect of successfully establishing good-quality plants is fertilization. In general, fertilization is carried out when there is a lack of nutrients, caused by plants growing on critical land, poor nutrient cycling, nutrient leaching by rainwater for plants growing in areas of high rainfall, plants growing in areas of low rainfall, or a lack of mycorrhizae for soil fertility. Fertilization in plantations is generally done at some time after planting, from the first three months until canopy closure, at the beginning of thinning, and 3-10 years before the logging rotation. After that, there is a pruning process, done by cutting branches that are still alive in order to improve the quality of the wood products at the end of the cycle. In general, wood pruning is carried out in industrial plantations whose wood will be produced for veneer, construction wood, and pole wood, to free it from knots and defects so that the wood produced is of good quality. After the pruning process, one other activity needed to support plant maintenance is thinning. Thinning is the activity of reducing the number of trees in a stand; it is carried out several times during the cycle and starts several years after the plant canopy has closed. Conclusion The highest frequency of attack is in plot 4, with a total attack frequency of 80% (high category), and the lowest is in plot 7, at 0% (healthy category). The greatest intensity of pest and disease attack is in plot 4, at 50% (moderate damage category), and the lowest is in plot 7, at 0% (healthy category). The average attack frequency across all plots was 41% (low category). The average attack intensity was 23% (mild damage category). Some of the symptoms of pests found on A. mangium are holed leaves caused by Pteroma plagiophleps (small sac caterpillars), and damage to the roots and stems that dry out, blacken, and rot, caused by the white grub Lepidiota stigma. No symptoms of disease were found on A. mangium.
An outbreak of subhyaloid hemorrhage after accidental laser exposure during an Indian festival Purpose: To report the clinical manifestations and outcomes of patients who experienced retinal damage due to accidental laser exposure during a festival in Kolhapur, Maharashtra. Methods: Consecutive patients who presented with sudden loss of vision following exposure to laser lights during a religious Indian festival (Ganapati festival) on the same day (9 September 2022) at the same location (idol visarjan procession) were identified from the medical records of various eye hospitals in Kolhapur district of Maharashtra. Eyes with persistent subhyaloid hemorrhage (SHH) were taken up for neodymium-doped yttrium aluminum garnet (Nd:YAG) posterior hyaloidotomy. Patients were examined at weekly intervals up to 1 month. Results: Thirty-four eyes of 34 men were identified, with ages ranging from 18 to 27 years. The mean duration of exposure to the laser projections was 4.9 ± 1.7 h, and the mean distance from the laser source was 7.3 ± 2.7 feet. All presented with SHH involving the macula. The SHH measured 3 disc diameters or larger in 30 eyes (88%); 29 of these (97%) required hyaloidotomy, while one patient underwent pars plana vitrectomy. The mean visual acuity improved from 1.45 ± 0.5 log of minimum angle of resolution (logMAR; 20/560 Snellen) to 0.11 ± 0.19 logMAR (20/25 Snellen) (P < 0.001). One eye showed a full-thickness macular hole with visual acuity of 20/200. Conclusion: We report a large number of patients experiencing laser-induced SHH, resembling an outbreak, due to exposure to a malfunctioning high-powered recreational laser during a religious festival.

Lasers have gained in popularity over the years and have found increasing applications in various spheres of life, including medicine, military applications, the entertainment industry, and manufacturing. Lately, many large gatherings such as sporting events and music festivals have resorted to laser shows to engage the audience [3-10]. The light energy from lasers focused into the eye is more hazardous than looking directly at the sun. The Food and Drug Administration (FDA) is concerned about illegally sold laser projectors that are powered above 5 mW, which is the standard for certain types of lasers and laser projectors used during laser light shows [11]. These are classified as Class IIIB and Class IV lasers and have the potential to cause immediate eye injuries and skin hazards from direct beam exposure. Also, public knowledge of the potential for such injuries is extremely limited, with many wanting to look directly at the laser source rather than at the images being displayed [12]. The majority of laser-related eye injuries have been due to handheld laser pointers, especially used by children as well as in the military, and most causative lasers have been of the Q-switched variety in the blue light spectrum [10,13,14]. Manifestations including preretinal, intraretinal, and subretinal hemorrhage, vitreous hemorrhage, macular hole, peripheral retinal holes, and foveal burns have been reported in various case reports and series [1,2,13,15,18-20]. Here, we report the clinical presentation and outcomes of a series of 34 eyes that experienced laser-related injuries while attending the same religious festival at the same time. To the best of our knowledge, this is the largest series of laser-related eye injuries reported from a single public gathering.
Methods This was a multicentric retrospective study conducted in Kolhapur city of Maharashtra, a western Indian state. Informed consent was taken from all patients, and the study followed the tenets of the Declaration of Helsinki. Case records of all patients who presented with sudden painless diminution of vision following exposure to laser lights during a religious Indian festival (Ganapati festival) on the same day (9 September 2022) at the same location (idol visarjan procession) were identified from the medical records of various eye hospitals in Kolhapur district of Maharashtra, and those with a minimum follow-up of 30 days were included in this study.

Patients were asked about the duration for which they had observed the laser projections before visual loss and the approximate distance at which they had been standing from the laser source. They underwent a full ophthalmologic examination at their initial presentation and at follow-up visits, which included best-corrected visual acuity (BCVA), intraocular pressure, slit-lamp examination, and detailed fundus examination. Thereafter, patients were seen frequently in the first week and then weekly up to the 1-month time point. Neodymium-doped yttrium aluminum garnet (Nd:YAG) posterior hyaloidotomy was done in patients who presented with subhyaloid hemorrhage measuring more than 3 disc diameters (DD) and involving the center of the fovea [Fig. 1a]. These patients had persisting subhyaloid hemorrhage for more than 5 days without any signs of resolution. Pretreatment and posttreatment fundus photographs (Intucam 50) and optical coherence tomography (OCT; RTVUE 100 from OPTOVUE) were performed wherever available. The size of the subhyaloid hemorrhage was documented in terms of DD centered on the foveola. A detailed discussion of the available treatment options was held, and the procedure was performed after informed consent was taken. The rest of the patients were advised conservative treatment and strict close follow-up.

Procedure of hyaloidotomy Mydriasis was achieved with tropicamide 0.5% eye drops. Topical anesthesia was achieved with lignocaine 4% eye drops. A Q-switched Appaswamy Nd:YAG laser with a wavelength of 1064 nm, emitting single bursts, was delivered through a slit-lamp delivery system using a Goldmann three-mirror contact lens until an opening was created in the posterior hyaloid membrane near the inferior edge of the hemorrhage [Fig. 1b, c]. The retinal blood vessels and fovea were avoided. Laser exposures were started at 5 mJ and then gradually increased until drainage of the preretinal blood into the vitreous cavity under gravity was evident. Patients were advised weekly follow-up to 1 month post injury.
Statistical analysis All continuous variables were presented as means with standard deviations or medians with interquartile ranges (IQR), while all categorical variables were presented as proportions (n, %). BCVA was converted to the logarithm of the minimum angle of resolution (logMAR) for statistical analysis. Group differences between continuous variables were assessed using the Student's t-test or the Wilcoxon rank-sum test for nonparametric distributions, while differences in categorical variables between groups were assessed using the Chi-square or Fisher's exact test. Change in parameters before and after treatment was analyzed using the paired t-test.

Results We found 34 eyes of 34 patients with laser-inflicted subhyaloid hemorrhage that occurred on 9 September 2022 during the same religious festival and at the same location. All patients were men with unilateral involvement and had a mean age of 23.1 ± 2.4 years (range 18-27 years). The mean duration of exposure to the laser projections was 4.91 ± 1.69 h, and the mean distance from the laser source was 7.26 ± 2.68 feet. On extensive tracing, we were able to locate two laser sources that were used in the procession. One of these (Voltegic LED laser light projector, Aliexpress Pvt Ltd) was found to be a diode-based laser emitting in the red (250 mW, 650 nm) and green (100 mW, 532 nm) spectra, while details of the other (called the XTRDT laser light projector) were not available.

Discussion In this series, we found 34 eyes with laser-induced subhyaloid hemorrhage due to exposure to a recreational laser used for a laser show during a public festival. Patients viewed the laser display for about 5 h and were standing relatively close, that is, about 7 feet from the causative laser source. The subhyaloid hemorrhage involved the macula in all eyes and was relatively large in the majority, necessitating drainage with an Nd:YAG laser. Visual acuity was understandably worse, that is, finger counting at 2-3 m, in those with large hemorrhages compared to those with smaller hemorrhages. BCVA continued to improve every week over the first 4 weeks, owing to the resolving subhyaloid hemorrhage and resorption of blood from the vitreous cavity with time. The size of the hemorrhage at baseline did not influence the final outcome.

Manifestations and outcomes due to handheld lasers and pointers have been extensively reported and were summarized recently in a review article by Bhavsar et al. [3]. However, retinal damage due to exposure to recreational lasers during public gatherings has not been reported in large numbers to date, presumably due to strict guidelines and deployment of safety measures during their use. Shneck et al. [18] reported a series of three cases, all with subhyaloid hemorrhage that occurred after attending a laser show, similar to our patients. All hemorrhages resolved spontaneously within 6 months with good outcomes. Boosten et al. [17] reported retinal hemorrhages and retinochoroidal coagulation spots in two patients who attended the same dance festival with an audience-scanning laser show. Similarly, Fossataro et al. [20] reported two young cases of macular hemorrhage, both of which occurred after laser exposure and cannabinoid intake during the same disco party. Aras et al. [16] reported one case from recreational laser exposure, and Perez-Montaño et al. [15] reported one such case that occurred after laser exposure at a music festival.
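The acuity arithmetic used in the abstract and results (Snellen fractions converted to logMAR) and the paired before/after comparison described under Statistical analysis can be sketched briefly. This is a minimal illustration, not the authors' analysis script: the conversion is simply logMAR = log10 of the reciprocal Snellen fraction, the paired t-test runs on invented values, and scipy is assumed to be available.

```python
import math
from scipy import stats  # assumed available; any paired t-test routine works

def snellen_to_logmar(numerator, denominator):
    """logMAR = log10 of the reciprocal Snellen fraction."""
    return math.log10(denominator / numerator)

print(round(snellen_to_logmar(20, 560), 2))  # 1.45, the presenting acuity in the abstract
print(round(snellen_to_logmar(20, 25), 2))   # 0.10, close to the final 0.11 logMAR

# Invented paired before/after logMAR values for a handful of eyes,
# compared with a paired t-test as described in the Statistical analysis section.
before = [1.8, 1.5, 1.3, 1.0, 1.6]
after = [0.2, 0.1, 0.0, 0.1, 0.3]
t_stat, p_value = stats.ttest_rel(before, after)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

The same conversion explains why the reported mean improvement from 1.45 to 0.11 logMAR corresponds to roughly 20/560 improving to 20/25 Snellen.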
To explain the mechanisms causing the various retinal manifestations due to lasers, Barkana and Belkin [2] described a range of laser- and eye-related factors. A laser beam can cause retinal lesions on account of its high directionality, energy, irradiance, coherence, and power. The severity of retinal damage is determined by the wavelength, duration of exposure, spot size, power, and location [2]. Like ambient light, the optical structures of the eye focus the laser beam, especially at wavelengths within the visual spectrum, onto the retina, amplifying the irradiance by roughly five to six orders of magnitude. This makes the retina the tissue most vulnerable to accidental laser injury, as seen in our series. Unfortunately, we could not find much technical information regarding the lasers that induced these injuries, but all patients gave a history of exposure to laser light during the event. We suspect a malfunctioning high-powered laser (>5 mW output) to be the cause, producing localized suprathreshold heat leading to rupture of superficial retinal capillaries and accumulation of subhyaloid blood. Indeed, one of the lasers appeared to be in the red-green spectrum, with repeated exposures potentially leading to capillary rupture. The close proximity of our patients to the laser source and their viewing of the laser display for nearly 5 h may also have contributed to the occurrence of the subhyaloid hemorrhage.

It is possible that more participants at the laser show experienced retinal hemorrhages but did not report to ophthalmic hospitals in the city because the macula was spared. Given these events, and the involvement of such a large number of young men in our series, resembling an outbreak, it is imperative that the government take cognizance of such mishaps and place strict safety requirements on the use of lasers during public events. Also, all laser operators should be registered with the authorities before a show and be held accountable in case mishaps occur.

Conclusion Recreational lasers used during a public event can cause sudden retinal damage in a large number of participants. Most of the retinal manifestations may require some form of intervention to reverse, and those causing macular scarring and hole formation could lead to permanent visual loss. To the best of our knowledge, this is the largest series of recreational laser-induced retinal hemorrhage reported in the literature. There is a pressing need to put in place stricter compliance and safety measures so that such large-scale mishaps do not recur in the future.
Pneumatization of the zygomatic process of temporal bone on computed tomograms Purpose: Zygomatic air cells (ZAC) are a variant of temporal bone pneumatization that needs no treatment. However, ZAC can have an impact on surgical procedures in the temporo-mandibular joint region. Recent reports suggest that computed tomography will disclose more ZAC than can be diagnosed on panoramic radiography. The aim of this study was to analyze ZAC prevalence on CT in a population that was not pre-selected by admission to a dental clinic. Furthermore, an extensive literature review was performed to assess the prevalence of ZAC and to address the impact of imaging technique on the definition of the item. Material and methods: Digitalized cranial CTs of 2007 patients were retrospectively analyzed. The Frankfort horizontal was used to define a ZAC on sagittal CTs. Results: In this study group, 806 were female (40.16%) and 1,201 were male (59.84%). Mean age was 49.96 years in the whole group (female: 55.83 years, male: 46.01 years). A ZAC was diagnosed in 152 patients (female: 66, male: 86). Unilateral ZAC surpasses bilateral findings (115 vs. 37 patients). ZAC were diagnosed in children 5 years of age and older. Sectional imaging techniques show a better visualization of the region of interest. However, presently an increase of ZAC prevalence attributable to imaging technique cannot conclusively be derived from the current literature. The normal finding of a ZAC on radiograms is a sharply defined homogenous transparent lesion restricted to the zygomatic process of the temporal bone that has no volume effect on the shape of the process. Conclusion: ZAC is an anatomical variant of the temporal bone that has come into focus of maxillofacial radiology due to its noticeable aspect on panoramic radiograms. The harmless variant can be expected in about one in thirteen individuals undergoing facial radiology. Panoramic radiograms appear to be sufficient to present ZAC of relevant size. However, in preparation for surgical procedures affecting the articular eminence the application of sectional images is recommended.
Introduction The development of the skull is accompanied by multiple pneumatizations at different locations [39]. Some of these pneumatizations are air-filled cavities completely enclosed by bone, also known as air cells (AC) [12]. The best known of these are the mastoid AC in the temporal bone. These AC can cause severe problems due to pathologies such as tumor or infection of the mastoid and their subsequent transmission into adjacent regions, including the zygoma [12], [18], [68]. The origin and spread of temporal bone pathologies make clear the preserved connection between the disseminated AC within the temporal bone and the mastoid antrum (Figure 1, after Tremble (1934) [61]). Some locations of AC inside the temporal bone can be expected as a characteristic feature, but others occur only occasionally [54], [61]. Indeed, the individual distribution of AC varies considerably and correlates poorly with age and gender [65]. Tremble classified 10 regions of AC in the temporal bone, with particular reference to illustrating pathways of mastoid infection [61]. One of the temporal bone's AC locations is the root of the zygomatic process forming the glenoid fossa [18]. AC of the zygomatic process can occasionally even extend into the arch [39], [61]. Tyndall and Matteson re-addressed the phenomenon of the pneumatized articular eminence from a radiological point of view. They specified it as a radiolucent, asymptomatic 'defect' in the zygomatic process of the temporal bone on panoramic radiographs, showing a similar appearance to the mastoid AC. The 'defect' optionally approximates, according to their statement, up to the temporo-zygomatic suture, but never crosses this junction. In addition, they also reaffirmed the zygoma as of normal shape, without any kind of altered outline or destruction [62]. Pneumatization of the temporal bone's zygomatic process is a variant of human anatomy that needs no treatment [49] but is a relevant finding in the planning of temporomandibular joint (TMJ) surgery [26]. During TMJ operations a pneumatized fossa can hamper surgical procedures [7], [50], [67]. Indeed, an alternative surgical treatment protocol has been proposed for recurrent mandibular dislocation in patients with zygomatic AC (ZAC) [7]. Furthermore, extensive ZAC can be associated with TMJ dysfunction [25], and ZAC were surprisingly frequently diagnosed in patients with TMJ problems [22]. In addition, some surgical procedures in malar osteoplasty have to consider the internal structure of the zygomatic arch [6], [59]. Nowadays, zygomatic abscess in the course of temporal bone infection is rare [61], [66], but knowledge about this route of infection is mandatory in the differential diagnosis of inflammatory disease in the lateral skull base [36]. The aim of this study was to determine the frequency and topography of zygomatic air cells with respect to age and gender on computed tomograms and to compare our findings with results from other studies, with reference to the impact of the imaging method on the reported frequency of this osseous variant.

Assortment of images The temporal bones were reviewed for zygomatic air cells in 2,007 patients who had undergone regular skull computed tomography (CT) at the University Hospital Hamburg-Eppendorf. The studied CT images date from 2009 to 2013.
CT images that were incomplete or blurred were excluded from the patient population. Patients with a history of maxillofacial trauma or temporal bone pathology were excluded from evaluation, as were patients with a known history of skull development alterations.

CT scanners The CT scanners used for the study were the Philips MX 8000 IDT 16, the Philips Brilliance 64, and the Philips Brilliance iCT 256 (Philips Healthcare, Best, The Netherlands). The MX 8000 IDT 16 is a 16-slice spiral CT scanner. The Brilliance 64 is also a spiral CT scanner, acquiring 64 slices. The Brilliance iCT 256 acquires up to 256 slices along the longitudinal axis. The tube voltage was 120 kV and the slice thickness 3 mm. After acquisition, the images were stored and edited in the Picture Archiving and Communication System (PACS IW, General Electric Healthcare, Milwaukee, USA).

Analysis of images Digitalized skull CT images were screened for ZAC on a diagnostic monitor (Department of Radiology, UKE). Any pneumatization in the following regions was recorded: the articular eminence, the articular tubercle, or the zygoma cranial or anterior to the temporal fossa. The images were viewed in the sagittal plane as well as in the axial plane. ZAC was defined using the Frankfort horizontal. On sagittal images, the dorsal landmark of the Frankfort horizontal was the most cranial portion of the glenoid fossa and the anterior landmark the most cranial point of the infraorbital rim on the same CT sectional image. Patients were recorded as having ZAC when well-defined, air-equivalent disruptions of the bone contour within the zygomatic arch were visible inferior to the Frankfort horizontal plane. This definition was chosen in order to avoid inaccurate topographical assignment within the temporal bone.

Statistical analysis For the statistical analysis of our findings, gender, age, and location of the ZAC were recorded. For data analysis, metric variables were described with mean value and standard deviation, and categorical variables with absolute and relative frequencies. For the age dependency, a logistic regression was computed with the presence of ZAC as the dependent variable and age as the independent variable. For the analysis of gender dependency, the chi-square test was used (p-values are reported). A p-value under 5% indicates statistical significance. All tests were performed two-sided. All analyses were computed with IBM SPSS™ (Version 21.0, IBM Corp., Armonk, NY) and PSPP (psppire 0.8.4, GNU project).

Review The MeSH terms 'pneumatization', 'zygomatic air cell', 'zygomatic air cell defect', and 'pneumatized articular eminence', combined with 'computed tomography', 'cone beam computed tomography (CBCT)', and 'panoramic radiograph', and combinations thereof, were searched in the PubMed database. In addition, these terms were used for a Google™ search of references not provided by PubMed.

Ethics The investigation was approved by the University Hospital Authority Board as a prerequisite for achieving the doctoral degree in medicine (LV). All patients had given informed consent for scientific investigation of medical findings. For this study, no ethics vote was needed.

Results ZAC is a radiotranslucent finding restricted to the temporal part of the zygomatic arch. The cavity is a distinct osseous lesion with well-demarcated borders. In no instance was the cortical layer surrounding the ZAC interrupted. CT signals of the cavity's internal structure are equivalent to air.
Focal inhomogeneities projected into the lumen on a single section were in every case identifiable on perpendicular sections as the transition of the bone into the cavity. Therefore, radiopacities inside the cavity are judged not to be a physiological finding of ZAC on CT.

Total group Of 2,007 patients, 806 were female (40.16%) and 1,201 (59.84%) male. The mean age of the patients was 49.96 years (SD=22.03 years, range 0 to 102 years). The mean age of the women was 55.83 years (SD=22.82 years, range 3 to 102 years). The mean age of the men was 46.01 years (SD=20.66 years, range 0 to 100 years).

Age distribution of patients with ZAC One hundred and fifty-two patients (7.57%) of the study group had pneumatizations located within the articular eminence, the articular tubercle, or the zygomatic process of the temporal bone. In all these cases, once a pneumatization occurred in the zygomatic process it also extended over the glenoid fossa; in other words, the formation of ZAC is dependent on a pneumatized glenoid fossa. Sixty-six patients with ZAC (8.19%) were female and 86 (7.16%) male. The mean age of patients with ZAC was 45.12 years (SD=20.38 years, range 5 to 90 years). The mean age of women with ZAC was 47.53 years (SD=20.51 years, range 5 to 90 years) and the mean age of men was 43.27 years (SD=20.20 years, range 6 to 90 years). These data are summarized in Table 1 and Table 2 and illustrated in the accompanying figure. The chi-square test showed no significant correlation between the parameters 'gender' and 'age' and the prevalence of pneumatization (p > 0.05). However, the p-value verges on the significant range for the parameter 'age', indicating a trend: the probability of occurrence of pneumatization in adulthood decreases by 1% with each year of increasing age.

Literature review Publications were selected that provide data on ZAC prevalence based on larger series of radiographs. In order to illustrate the impact of imaging technique on the prevalence rate, studies that analyzed panoramic radiographs are reproduced separately from studies that evaluated sectional images (Table 3 and Table 4). The prevalence of ZAC on panoramic radiographs in the published studies is usually less than 5%, excepting 2 recent reports from Iran and India that presented prevalences higher than 5%. Furthermore, in one study of patients with symptomatic TMJ the ZAC prevalence was exceedingly high (Table 3). In several publications the prevalence of ZAC is definitely higher on cross-sectional images than on panoramic radiographs (Table 4). However, 2 recent studies from Turkey and Brazil reveal ZAC prevalences on CBCT in the range of panoramic-radiography-derived results (Table 4). The mean ZAC prevalence by imaging technique is only slightly higher in computed tomography studies (9.7% vs. 7.8%). Therefore, the dependence of ZAC detection on the resolution of the applied imaging technique, which seems obvious at first glance, cannot be confirmed with respect to all currently published radiological studies. Indeed, the number of authors investigating ZAC is too limited to allow general recommendations for preferring one imaging method. Credit should be given to Andersen [1], who was very likely the first investigator to publish a large study on ZAC prevalence on rotational panoramic tomograms. This author reported children aged 10 years or older showing ZAC by this method [1].
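The age trend reported above (presence of ZAC regressed on age, with roughly a 1% decrease in the odds of pneumatization per additional year) can be illustrated with a short sketch. This is not the authors' SPSS/PSPP analysis; it simply shows, on invented data, how a logistic regression with ZAC presence as the dependent variable and age as the independent variable yields an odds ratio per year of age. scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression  # assumed available

rng = np.random.default_rng(0)
age = rng.uniform(5, 90, size=2000)            # invented patient ages
# Simulate a weak negative age effect on ZAC presence (~1% lower odds per year).
logit = -2.3 - 0.01 * (age - age.mean())
zac = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))   # 1 = ZAC present, 0 = absent

model = LogisticRegression().fit(age.reshape(-1, 1), zac)
beta = float(model.coef_[0][0])
print(f"coefficient per year of age: {beta:.4f}")
print(f"odds ratio per year of age: {np.exp(beta):.3f}")  # about 0.99
```

An odds ratio of about 0.99 per year corresponds to the reported roughly 1% decrease in the probability of pneumatization for each additional year of age.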
Discussion This study shows that about one in thirteen patients shows a pneumatization of the zygomatic arch on CT, making it quite a frequent radiological finding. Furthermore, this variant of temporal bone pneumatization did not occur independently but in all our cases was associated with a pneumatized glenoid fossa, as already noted by the authors of the first CT-based study on ZAC prevalence [20]. However, these authors did not apply the Frankfort horizontal as a defining landmark to distinguish ZAC of the zygomatic process from an extended pneumatized glenoid fossa. Although the AC were predominantly diagnosed in adults, this study reveals foci of a pneumatized zygomatic arch even in children 5 years of age. This finding is in line with basic morphological studies on the very early onset of pneumatization in the human temporal bone [68]. Up to now, the definition of which regions should be recognized as preferential origins of the temporal bone's pneumatization relies on a classification popularized by Tremble [61] (Figure 1). Synonymous descriptions of this radiological finding are (zygomatic) AC [12], [57], cyst-like pneumatization of the articular tubercle [1], ZACD (zygomatic air cell defect) [56], [63], pneumatized articular eminence/tubercle, or zygomatic pneumatization [32], [37], [47]. In 1985 Tyndall and Matteson re-examined the pattern of AC topography in the zygomatic process from a radiological point of view and emphasized the restriction of the pneumatization to the temporal part of the process. They also confirmed the integrity of the process' shape as a defining feature of the variant [62]. The term 'defect' was probably introduced to describe the striking radiotranslucency of the lesion on panoramic radiographs. This term evokes the association of a pathological finding and thus appears to be a misnomer for what is just a variant of bone anatomy. Prior to the first study of ZAC prevalence, Roser et al. [53] and Kulikowski et al. [37] had already reported on this variant, as it can be seen on panoramic radiographs, and its surgical implications in TMJ surgery. Furthermore, the first prevalence study on this subject based on panoramic radiographs was published prior to the reference paper of Tyndall and Matteson [1]. This publication showed a ZAC prevalence almost identical to the result published by Tyndall and Matteson one year later. However, what is now called ZAC has long been known in the otological literature as an osseous variant of variable size that requires diagnostic awareness, in particular regarding the spread of temporal bone infections to the zygomatic arch [18]. Lang stressed the anatomical finding [39] that AC of the temporal bone's pneumatic spaces do not respect osseous boundaries, just like the other paranasal sinuses [15]. However, within the limits of the presented study, expansion of ZAC was confined to the temporal part of the zygomatic arch, confirming the original description of Tremble [61]. Levenson et al. [40], though, reported a case of extracranial pneumatocele with a herniated AC of the mastoid. In this case, the pneumatization of the mastoid reached out to the root of the zygomatic arch, and the intact anterior part of the zygomatic arch was bowed outward by the herniated mastoid AC. Since the first prevalence study of ZAC on panoramic radiographs, numerous radiological descriptions of the variant have been published using this radiographic projection, initially as case reports [11], [41], [51], [63], [71] and later as large series (Table 3).
These studies describe ZAC as an incidental finding that was assumed to occur only after puberty [28]. Indeed, ZAC is an asymptomatic incidental finding [62]. However, one report detailed, in a female with a history of rheumatoid arthritis and chronic facial pain, the synchronous finding of ZAC ipsilateral to the trigger point of the pain attacks. This patient was successfully treated with anti-inflammatory medication. The authors explicitly addressed the coincidence of the findings and denied a causal relationship between the ZAC and the development of pain [58]. ZAC on panoramic radiographs is usually diagnosed in less than 5% of the respective study group. For example, Yavuz et al. describe the prevalence of ZAC on panoramic radiographs in a very large group (N=8,107) of patients investigated in a dental clinic, where the prevalence was relatively low at 1.03%. Thus, variation of ZAC prevalence within a small range appears to be normal [70]. Prior to the study of Yavuz et al. [70], Hofmann et al. [27] identified the first ZAC in a seven-year-old boy. This result was confirmed by others [16], [47], and these findings disproved the thesis that ZAC develop only after puberty [28]. Indeed, ZAC prevalence in children from the age of 9 years does not differ from that in adults [16], and the present CT study revealed ZAC in children younger than previously reported. Recently, the ability of panoramic radiography to accurately depict the glenoid fossa and zygomatic process pneumatization has been disputed [9]. According to these authors, CBCT scans would be a more appropriate choice due to their three-dimensional imaging and better spatial resolution of the region of interest. Overall, they demonstrate in their study that the prevalence of AC on panoramic radiographs (1-3.5%) was significantly lower than that found by CBCT (~8%) [9]. In the present CT-based study, a trend shows that with each yearly increase in age the probability of ZAC decreases by approximately 1%. This raises the question of whether an age-dependent bone remodeling process leads to pneumatization being a rare finding in aged individuals. However, ZAC occur even in very old people (Figure 8). So far, throughout the reviewed literature there has been no study on the prevalence of ZAC based on CT scans in a population not drawn from a dental clinic. Such a selection could have an impact on the observed frequency of ZAC, because of the probably higher proportion of patients undergoing radiological investigation of the facial skeleton and skull base for diseases related to this region. Therefore, one aim of our study was to determine the prevalence of ZAC in a less biased population. The presented data are in line with previous CT-based reports on ZAC prevalence [4], [20], [38], [43]. The numerically extensive investigation supports the presented results. However, due to the selection of a high number of patients with adequate CT images, it could not be avoided that pictures from the X-ray archive generated by different CT scanners were used. In the study of Ladeira et al. [38] the images were recorded with cone beam CT or with a multislice CT, and using images from different sources made no difference for the identification of ZAC. Indeed, the prevalence of ZAC on CT is almost identical to that on CBCT [43]. With reference to these reports, it can be assumed for the present study that the use of different multislice CT scanners does not substantially affect the imaging of ZAC. Ladeira et al.'s
study and others [20], [38] confirm that the prevalence of pneumatization on images derived from three-dimensional techniques is higher than on panoramic radiographs. Nevertheless, the presented review of ZAC radiology does not support generalizing these authors' statements as far as the radiological identification of ZAC is concerned (Table 3 and Table 4). An important parameter for the evaluation of very fine air spaces could, however, be the slice thickness at which the images are evaluated. In this work, a relatively coarse slice thickness of 3 mm still yielded a prevalence of 7.57%, which is not very different from other reports. So far, Bronoosh et al. [4] used the thinnest slices (slice thickness = 0.625 mm) and published a slightly higher ZAC prevalence (9.55%). Groell and Fleischmann [20] used a slice thickness of 1-2 mm and reported a prevalence of 5%. The work of Ladeira et al. [38] has the highest prevalence at 21.3%; however, no information on the slice thickness could be found in that report. In summary, the slice thickness of cross-sectional images apparently does not affect the imaging of ZAC. One advantage of using CT is that more detailed analyses can be done [52]. For example, the volume of temporal bone pneumatization can be determined using 3D reconstructions [24], [30]. Ladeira et al. [38] pointed to a probable laterality of ZAC: in their study, unilateral ZAC occurred on the left side in 60.5% of cases. Left-sided unilateral ZAC were also more frequent in the present study, but the data are presently not conclusive enough to definitely address the potential laterality of ZAC. One study examined the correlation between ZAC and the degree of pneumatization of the remaining regions of the temporal bone [4]. It stated that once ZAC is present, the degree of pneumatization in the rest of the temporal bone is significantly higher. We did not investigate temporal bone pneumatization in general, but interpretation of the data should consider the high variability of temporal bone pneumatization in human beings and the temporo-spatial effect of external factors on this phenomenon [68]. Radiotranslucent lesions of the zygomatic arch other than ZAC are very rare but have to be considered: haemangioma [8], [31], aneurysmal bone cyst [13], eosinophilic granuloma [19], [23], and metastasis [42]. Furthermore, extensive pneumatization of the sphenoid sinus can be superimposed on the zygomatic arch on panoramic radiographs [2]. ZAC can become involved in the spread of atypical mastoiditis to other pneumatized areas of the temporal bone. Indeed, inflammation of the TMJ can be the result of mastoiditis [14]. On the other hand, TMJ inflammation may cause otitis externa [60]. Neither route of infection transmission requires ZAC. TMJ surgery has to consider possible structural changes of the zygomatic arch [5], [26], [35], [37], [44], [51], [64]. Presently, magnetic resonance imaging is not well established in ZAC diagnosis. Low signals overlying the glenoid fossa have been attributed to extensive temporal bone pneumatization impairing TMJ diagnostics [69]. However, Randzio et al. [51] showed that pneumatization of the zygomatic arch can be distinguished from joint findings on sagittal MRI sections.

Conclusion ZAC is a relatively frequent anatomical variant of the lateral skull base that needs no treatment. Knowledge of this variant is mandatory in TMJ surgery planning, in malar surgery including the distal zygomatic arch, and also in the assessment of topographically related pathologies.
ZAC on CT is a distinct lesion within the zygomatic arch that shows no alteration of the bone's shape. ZAC is a homogeneously radiotranslucent lesion, and attention should be paid to alterations of this radiological appearance. Presently, the literature does not provide convincing evidence that sectional imaging identifies ZAC more frequently than panoramic radiographs do. However, the internal structure of ZAC and its communication with the air cell system of the temporal bone can only be studied on adequate sectional images.

Notes Competing interests The authors declare that they have no competing interests.
The importance of preventative medicine in conjunction with modern day genetic studies Genetic screening in the primary care setting is the future of preventative medicine. Genetic testing is an important medical tool for assessing various inheritable diseases, conditions, and cancers. The ability to diagnose patients before symptoms surface can help lessen the severity of symptoms and promote quality of life. However, genetic screening can cause psychological distress from the knowledge of test results, in some cases only serving to increase the risk of developing a condition due to stress. Genetic testing can be conducted at any time in life, even before birth. In this review, a compilation of genetic testing's definitions and boundaries, factors influencing an individual's test outcomes, and an overview of a wide variety of diseases, conditions, and cancers is presented.

Introduction In the world of modern health care, innovation leads the way not only to better patient care but also to a deeper understanding of preventative medicine. The ability to treat a genetic disease before it becomes severe is a scientific feat that, until recent years, was unavailable to physicians. Genetic testing saves countless lives that would otherwise fall prey to congenital diseases (i.e. Tay Sachs and Cystic Fibrosis) and cancers (i.e. breast and colon). 1-3 By understanding at a genetic level how a patient may be predisposed to certain health conditions, the planning of their care can start before an unfavorable prognosis, rather than hastily chasing after one. 4 Primary care physicians possessing the advantage of an early start on planned treatment gives them the ability to avert health issues that may later consume a patient's life. As a result, this surge in genetic screening in primary care physician offices can change lives. 2 Unfortunately, for many diseases today genetic analysis will not be able to cure the patient completely ahead of the racing clock. However, it may have the ability to soften the blow of a debilitating disease by lessening the severity of symptoms or preventing it from manifesting early in life. 4 A physician's mission to care, cure, and comfort is enhanced by genetic screening. The capacity to understand and treat preventively can massively enhance the quality of life for a patient. In short, genetic testing in a primary care setting can transform otherwise sick lives into healthier, or even healthy, lives. 4

Genetic disorders Genetic testing works by analyzing changes in DNA that have been linked to certain diseases, conditions, and cancers. 5-8 There are over 7000 disorders that are believed to be linked to Mendelian genetics, and over 700 tests currently available. 9 Analyzing these changes through genetic testing is most useful when an individual has had family members affected by the condition, or if the disease can be linked to inheritance through the process of Mendelian genetics. 10,11 By seeing patterns of diagnosis in relatives, primary care physicians can use those past cases to recommend specific genetic tests to their patients. 5 Different diseases can be inherited in a variety of ways depending on the magnitude of influence the gene (or genes) being investigated may have. 4 Whether the trait is recessive or dominant, or whether the mutation is present in a heterozygous or homozygous state, may also affect the severity of the condition. 6,12,13
More often than not, the severity of a disease or the probability of manifesting a disease's detrimental symptoms is increased when familial cases are common. 4 Genetic testing can also identify those who have a higher chance of showing symptoms of the condition or those who are at risk of passing it on if they have progeny. 6,7,10,12,14 This allows primary care physicians to collect patient data for future generations, aiding in the tracking and prediction of future diagnoses. 15 Various unavoidable genetic diseases that affect an individual from birth are critical to identify early in a patient's life, in order to make informed decisions about choices of planned care to improve quality of life. 6,7,12 Inherited genetic diseases of a recessive nature have been shown to account for nearly 20% of total infant mortality and 10% of infant hospitalization in the United States. 12 Other conditions, such as congenital deafness, can also be linked to hereditary causes. 7

Down syndrome The age of the mother has also been shown to have a negative impact on the occurrence of genetic diseases such as Down syndrome, with a higher incidence found in women who give birth at older than 35 years of age. 16 Due to the rising popularity of screening, older mothers can obtain a better understanding of how probable passing on a hereditary condition such as Down syndrome would be. 16,17 Screening for these genetic diseases from preconception onward has been shown to greatly reduce these statistics and to reduce infant mortality stemming from these genetic conditions. 12,13

Cystic Fibrosis, Tay Sachs, familial dysautonomia, and BRCA genes Multiple factors influence whether an individual will be tested, two of which are family history and ethnicity. 14 As stated before, genetic testing is best utilized when individuals closely related to the patient have tested positive for the mutation or have the condition. 10,14 Certain ethnicities are more likely to carry certain genes. For example, due to the knowledge obtained from genetic screening, individuals of Ashkenazi Jewish descent are known to have a higher prevalence of particular genetic disorders, including Cystic Fibrosis, Tay Sachs, familial dysautonomia, and BRCA gene mutations. 1,6,11,14,16 Thus, as part of primary care, a physician may recommend screening because of a patient's background. 4 This enables the physician to identify conditions preventatively before they have the ability to become debilitating. Genetic screening has seen a surge in popularity among demographics such as individuals of Ashkenazi Jewish descent, and this increase in screening has been shown to decrease the number of Tay Sachs cases in that population. 11,14 With genetics becoming underlined as a point of study in modern primary care, other diseases could see similar patterns. 11,18 This would result in early diagnosis, as well as disease prevalence in the respective population decreasing in a similar fashion to Tay Sachs. This means that, looking forward, genetic testing can be applied to the preventative treatment of any disease so long as the disease has proven to be hereditary. Tay Sachs in the Ashkenazi Jewish population has greatly diminished as a result of testing. 1,6 Early diagnosis of conditions such as Tay Sachs has been proven to favorably impact the survival and quality of life of those patients. 8,19

Cancer Colorectal cancer One of the better understood hereditary cancers is colorectal cancer. 20
Several distinct genes have been linked to different cancers of the colon, such as those for familial adenomatous polyposis (FAP), hereditary nonpolyposis colorectal cancer (HNPCC), and Turcot's syndrome, which has been linked to alterations of the same genes as FAP and HNPCC. 21 Mutations, or alterations of the gene(s) involved, occur in the germ line and can subsequently lead to the development of cancer. 3,8,14,21 Testing positive for FAP, HNPCC, or Turcot's syndrome does not guarantee that an individual will develop cancer. 20 However, it does mean that the individual has an increased likelihood of developing cancer and is without a doubt carrying the gene. 21 Thus, they have the ability to pass it on. 21 As with many cancers, testing positive for these different gene mutations increases the chance of developing the cancer. 20 Nonetheless, it should be said that environmental and lifestyle factors (i.e. diet and exercise) have the potential to largely influence the development of colorectal cancer. 3,20 External factors in an individual's life can also negatively or positively influence other mutations in genes that are not testable at this time with modern techniques. 14 The hereditary colorectal cancers are autosomal dominant in inheritance, meaning that if one of the parents is a carrier, the offspring has a 50% chance of inheriting the mutated gene that predisposes to the development of cancerous polyps. 14 Genetic screening will enable prospective parents to take their inherited high probability of developing colorectal cancer into account when and if deciding to have children. 3,20

Breast and ovarian cancer Breast and ovarian cancer can also be hereditary, with approximately 5-10% of all patients diagnosed having a genetic link to an autosomal dominant mutation of the BRCA1 or BRCA2 genes. 22 As stated before for the colorectal cancers, the presence of a BRCA1 or BRCA2 mutation means that an individual has a higher predisposition to developing cancer than someone without the mutation. 19 The mutation is inherited in an autosomal dominant fashion and thus can affect both men and women. 22 When at least one of these mutations is present, it is more likely that the cancer develops at a younger age and that tumors are identified bilaterally in the body. 14 Overall, mutations of these genes are rare: estimated at 1 in 500 or 1 in 833. 14 These mutations in the BRCA genes are 10 times more common in those of Ashkenazi Jewish descent. 14 Genetic testing is important for identifying those at higher risk of cancer but can also be used in cases where prevention and treatment benefit from an understanding of the underlying genetics. 8,19 In both breast/ovarian cancer and colorectal cancer, the predictive test for the respective gene mutations aids in raising awareness of symptoms as they present, so that treatment may be given in a timely manner. 3,8 There are also data suggesting that chemoprevention can help reduce the risk of breast cancer in those who have tested positive for only one of the BRCA genes. 23 Testing positive for the gene has also been shown to have a positive impact on individuals because it allows them to live a healthier lifestyle by being more aware of their bodily health. 8

Neurological disorders Alzheimer's disease Some genetic diseases are more likely to affect individuals later in life, such as cancer, Alzheimer's disease, and Huntington's disease. 3,5
Many diseases that are linked to genetic inheritance have tests still in development. 13 Alzheimer's disease has been shown to have a genetic component; however, genetic testing for it may not be as useful as other tests. 24 This is solely because modern medical science does not yet have a method for stopping or preventing the disease's progression. Because of this, a potential patient may not want to spend time and money on a genetic test for Alzheimer's when early treatment will do nothing to improve quality of life as the disease progresses. 24,25 In addition, information given very early to a patient about the existence of the disease may be worse in the long run due to psychological impacts and long-term heightened stress. 19,24,25

Huntington's disease Genetic screenings play an integral role in preconception planning, as symptoms may not yet be prominent in newborns but have the capacity to be expressed later in their lives. 2,8,14 For Huntington's disease, symptoms do not usually display themselves until after the point of reproductive maturity, at which point an individual may already be trying to have a child. This leads to higher chances of the autosomal dominant gene being passed on. Screening for Huntington's can be done at any time, pre- or postnatal, which makes it readily accessible to those susceptible. 23,26

Cardiovascular disorders Cardiovascular diseases have many different components, but some have multigenic or monogenic causes which can be tested for. One monogenic cause that can lead to early cardiovascular disease is familial hypercholesterolemia. With cardiovascular disease there are often many factors to consider, and thus genetic testing can only show a predisposition. 25 Other diseases that have been linked to genetics, such as hypertension, hemochromatosis, polycystic kidney disease, and diabetes, also have existing tests. 5,27

Blood disorders Beta thalassaemia Genetic testing is useful in countless ways: from being a preventative method in cancer to reducing the prevalence of genetically predictable conditions overall. 6,28 Beta thalassaemia has shown an 80-100% decrease in new births in Mediterranean countries where prenatal and postnatal genetic testing was implemented in the primary care environment. 28

Sickle cell anemia Diseases can vary in severity depending on how the symptoms manifest and by which route the condition was inherited. 13 For example, sickle cell anemia is an inherited autosomal recessive disease; however, in its heterozygous state it is survivable, with symptoms ranging from rare to none if the person is a carrier. Having genetic testing conducted allows a better chance of survival with autosomal recessive conditions, because treatment can start quickly and lead to an improved prognosis. 28 It also enables carriers to be aware of their unseen genetic mutation and take it into account if they wish to conceive a child in the future.

Endocrine disorders Examples of preventative measures taken after diagnosis via genetic testing have been shown in multiple endocrine neoplasia type 2. 19 Carriers have benefited from preventative measures such as thyroidectomy, which greatly reduces the chance of dying from thyroid cancer or an inherited disorder of the thyroid gland. 19

Treatment effects A new future in curing diseases via pharmaceuticals and other treatments, even those unrelated to heredity, can be paved by genetic testing for familial diseases. 13
As discussed before, certain ethnicities have genetic patterns that can increase their likelihood of developing diseases; however, these genetic patterns can also lead to the development of negative reactions to drugs. 14 Drug response and varying levels of toxicity can have a large impact on the body due to an individual's genetics and particular genes. 13

Genetic testing methods Genetic testing can be accomplished through several different methods. Blood, saliva, and amniotic fluid can be tested to find abnormal genomic sequences that may lead to familial conditions. 8,14,29 These bodily fluids are easily accessible to primary care physicians and are thus samples that can be optimally collected in that setting. 4,15 Testing can also be conducted at practically any stage of a patient's life. 11 Individuals who know they are or may be carriers, and are seeking to have a child, may wish to have preconception genetic testing done to ensure their child will not be affected. 11 Testing can occur before a baby is born, in the prenatal environment of the mother's womb, via amniocentesis. 29 Testing can also be performed postnatally at any time after birth. 15 More often than not, patients choose to have a genetic test conducted either when symptoms arise or as a preventative measure when there is a family history of the condition. 11,23 The ease of collecting sample genetic material from a patient, as well as the flexibility of testing a patient at any time throughout their life, are both positive attributes for introducing genetic testing more widely in the primary care setting over the coming decades. Testing can also be performed in hospitals. Genetic counselors in the hospital setting may recommend screening when a patient is admitted, based on symptoms noticed or on patterns in the patient's family medical history. 4 Additionally, genetic testing is being introduced in the more informal setting of the primary care physician. 7,18 The introduction of genetic testing and screening programs into primary care is becoming paramount, as it increases opportunities for people to be tested. 2 The more opportunities there are for easy and unintimidating genetic testing, the more people will be in charge of their medical future and plan of care. As a result, the people tested acquire a deeper understanding of genetics, which can lead to quicker and more efficient treatment that ultimately decreases mortality. 10,19

Factors influencing genetic screening participation Some individuals would prefer not to know whether a familial affliction has been passed on to them, for psychological reasons (i.e. trauma over the realization of an inherited fatal disease). However, many individuals would like to know if a mutation is present within their genome in order to prepare for treatment and understand the potential influence of the disease on their future. 23,24 Many different factors play into getting tested, such as the financial security needed to pay for the test, the psychological reasons mentioned previously for reacting to grave results (including family members who may thereby be alerted to the possibility of their own genetic mutations), and insurance companies reacting to the results in ways that may raise medical costs drastically for a patient. 23,26 All of these factors can impact an individual's decision to get tested or not. 26

Conclusion Genetic testing is a valuable source of information about a patient's medical wellbeing.
If wielded productively, it can provide vital data to help predict hereditary disease as well as provide time to plan for care if a detrimental genetic disease such as Tay Sachs, Alzheimer's disease, or Down syndrome is confirmed. 1,15,23 In recent years the introduction of genetic screening via primary care physicians has ultimately led to the prevention of, and higher quality treatment for, several familial diseases in countless cases. 2,12 In the years to come, genetic tests will become increasingly accessible in familiar outlets for efficient family health care, especially in the primary care setting. Only time will reveal how DNA tests will evolve to reach more people, utilizing cutting-edge scientific research as the cornerstone to propel them to new heights.

Conflict of interest None.
Oil Pulling: A Wonderful Ayurvedic Therapy

Oil pulling has been an important therapy for dental problems since the time of Ayurveda; the procedure is clearly described in the "Charaka Samhita," an Ayurvedic text from India. Different kinds of edible oils are used for oil pulling, such as sesame oil, coconut oil, olive oil and mustard oil. The practice involves taking about 20 ml of oil into the mouth, swishing it for about 20 minutes and then spitting it out into a sink; by then the oil has turned milky and viscous because of the toxins and the many microbes present in the mouth. Oil pulling is beneficial not only for dental problems but also for other conditions such as eczema, psoriasis, constipation and halitosis. The method is particularly helpful for people who cannot afford regular dental visits, as it requires only edible oils that are readily available in most households.

Introduction

Oil pulling is an ancient technique mentioned in Ayurveda; it involves gargling with oil for about 20 minutes on an empty stomach in the morning for oral and systemic health benefits. It is also said to help detoxify the body by removing toxins from the digestive tract [1]. The concept is not new: it is described in the Ayurvedic text "Charaka Samhita" under the names gandoosha and kavala, and the technique has been used for many years to improve dental health around the world. In India, however, oil pulling is not very common, since people tend to trust new technologies more than Ayurveda [2]. In addition to treating dental problems, it is claimed to cure many other diseases, including thrombosis, itching, chronic infections, asthma and diabetes [3]. More than 60,000 varieties of microorganism are present in the mouth, and the outer covering of these microbes is water insoluble, so they cannot be removed by gargling with water; oil dissolves them more effectively and so helps remove these bacteria from the mouth. The viscosity of oil is also greater than that of water, which inhibits bacterial attachment and plaque deposition on the teeth, a further reason for the efficacy of oil pulling. A variety of common edible oils are used for oil pulling therapy, such as sesame oil, coconut oil and olive oil; sesame oil is the most common because of its antibacterial nature and high viscosity [4].

Procedure

Oil pulling must be done in a proper sitting position with the chin upright. About 20 ml of sesame or coconut oil is taken into the mouth with a spoon and swished between the teeth for 10 to 20 minutes, then spat out into a sink. By then the color and viscosity of the oil have changed completely, as it contains thousands of bacteria and toxins. It is a very effective technique without adverse effects and can be practiced at any age, except in children below 5 years of age because of the risk of swallowing [5].
Inclusion criteria: subjects with plaque-induced gingivitis; subjects having at least 20 permanent natural teeth.

Mechanism of Oil Pulling Therapy

Oil pulling therapy, usually with sesame oil, has been used for many years to strengthen the teeth and gums and to relieve dryness of the throat. The exact mechanism of action is not known, but the oil is believed to dissolve the microorganisms of the mouth and to draw toxins out of the digestive tract. Because microbes are fat soluble, they mix easily with the oil and are carried out of the body. Moreover, sesame oil is antibacterial in nature, and its emulsification during swishing may form a soapy layer that traps microbes. Emulsification also reduces the adhesion of bacteria to the tooth surface, removes superficial worn-out squamous cells, and improves oral hygiene [6].

Gingivitis

Gingivitis is one of the most common oral diseases and is the initial stage of periodontitis; it arises from the deposition of microorganisms on the teeth in the form of plaque. Oil pulling with sesame or coconut oil has been found effective in treating gingivitis; coconut oil contains about 50% lauric acid, which has antimicrobial effects. Plaque-induced gingivitis results from the interaction between plaque and the gum tissues, and oil pulling removes plaque by dissolving the microbes deposited within it [7].

Dental Caries

Dental caries can be effectively prevented and controlled by good plaque control. With continuous practice of oil pulling for 45 days, the bacterial load in the mouth has been reported to fall from 60% to 20%. Oil pulling therefore decreases susceptibility to dental caries [8].

Detoxification

Sesame oil is very rich in antioxidants; its main chemical constituents, sesamin, sesamolin and sesaminol, are highly effective against free radicals, and its antioxidant nature reduces free-radical injury to the tissues [9]. Sesame oil also contains a high concentration of vitamin E and polyunsaturated fatty acids along with its antioxidant properties. As an antioxidant, sesame oil is very effective in limiting the absorption of harmful forms of cholesterol in the liver. Moreover, it absorbs toxins present in the human digestive tract and helps to clear it [10].

Eczema and Psoriasis

Oil pulling is used not only for dental problems but also for skin problems such as eczema and psoriasis, since it is said to clarify the skin by removing toxins and impurities [11]. Psoriasis is an autoimmune disorder in which a hyperactive immune system causes excessive growth of skin cells and inflammation.

Halitosis

The term halitosis denotes unpleasant breath odor. It is not simply a matter of smoking, food intake or bad morning breath on awakening; rather, it arises from poor oral hygiene and plaque deposition in the mouth. People use oil pulling to treat halitosis, but there is no scientific proof supporting oil pulling as a treatment adjunct for this condition [12].

Xerostomia

Xerostomia is a condition of dry mouth caused by decreased secretion of saliva. Treatment of xerostomia with oil pulling therapy has been found effective: one study showed that oil pulling reduced oral malodor and relieved oral dryness in head and neck cancer patients [13].
Intestinal Ulcers

Ulcers are commonly caused by infection with Helicobacter pylori; excessive production of acid in the stomach, brought on by spicy food and smoking, is another main cause of ulcers in the stomach and small intestine. The main symptoms are burning, gnawing and abdominal pain. The two common types are peptic (gastric) and duodenal ulcers [14]. All kinds of ulcers, irrespective of their cause, are said to be cured with oil pulling: during the first few days the stomach pain may increase, but after 2 to 3 weeks the pain disappears completely and patients are free of their ulcers.

Advantages of Oil Pulling

Oil pulling is cheap; the only expense is the oil used. It is very easy: you simply swish oil in your mouth. Compared with other forms of detoxification it is relatively effortless; it does not require dieting, fasting, or consuming unpleasant, often bowel-loosening mixes of herbs and pills, and it is completely harmless [15]. It is also claimed to have advantages over commercial mouthwashes, since it causes no staining, leaves no lingering aftertaste, causes no allergic reactions and is readily available in the household [16].

Tissue Regeneration

In Ayurveda, the well-known rasayana herb amla (the fruit of a tree) is considered a general rebuilder of oral health. Amla works well as a mouth rinse when prepared as a decoction, and one to two grams per day can be taken orally in capsules for long-term benefit to the teeth and gums. Herbs such as amla that support the healing and development of connective tissue when taken internally also benefit the gums. The healing effect of these tonics takes longer to become apparent, since they must saturate the whole body in order to work on the gums, but the results are more lasting. Bilberry fruit and hawthorn berry stabilize collagen, strengthening the gum tissue [17]. Liquorice root promotes anti-cavity action, reduces plaque, and has an antibacterial effect. In Ayurveda, teeth are considered part of Asthi dhatu (bone tissue), so their sockets are regarded as joints; herbs taken internally to strengthen Asthi dhatu, that is, the skeleton and the joints, are therefore good for the long-term health of the teeth. Outstanding examples include yellow dock root, alfalfa leaf, cinnamon bark, and turmeric root.

Conclusion

Ayurveda, a branch of alternative medicine, is regaining importance in various aspects of health, primarily because of its use of natural products. Oil pulling is one such age-old practice highlighted today for its many systemic and dental health benefits, and it has acquired a reputation as a cure for systemic diseases such as diabetes and migraine. A few short-term studies have demonstrated beneficial effects of oil pulling on plaque, gingivitis and the prevalence of dental caries, suggesting its use as an aid to oral hygiene maintenance. Further long-term research is needed to explore the other effects of oil pulling therapy on overall health. Scientific validation of Ayurvedic dental health practices could justify their incorporation into modern dental practice, and publicizing these techniques through appropriate media would benefit the general population by building confidence in these ancient practices, thereby helping to prevent tooth loss.
Exclusion criteria: use of antibiotics or mouthwash in the past 3 months; pregnancy or lactation; smokers (past or current); children below 15 years of age.

Other benefits attributed to oil pulling include relief of migraine headaches, support for hormone imbalances, reduction of redness and swelling in bones and joints, possible relief of congestion, support for normal kidney function, promotion of a normal sleep pattern, improved vision (reported by some people), reduced discomfort, and detoxification of the body.
The Relative Importance of Genetic Diversity and Phenotypic Plasticity in Determining Invasion Success of a Clonal Weed in the USA and China Phenotypic plasticity has been proposed as an important adaptive strategy for clonal plants in heterogeneous habitats. Increased phenotypic plasticity can be especially beneficial for invasive clonal plants, allowing them to colonize new environments even when genetic diversity is low. However, the relative importance of genetic diversity and phenotypic plasticity for invasion success remains largely unknown. Here, we performed molecular marker analyses and a common garden experiment to investigate the genetic diversity and phenotypic plasticity of the globally important weed Alternanthera philoxeroides in response to different water availability (terrestrial vs. aquatic habitats). This species relies predominantly on clonal propagation in introduced ranges. We therefore expected genetic diversity to be restricted in the two sampled introduced ranges (the USA and China) when compared to the native range (Argentina), but that phenotypic plasticity may allow the species' full niche range to nonetheless be exploited. We found clones from China had very low genetic diversity in terms of both marker diversity and quantitative variation when compared with those from the USA and Argentina, probably reflecting different introduction histories. In contrast, similar patterns of phenotypic plasticity were found for clones from all three regions. Furthermore, despite the different levels of genetic diversity, bioclimatic modeling suggested that the full potential bioclimatic distribution had been invaded in both China and USA. Phenotypic plasticity, not genetic diversity, was therefore critical in allowing A. philoxeroides to invade diverse habitats across broad geographic areas. INTRODUCTION Since Charles Elton published his classic book on biological invasions in 1958, ecologists have been seeking to determine the factors that make a species an aggressive invader (Williamson, 1996;Nentwig, 2007;Van Kleunen et al., 2015). The ability of alien species to cope with new and heterogeneous environments is essential for their successful establishment in areas outside their native ranges. Phenotypic plasticity, where one genotype can express different phenotypes, is frequently proposed as a characteristic that allows invaders to maintain components of fitness (e.g., growth, survival or fertility; Parker et al., 2003;Richards et al., 2006;Geng et al., 2007a) and ultimately overall fitness (Pichancourt and Van Klinken, 2012) in heterogeneous environments. Another hypothesis is local adaptation by postinvasion evolution (Lee, 2002;Maron et al., 2004;Colautti and Lau, 2015). In this scenario, rapid selection of adaptive genotypes, often facilitated by high levels of genetic diversity, can result in local adaption within the invaded range (Sakai et al., 2001;Lavergne and Molofsky, 2007;Xu et al., 2010a;Barrett, 2015). Both phenotypic plasticity and genetic diversity are effective in generating phenotypic variation in natural populations. Notably, these two mechanisms are not exclusive (Moroney et al., 2013;Si et al., 2014) and it is the total adaptive phenotypic variation, either due to phenotypic plasticity or due to genetic diversity, that will affect the realized performance of alien species in heterogeneous environments (Sultan, 1995;Falconer and Mackay, 1996). 
Indeed, phenotypic plasticity itself can be the target of natural selection and go through rapid evolution during the different phases of biological invasion (Lande, 2015). Many studies have highlighted the effects of local adaptation or phenotypic plasticity on invasiveness of alien species (Bossdorf et al., 2005;Davidson et al., 2011;Dlugosch et al., 2015), but few have examined the two factors simultaneously. As a result, the relative importance of the two adaptive strategies for invasive species remains largely unknown (Barrett, 2015;Bock et al., 2015). Clonality is also proposed as an important characteristic in invasive alien plants (Pyšek, 1997). Alien plant populations have higher frequencies of clonality than native species and some of the world's most damaging invasive plants are clonal species (Silvertown, 2008). Furthermore, some clonal plants can occupy disturbed and dynamic habitats across broad geographic distributions (Geng et al., 2007a;Ganie et al., 2015). However, for clonal species, many of the physiologically separated individuals are asexual offspring of the same genet and thus share a common genotype (Ellstrand and Roose, 1987;Silvertown, 2008). Theory predicts that clonal plants will only evolve slowly, making local adaptation more difficult to occur (Barton and Charlesworth, 1998;Silvertown, 2008). Phenotypic plasticity is therefore likely to be an important mechanism allowing clonal species to rapidly invade new and diverse environments (Riis et al., 2010;Keser et al., 2014;Roiloa et al., 2014). Alternanthera philoxeroides is native to South America and has become a problematic species in more than 30 countries (Holm et al., 1997). Interwoven stems can form large, dense monocultures, displacing native vegetation, blocking waterways, and causing significant economic impacts to agriculture (Wang and Wang, 1988;Sainty et al., 1998). In the introduced ranges (e.g., Australia, China, and the USA), A. philoxeroides rarely produces viable seeds, reproducing mainly through vegetative structures such as roots and broken stems Holm et al., 1997;Dong et al., 2012). Clonal integration among different ramets is proposed as a mechanism that allow A. philoxeroides to colonize habitats that are spatially heterogeneous at fine scale (Liu et al., 2008;Wang et al., 2009;Yu et al., 2009;Xu et al., 2010b;Guo and Hu, 2012;You et al., 2014). In China A. philoxeroides is widely distributed but genetically uniform DNA markers suggest genetic diversity is extremely low (Xu et al., 2003;Ye et al., 2003). This is consistent with its dominantly clonal reproduction. Despite this, A. philoxeroides occurs in diverse habitats in China, from fully aquatic (e.g., rivers, reservoirs) to terrestrial (e.g., roadside dry lands), and shows prominent phenotypic variation (Pan et al., 2006). Also, phenotypic plasticity, rather than local adaptation, is responsible for the phenotypic variation with relation to different water availabilities (Geng et al., 2007a). An interesting question is whether the species niche of A. philoxeroides in China is mainly determined by phenotypic plasticity and is not limited by low levels of genetic diversity. So far it is not known how the levels of genetic and phenotypic diversity observed in China relates to that present in it's native range and other introduced ranges. Direct comparison of the native and introduced clones is needed to determine the relative importance of genetic diversity and phenotypic plasticity during biological invasions. 
In this study, we conducted a series of intercontinental comparisons using A. philoxeroides clones collected from both native (Argentina) and two introduced ranges (the USA and China). Our major aim was to examine the relative importance of phenotypic plasticity and genetic diversity in determining invasion extent of A. philoxeroides in the USA and China. Molecular marker analyses and a common garden experiment were performed to compare the genetic diversity and phenotypic plasticity of A. philoxeroides among the three regions. We expected that the genetic diversity in the introduced ranges was lower, and phenotypic plasticity was higher, than in native range. In addition, we used a bioclimatic model fitted against native range distribution data to examine whether the full potential distribution of the species in the introduced ranges were invaded. If genetic diversity had played an important role in determining the niche range of A. philoxeroides, we expected that the lower levels of genetic diversity would limit its potential distribution in the introduced ranges. Study Species and Sampling Alternanthera philoxeroides (Mart.) Griseb., alligator weed, is a perennial stoloniferous herb. It can thrive in both terrestrial and aquatic habitats (Figure 1). High biomass allocation to root is an important factor determining the performance of A. philoxeroides in terrestrial habitats (Wilson et al., 2007;Geng et al., 2007a), including regeneration where cold winters damage most above-ground parts (Figure 1). In contrast, regeneration in aquatic habitats relies mainly on stems (Figure 1). A. philoxeroides is native to South America and has invaded many tropical and subtropical areas across the world (Holm et al., 1997). In Argentina, A. philoxeroides is mainly distributed along the Rarana and Uruguay rivers in the north and along the San Borombon and Salado rivers in the center of Buenos Aires province (Sosa et al., 2003; Figure 2). In the USA, A. philoxeroides is distributed in several states in the southern coastal plains from Virginia to southern Florida, and westward along coastal areas to Texas and California (Figure 2). In China, A. philoxeroides is widely distributed, including in most provinces south of the Yellow River (Figure 2). A total of 179 A. philoxeroides specimens were sampled from its distribution in Argentina (7 sites), the USA (9 sites), and China (9 sites) (Table S1, Figure 2). Specimens at a site were sampled from at least 10 m apart. For each individual, a stem fragment or thickened root was sampled in field. These were grown in a greenhouse in China (Shanghai) for about 6 months before the common garden experiment was performed. Molecular Marker Analysis To compare the genetic diversity measured by neutral molecular makers, all field-collected samples were analyzed using Inter Simple Sequence Repeat (ISSR) markers, which have proven effective in discriminating different clones of A. philoxeroides (Ye et al., 2003). In brief, we extracted total DNA using the cetyltrimethyl ammonium bromide (CTAB) protocol from fresh leaves of A. philoxeroides grown in the greenhouse and performed PCR using ISSR primers from the University of British Columbia primer set nine. Eight primers (UBC no. 811,813,823,835,840,841,880,and 887) were selected to genotype A. philoxeroides. For each sample, at least two PCR amplifications were performed to evaluate the reproducibility of the bands obtained. 
Each reaction was carried out in a total volume of 20 µl mixture consisting of 20 ng of template total DNA, 10 mM Tris-HCl (PH 9.0), 50 mM KCl, 0.1% Triton X-100, 2.7 mM primer, 1.5 unit of Taq polymerase and double distilled water. PCR was performed with an Eppendorf Mastercycler programmed for 5 min at 94 • C followed by 40 cycles of 45 s at 94 • C, 45 s at the appropriate annealing temperature (48-52 • C), and 2 min at 72 • C. The last cycle was 7 min at 72 • C, followed by a 4 • C soak. Amplification products were resolved by electrophoresis on 1.5% agarose gels buffered with 1 × TAE. Common Garden Experiment A common garden experiment (Fudan University, Shanghai; E121 • 29 ′ -N31 • 14 ′ ) was conducted to compare phenotypic plasticity of A. philoxeroides from all sampled ranges in response to different water treatments (aquatic and terrestrial habitat). Each habitat consisted of four rectangular plots (15 × 2 m). The aquatic habitat was simulated using 1 m deep ponds while the terrestrial habitat was simulated with raised garden beds. The aquatic and terrestrial plots were spatially alternated with each other, with adjacent pairs considered as replicates (blocks). Regional-level phenotypic plasticity was compared by randomly selecting one clone from each sampling site (i.e., 9 from the USA, 9 from China, and 7 from Argentina). As no plants produced seeds in the greenhouse, thick root fragments were used to produce asexual plants as experimental replications. For each of the 25 clones, eight asexual plants with two pairs of leaves grown in pots (30 cm in diameter and 35 cm in depth, containing 1:1 mixture of loam and sand) were allocated randomly to eight plots (i.e., 2 water treatments × 4 replicates) to give a total of 200 pots. Aquatic plants were monitored daily to ensure that the water level remained nearly 2 cm above the pots in ponds. In terrestrial plots, plants received natural precipitation (1200 mm/year) plus supplementary irrigation in continuous sunny days (1L/pot when surface soil in >50% pots are dry). Plants were harvested after 2 months of growth, which was before any flowers appeared. First, six morphological and physiological traits were measured following the protocol reported in Geng et al. (2007a): (1) leaf length, (2) stem diameter, (3) stem pith cavity diameter, (4) internode length (5) specific leaf area (SLA), and (6) relative chlorophyll content (measured using a chlorophyll meter, Minolta SPAD-502) which gives a value that is well correlated with chlorophyll content. In addition, each individual was separated into four parts: leaves, stems, thick storage roots, and fine roots (i.e. roots with diameter less than 1 mm). All plant materials were oven-dried at 80 • C for 48 h and weighed. Then, two allocation traits were obtained, root/shoot ratio and storage root/fine root ratio. The whole experiment was performed in a closed garden equipped with weed mat to prevent plants from escaping into the field. Analysis of Genetic Variation in Molecular Markers and Quantitative Traits Genetic diversity was assessed both by neutral molecular markers (then referred to as marker diversity) and quantitative traits under common garden conditions (then referred to as quantitative variation). In the molecular marker analysis we recorded ISSR bands as present (1) or absent (0) for each sample. Bands of the same molecular weight were considered to represent the same allele at a given locus. This dataset was analyzed in two ways. 
First, we used Popgene 1.32 (Yeh et al., 1999) to examine the genetic diversity measured by molecular markers at a regional level, using the following genetic variables: the percentage of polymorphic loci (P), the Nei's genic diversity index (He), and the Shannon diversity index (I). We performed a re-sampling procedure to control the confounding effect of uneven sample size (i.e., 21, 32, and 126 for Argentina, the USA and China, respectively). Specifically, we randomly sampled 21 individuals from the USA and China datasets respectively, from which we calculated the regional genetic parameters. This re-sampling procedure was then repeated 30 times, and the average value for each genetic variable based on the sub-dataset was reported along with average based on the whole dataset. Second, PAUP 4.0 (Swofford, 1998) was then used to determine the relationships among A. philoxeroides individuals from different geographical origins using neighbor-joining method. Estimates of similarity were calculated using the index of Nei and Li (1979). Bootstrap values for the neighbor-joining tree were calculated using 1000 replicated neighbor-joining searches. For the quantitative traits in the common garden experiment, we calculated the coefficients of genetic variation (CVg, Houle, 1992) as our estimation of quantitative variation. For each region in each habitat, the coefficient of genetic variation is calculated as CVg = Sqrt (Vg)/M, where Vg was the genetic variance components among clones within a region, and M was the mean value of different clones within a region. Analysis of Phenotypic Plasticity in Quantitative Traits We used the dataset from the common garden experiment that simulated aquatic and terrestrial habitats to compare phenotypic plasticity between the three study regions (Argentina, China and the USA). First, we examined the reaction norms at regional level by plotting the mean values of all clones from the same continent against two habitat treatments. We performed two-way nested ANOVAs to examine the effects of treatment, region, clone, and treatment-by-region interaction on each univariate trait, in which clone was nested in region as a random factor. The statistical model included the following terms: treatment, region, clone, treatment-by-region, treatment-by-clone, and error term. A significant effect of treatment suggests significant phenotypic plasticity of A. philoxeroides in terrestrial vs. aquatic growth conditions while the regional or clonal effect suggests differentiation of A. philoxeroides in phenotypic traits among different regions or clones. A significant treatment by region or treatment by clone interaction indicates that the level of phenotypic plasticity is different among regions or clones. We performed F tests by testing region effect over clone term; by testing both treatment and treatment-by-region effects over the treatment-by-clone interaction term; and testing treatment-byclone effects over the error term. We first used Log (initial stem length) as a covariate to examine whether the covariate explained a significant amount of variation. When the covariate was not significant, we performed an ANOVA instead of ANCOVA, and examined the assumptions of homoscedasticity and performed data transformation where necessary. Second, quantitative analyses of phenotypic plasticity were also performed based on the plasticity index (Schlichting, 1986). 
Specifically, for each trait of each clone, the plasticity index was calculated as: Ip = (Max (P1, P2) -Min (P1, P2))/Mean (P1, P2), where P1 and P2 were the average values of four replicates of the same clone under aquatic and terrestrial habitats, respectively. We performed one-way ANOVA on these indices to examine whether the effect of region was significant, with clone as error term. When a significant region effect was detected, we conducted post-hoc comparisons based on Bonferroni test to examine whether or not the phenotypic plasticity of A. philoxeroides in China and USA was significantly different from those from Argentina. As all Chinese clones in common garden experiments proved to be the same multi-locus genotype (i.e., C-Dominant), the plasticity index for Chinese clones might not be independent to each other. To examine the potential effects of pseudoreplication, we also performed a nested ANOVA on plasticity indices, in which region was the main factor and clone was nested in region as a random factor. The overall mean plasticity for Chinese clones was used in this nested ANOVA. In addition we conducted multiple comparisons via t-tests. Specifically, the overall mean plasticity index for Chinese clones was used as a fixed value in two single-sample t-tests (i.e., USA vs. China mean and Argentina vs. China mean), separately. One two-sample t-test was performed when comparing the USA and Argentina. The results of one-way ANOVA, nested-ANOVA, and multiple t-tests were similar (see Supplementary Files). For simplification, we reported the result of one-way ANOVA in main text. Third, to explore the effect of treatment on phenotypic correlation, we examined the Pearson's product correlations between paired traits in aquatic and terrestrial habitats, respectively. For each genotype, trait means were calculated per habitat. Based on the plasticity index, we examined the correlation of plastic responses across habitats among traits (i.e., plasticity integration). The critical probability levels for the correlation coefficients were Bonferroni corrected for multiple comparisons to α/36 = 0.0013 (i.e., there were 36 paired comparisons). Finally, to provide a multivariate perspective, we examined the overall phenotypic pattern of A. philoxeroides from different regions, using principal component analysis (PCA) conducted on the clone mean value of each trait (n = 50, 2 treatments × 25 clones). Trait data were standardized prior to PCA. We also performed constrained ordination analysis (e.g., redundancy analysis, RDA), in which two factors variables (i.e., treatment and region) were used as explanatory variables. The result of RDA was highly similar to that of PCA (see Supplementary Files). For simplification, we reported the result of PCA (biplot) in main text. All analyses were carried out with R (R Core Team, 2015). Correlation between Genetic and Phenotypic Dissimilarity To assess the correlation between molecular markers and quantitative traits, we further examined whether the differences among clones in their quantitative traits were related to their genetic marker dissimilarity. Specifically, we calculated the Euclidean distance matrix based on the quantitative traits of each genotype in terrestrial and aquatic habitats, respectively. The trait data were standardized prior to distance calculations. Mantel tests were then used to assess correlations between the trait matrix and genetic distance matrix based on molecular markers. 
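The Mantel test referred to above correlates two pairwise distance matrices and assesses significance by permuting the rows and columns of one of them. The analyses in this study were run with the ecodist package in R; the sketch below is a simplified Python version for illustration only, and the toy data, distance metrics, and function name are assumptions rather than the original pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def mantel_test(dist_a, dist_b, n_perm=999, seed=0):
    """Pearson correlation between two condensed distance matrices, with a
    one-sided permutation p-value (fraction of permuted r >= observed r)."""
    rng = np.random.default_rng(seed)
    a = squareform(dist_a, checks=False)   # full square matrices so rows and
    b = squareform(dist_b, checks=False)   # columns can be permuted together
    iu = np.triu_indices_from(a, k=1)      # unique pairs (upper triangle)
    r_obs = np.corrcoef(a[iu], b[iu])[0, 1]
    n, hits = a.shape[0], 0
    for _ in range(n_perm):
        p = rng.permutation(n)
        hits += np.corrcoef(a[np.ix_(p, p)][iu], b[iu])[0, 1] >= r_obs
    return r_obs, (hits + 1) / (n_perm + 1)

# Toy example: 25 clones, 8 standardized trait means vs. 60 binary ISSR bands
rng = np.random.default_rng(1)
traits = rng.normal(size=(25, 8))
markers = rng.integers(0, 2, size=(25, 60)).astype(bool)
r, p = mantel_test(pdist(traits, "euclidean"), pdist(markers, "jaccard"))
print(f"Mantel r = {r:.2f}, p = {p:.3f}")
```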
In addition, the Euclidean distance matrix based on the phenotypic response to treatment (i.e., plasticity) for each genotype across terrestrial and aquatic habitats were also examined using the same method. The distance calculations and Mantel tests were done using the ecodist package (Goslee and Urban, 2007) in R (R Core Team, 2015). Bioclimatic Modeling A bioclimatic model was fitted against the native-range distribution of A. philoxeroides. It was then used to test whether the potential distribution in its introduced range (China and the USA) was fully invaded. The model was developed using CLIMEX Version 4.0 (Kriticos et al., 2015) and the world 10 min climate data set downloaded from CliMond (Kriticos et al., 2012). CLIMEX uses temperature and soil moisture data (calculated using rainfall and evaporation). Distribution data was obtained from Global Biodiversity Information Facility (GBIF, global species distribution dataset, http://www.gbif. org/species/3084923), supplemented by China distribution (NSII, China National Specimen Information Infrastructure, http://www.nsii.org.cn/), and the USA distribution (Early Detection and Distribution Mapping System, EDDMapS, http:// www.eddmaps.org/distribution/viewmap.cfm?sub=2779). A previously published CLIMEX model for alligator weed was modified (Table S2). Temperature parameters were adjusted so that stress only began once conditions were no longer suitable for growth (a CLIMEX requirement not adhered to in the original model), Moisture Index and Temperature Index parameters were tightened as much as possible without affecting the native-range fit, and the cold stress accumulation parameter reduced to better fit the southern-most distribution in Argentina. Outputs (Environmental Index scaled from 0 to 100) from the new model were plotted against distributional data for each region using QGIS 2.12 (Qgis Development Team, 2015). Genetic Variation of Molecular Markers and Individual Traits A total of 179 A. philoxeroides individuals from Argentina (21), the USA (32), and China (126) were analyzed using ISSR markers. The eight ISSR primers generated a total of 60 bands and 61 unique ISSR multi-locus genotypes. For samples from Argentina and the USA, each plant was characterized by a unique multilocus genotype. In contrast, 94% (119) of the Chinese samples were identical (referred as to "C-Dominant" in Figure 3). The eight Chinese genotypes clustered together as a single wellsupported clade in the neighbor-joining tree. This clade was closest to individuals from two sites in the USA (N4 and N8), although bootstrap value was low (Figure 3). In contrast, USA genotypes were represented in many clades, including ones that included Argentine genotypes. Individuals from the same site were usually grouped together, but there was no clear genetic structuring within each region. All three genetic variables (P, He, and I) also showed much higher genetic diversity for clones from Argentina and the USA than those from China, even after the confounding effects of different sample size were controlled (Table 1A). Genetic diversity as estimated using quantitative traits measure during the common garden experiment showed patterns consistent with those revealed by neutral molecular markers. For both habitats, quantitative variation was much lower among Chinese clones than among clones from Argentina and the USA for most quantitative traits examined (Table 1B). 
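The two trait-level summaries behind these comparisons, the coefficient of genetic variation (CVg = sqrt(Vg)/M) and the plasticity index (Ip), are straightforward to compute once clone means are available. Below is a minimal sketch in Python with invented numbers; the original analyses were carried out in R, and the function names and values here are illustrative only.

```python
import numpy as np

def cv_genetic(clone_means):
    """CVg = sqrt(Vg) / M, where Vg is the variance among clone means within a
    region and M is the regional mean (Houle, 1992)."""
    clone_means = np.asarray(clone_means, dtype=float)
    return np.sqrt(clone_means.var(ddof=1)) / clone_means.mean()

def plasticity_index(p_aquatic, p_terrestrial):
    """Ip = (max(P1, P2) - min(P1, P2)) / mean(P1, P2), computed per clone from
    the average of its replicates in each habitat (Schlichting, 1986)."""
    p1, p2 = float(p_aquatic), float(p_terrestrial)
    return (max(p1, p2) - min(p1, p2)) / np.mean([p1, p2])

# Illustrative values: leaf-length means of 7 hypothetical clones in the
# terrestrial plots, and one clone's habitat means.
print(cv_genetic([3.1, 2.8, 3.5, 2.9, 3.3, 3.0, 2.7]))    # ~0.09
print(plasticity_index(p_aquatic=5.2, p_terrestrial=3.0))  # ~0.54
```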
Phenotypic Plasticity of Individual Traits Generally, plants in aquatic plots had longer leaves, longer internodes, thicker stems and larger stem pith cavity, larger specific leaf area, lower root/shoot ratio, lower relative chlorophyll content, and lower storage root/fine root ratio, than those in the terrestrial plots (Figure 4). The two-way ANOVA revealed significant effects of treatment on all the traits, indicating significant phenotypic plasticity across all regions ( Table 2, treatment effect P < 0.01), despite significant differences among clones (Table 2, clonal effect P < 0.01). Contrary to our expectation that clones in introduced ranges had higher phenotypic plasticity than those in the native range, the levels of phenotypic plasticity for examined traits in response to water availability (terrestrial vs. aquatic) were consistent across all three regions (Figure 4). The one-way ANOVA, nested ANOVA, and t-tests also found no significant differences among regions for most examined traits when comparing plasticity indices ( Figure 5, Table S3). Consistent levels of phenotypic plasticity among regions were further supported by the nonsignificant effects of treatment by region interaction (Table 2). Although, plants had similar responses at region level, we did detect significant difference in plasticity among clones ( Table 2). The clone-level reaction norm suggested that the slope of most traits varied greatly, especially for the clones from Argentina ( Figure S1). Indeed, some Argentine clones were more plastic than those from USA and China (Figure 5, Figure S1). Phenotypic correlation analyses showed that some traits were significantly correlated with each other in both habitats ( Figure S2). For example, stem diameter was positively correlated with leaf length and stem-pith-cavity; specific leaf area (SLA) was negatively correlated with leaf length and stem diameter. The treatment changed the phenotypic correlation quantitatively (i.e., increased or decreased the correlation coefficients), but the overall correlation pattern remained unchanged ( Figure S2). For the plasticity integration, only one pair of trait plasticity showed a negative correlation (i.e., the stem pith cavity and relative chlorophyll content, Figure S2). Correlation between Genetic and Phenotypic Dissimilarity The Mantel test found the molecular marker distance to be positively correlated with the dissimilarity of quantitative traits in terrestrial habitat (r = 0.27, p = 0.04) and aquatic habitat (r = 0.23, p = 0.06, Figure S3). However, we detected no significant correlation between marker distance and dissimilarity of phenotypic plasticity across terrestrial and aquatic habitats (r = 0. 15, p = 0.29, Figure S3). Multivariate (PCA) Pattern of Quantitative Genetic Variation and Phenotypic Plasticity The PCA provided a multivariate perspective and indicated the similar pattern. The phenotypic variation among regions was mainly accounted for by the second principal component, which explained 19.53% of the total variation (Figure 6). Specifically, Chinese clones (red) formed a single cluster; in contrast, USA clones (blue) formed two discrete clusters and Argentina clones (green) were interspersed in the PCA space (Figure 6). The first principal component of PCA clearly separated aquatic and terrestrial treatments in PCA space, indicating that most (67.61%) of the phenotypic variation within the common garden experiment was a plastic response to habitat treatment. 
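For readers who want to reproduce the ordination in outline, the procedure described above amounts to standardizing the clone-by-trait matrix and projecting it onto its leading principal components. A minimal sketch with scikit-learn follows; the data, shapes, and variable names are placeholders rather than the study's actual measurements, which were analyzed in R.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Illustrative matrix: 50 rows (25 clones x 2 habitat treatments), 8 traits;
# in the real analysis each row holds the clone means of the traits in one habitat.
rng = np.random.default_rng(0)
trait_means = rng.normal(size=(50, 8))

X = StandardScaler().fit_transform(trait_means)   # standardize traits before PCA
pca = PCA(n_components=2).fit(X)
scores = pca.transform(X)                         # coordinates for a biplot
print(pca.explained_variance_ratio_)              # variance carried by PC1 and PC2
```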
Bioclimatic Modeling Bioclimatic modeling suggested that the full potential distribution of the species in the introduced ranges were invaded in China and the USA. A. philoxeroides occurs in relatively diverse climates within its native range (Argentina), restricted largely by cold stress to the south and west, and heat stress in areas to the north of Argentina (Figure 2). Soil moisture had limited effect in the model as A. philoxeroides was present CV-terrestrial (-aquatic), coefficients of genetic variation in terrestrial (aquatic) plot. from wet to quite arid climates. The Environmental Index (EI) was high even in areas where A. philoxeroides has not been recorded in Argentina. These could not be excluded from the model by further restricting parameters without losing known locations, therefore suggesting non-climatic factors are also important. The potential distribution in China and the USA was largely restricted by cold stress to the north and heat stress within the range and to the west. In both cases most distribution records occurred within the potential range, although records did extend into areas where cold stress was expected to be too high. There were no extensive areas where EI was moderate to high (above 10) in which A. philoxeroides has not yet been reported. DISCUSSION Phenotypic plasticity and genetic diversity have long been proposed to contribute to the invasion success of alien plants, especially clonal invaders, but few studies have tested their relative importance (Barrett, 2015;Bock et al., 2015). In this study, we found high levels of genetic diversity in A. philoxeroides in the native range (Argentina) and one introduced range (USA), but not in another introduced range (China). Specifically, genetic diversity in the USA was similar to that in Argentina. In contrast, the levels of genetic diversity in China were extremely low and many individuals collected from geographically distant sites shared the same multi-locus genotype. Contrary to our expectations that clones in introduced ranges had higher phenotypic plasticity than those in native range, the phenotypic plasticity in response to different water availability (terrestrial or aquatic) was similar across all three regions. Despite the different levels of genetic diversity, bioclimatic modeling suggests that the full potential bioclimatic distribution had been invaded in both China and USA. Taken together, our results suggest that the ability of A. philoxeroides to successfully invade heterogeneous habitats and broad geographic distributions is the consequence of phenotypic plasticity rather than genetic diversity. Comparison of Genetic Diversity In this study, we used both molecular markers and quantitative traits to assess the genetic diversity of A. philoxeroides. Mantel test suggested that the correlation of the two measures is significant but weak (r = 0.27 and 0.23 in terrestrial and aquatic habitat, respectively), which is similar with previous results (r = 0.217) based on meta-analysis (Reed and Frankham, 2001). Both molecular markers and quantitative traits revealed a clear pattern that Chinese clones had much lower levels of genetic diversity, in terms of both marker diversity and quantitative variation, than those from Argentina and the USA. The extremely low levels of marker diversity among analyzed Chinese clones are consistent with previously reported results (Xu et al., 2003;Ye et al., 2003;Geng et al., 2007a). 
The regional-level genetic diversity in USA and Argentina may be underestimated due to smaller sample size, which means the overall pattern of genetic diversity between China and the other two regions may be even more prominent. The levels of genetic diversity of alien species are often shaped by population history (e.g., foundering effect, and whether multipleintroduction had occurred). In this study, all the Chinese samples clustered as a single well-supported clade in the neighbor-joining tree. This suggests Chinese populations may be the result of FIGURE 4 | Reaction norms of Alternanthera philoxeroides from Argentina, China and the USA against the two habitat treatments tested in the common garden experiment. Lines are the mean ± 1 SE. Results of Post-hoc comparison based on Bonferroni test are shown using capital letters (terrestrial plots) and small letters (aquatic plots). Values sharing the same letter do not differ significantly (α = 0.05). Abbreviations are the same as Table 1. a single introduction, with the low levels of genetic diversity among Chinese clones being the result of founding effects during invasion. In contrast, the USA clones were scattered on the neighbor-joining tree and were intermingled with Argentina clones, suggesting that the A. philoxeroides populations in the USA might have stemmed from multiple introductions. Indeed, the levels of genetic diversity of the clones in the USA were similar to that in Argentina, suggesting no obvious founding effect in the USA. The level of genetic diversity had little effects on the invasion potential of A. philoxeroides to invade its potential distribution within either China (low diversity) or the USA (high diversity) as assessed by a bioclimatic model fitted against the native-range distribution. Especially, the genetic uniformity Clone was nested in region as a random factor. Significance levels are given by *P < 0.05, **P < 0.01. Abbreviations are the same as Table 1. of Chinese clones did not appear to restrict the geographic and ecological distribution of A. philoxeroides. Similar results have also been reported in a few other invasive alien plants in their introduced ranges, e.g., Pennisetum setaceum (Poulin et al., 2005), Rubus alceifolius (Amsellem et al., 2000), and Fallopia japonica (Hollingsworth and Bailey, 2000). Notably, most of these invaders are selfing, or apomixis clonal species, which can usually avoid genetic erosion through Allee effects (e.g., inbreeding depression). Therefore, it seems that the levels of genetic diversity may not be a critical factor limiting the distribution and abundance of clonal invasive plants. So far, several welldocumented case studies on post-introduction evolution mainly involved out-crossing or mixed-crossing species, e.g., Hypericum perforatum (Hierro et al., 2005), Phalaris arundinacea (Lavergne and Molofsky, 2007), and Sapium sebiferum (Rogers and Siemann, 2004). Therefore, the role of genetic diversity in invasion success might be variable for plant species with different reproductive modes (e.g., mating system). Comparison of Phenotypic Plasticity Phenotypic plasticity is frequently envisaged as one of the characteristics that contribute to the adaptability and invasiveness of alien species (Parker et al., 2003;Richards et al., 2006) by allowing them to maintain or increase population growth rates across diverse environments (Pichancourt and Van Klinken, 2012). 
In this study, we found that all clones, regardless of their geographic origins, showed significant phenotypic plasticity in biomass allocation and morphological traits in response to varying water ability. This may play an important role in shaping its niche breadth in relation to water. In particular, there was no significant correlation between the dissimilarities of genetic markers and plasticity indexes among clones, suggesting the plastic response norm of A. philoxeroides in terrestrial vs. aquatic habitats is an inherent (species-level) acclimation to these habitats. Although, it is not easy to rigorously confirm the adaptive significance of phenotypic plasticity in non-model species (Sultan, 1995), we did find evidence that the phenotypic plasticity of A. philoxeroides is not the passive result of growth allometry or resource shortage. First, in a previous study, Geng et al. (2007b) found the plastic root/shoot ratio in response to different water treatments were the result of developmentally active adjustment (i.e., true plasticity) rather than ontogenetic drift along the same developmental trajectory (i.e., apparent plasticity). Second, the phenotypic changes are largely functionally adaptive. For example, terrestrial plants allocated more biomass to roots, and produced smaller and thicker leaves, had shorter internodes, which help plants to better balance the absorption and transpiration of water. Aquatic plants had a much larger stem-pith-cavity, which can act as highly efficient aerenchyma (Voesenek et al., 2006) and also enables the stem mats to float on water . Notably, the pattern of trait correlation was qualitatively similar across terrestrial and aquatic habitats ( Figure S2). We also detected a significant correlation in phenotypic plasticity between stem pith cavity and relative chlorophyll content. Such phenotypic integration may reflect adaptation within a certain environment (e.g., terrestrial or aquatic habitat) or could be by-products of the genetic/developmental constraints (Pigliucci, 2003), which may constrain the expression and evolution of phenotypic plasticity in dynamic environments (Gianoli and Palacio-Lopez, 2009). Terrestrial plants allocated much more resources to storage roots than those in aquatic habitats, which may be critical for A. philoxeroides to invade into both terrestrial and aquatic habitats. Specifically, A. philoxeroides is susceptible to seasonal disturbances (e.g., winter frost) in terrestrial habitats, which often kill all the above-ground parts (Figure 1). Thus, the belowground storage roots become the indispensable organs that allow plants to resprout and re-produce in terrestrial habitats (Wilson et al., 2007;Geng et al., 2007a). In contrast, regeneration of A. philoxeroides in aquatic habitats relies mainly on stems that can often survive winter (Figure 1). Indeed, manipulative experiments suggest that the storage roots had much lower resprout ability in aquatic habitats (Geng et al. unpublished data), suggesting decreased functional importance for storage roots in aquatic habitats. The observed phenotypic plasticity was consistent across the native and two introduced ranges, suggesting it is an inherent (species-level) acclimation pattern for growing in diverse habitats. Invasive species are expected to be more plastic than their native conspecifics (Parker et al., 2003;Richards et al., 2006), which particularly applies to A. philoxeroides, given the extremely low genetic variation and broad niche in Chinese populations. 
However in this study, we found no evidence of this. Although, plants from different regions had similar plastic responses, we did detect significant difference among clones. Clone-level reaction norms suggested that the slope (i.e., amount of plasticity) varied greatly, especially in the clones from Argentina (native region). Indeed, phenotypic plasticity levels of Chinese and USA clones fell within the variation ranges Table 1. of Argentine clones. In other words, some native clones were even more plastic than the introduced clones. Previous studies on the comparison of phenotypic plasticity between native and introduced populations/species have produced mixed results. For example, Bossdorf et al. (2005) synthesized 10 case studies, of which half suggested that introduced populations were more plastic than native ones, while the other did not. In more recent meta-analysis studies, (Davidson et al., 2011) found that invasive species were more plastic than non-invasives, while (Palacio-López and Gianoli, 2011) found no significant difference between invasive and native species. Theoretically, it has been proposed that phenotypic plasticity may be favored by natural selection only in the initial phase of invasion, resulting in a transient increase in plasticity; in later invasion phases, plasticity will reduce due to plasticity costs because the novel habitat poses continuous directional selection on the optimum phenotype (Palacio-López and Gianoli, 2011;Lande, 2015). However, such a process of genetic assimilation is less likely to occur in asexual clonal species like A. philoxeroides. Indeed, the absence of different plasticity levels between native and introduced populations in A. philoxeroides does not seem to be the result of transient and reversible post-invasion evolution, but an inherent characteristic of A. philoxeroides. Phenotypic plasticity may be much more important than genetic diversity in determining the success of clonal invasive species like A. philoxeroides. In non-clonal species with high levels of genetic diversity, local adaptation and post-invasion evolution are frequently observed (Lee, 2002;Maron et al., 2004;Novy et al., 2013;Turner et al., 2014). However, in clonal species, the efficiency of natural selection is often constrained and rapid evolution is more difficult to occur (Barton and Charlesworth, 1998;Silvertown, 2008). In the case of A. philoxeroides, phenotypic plasticity, rather than genetic diversity, may be critical for the potential to cope with heterogeneous habitats with variable water availabilities and climate conditions, i.e., the basic niche, which can translate into a broader realized niche in introduced ranges when the co-evolved competitors (Gurevitch, 1986) and natural enemies (Louda and Rodman, 1996) are absent. This is partially supported by the bioclimatic model result, which greatly overestimated the native-range distribution of A. philoxeriodes, suggesting that other factors such as topography and competition are important in limiting the distribution of A. philoxeriodes in the native range. Including other factors in the distribution model, as has been done for similar species (Murray et al., 2012), would therefore help to demonstrate the key factors that lead to the niche expansion of A. philoxeroides in introduced ranges. A biogeographical approach is often proposed to compare the introduced populations with their native counterparts (Bossdorf et al., 2005;Hierro et al., 2005). 
If we regard biological invasion as a "natural experiment, " the repeated invasion success by some global invaders (e.g., A. philoxeroides) provides valuable information akin to experimental replications. In cases where repeated invasions indeed share common features, comparisons among these replications may help identify the relative importance of different factors in determining invasion success. In this study, we compared the genetic diversity and phenotypic plasticity of A. philoxeroides across three regions. Our results revealed that the pattern of "lower genetic diversity" in one introduced range (i.e., China) was not found in another introduced range (i.e., the USA), reflecting the heterogeneous nature of biological invasions even for the same invader. In contrast, high levels of phenotypic plasticity were found across all three regions, highlighting the importance of phenotypic plasticity as a common feature underlying successful invasions of A. philoxeroides. Accordingly, this multi-region comparative approach, including two or more biogeographical replicates, may be especially indicative for understanding the relative importance of different factors underlying successful invasion. AUTHOR CONTRIBUTIONS YG, BL, JC, and CX designed the research. YG performed the wet lab work. RV performed the climate niche modeling. YG and CX performed the data analysis. YG, AS, CX participated in the sampling. YG, RV, BL, JC, and CX drafted and revised the manuscript. All authors carefully read and approved the final manuscript. FUNDING This study was supported by the National Natural Science Foundation of China (31000112, 31260055), and the International Foundation for Science (Grant A/4424-1).
NuSTAR Observations of Heavily Obscured Quasars at z ~ 0.5 We present NuSTAR hard X-ray (3-79 keV) observations of three Type 2 quasars at z ~ 0.4-0.5, optically selected from the Sloan Digital Sky Survey (SDSS). Although the quasars show evidence for being heavily obscured Compton-thick systems on the basis of the 2-10 keV to [OIII] luminosity ratio and multiwavelength diagnostics, their X-ray absorbing column densities (N_H) are poorly known. In this analysis: (1) we study X-ray emission at>10 keV, where X-rays from the central black hole are relatively unabsorbed, in order to better constrain N_H; (2) we further characterize the physical properties of the sources through broad-band near-UV to mid-IR spectral energy distribution (SED) analyses. One of the quasars is detected with NuSTAR at>8 keV with a no-source probability of<0.1%, and its X-ray band ratio suggests near Compton-thick absorption with N_H \gtrsim 5 x 10^23 cm^-2. The other two quasars are undetected, and have low X-ray to mid-IR luminosity ratios in both the low energy (2-10 keV) and high energy (10-40 keV) X-ray regimes that are consistent with extreme, Compton-thick absorption (N_H \gtrsim 10^24 cm^-2). We find that for quasars at z ~ 0.5, NuSTAR provides a significant improvement compared to lower energy (<10 keV) Chandra and XMM-Newton observations alone, as higher column densities can now be directly constrained. INTRODUCTION Quasars are the sites of the most rapid black hole growth in the universe (Salpeter 1964;Soltan 1982). They represent the luminous end of the active galactic nucleus (AGN) population, often outshining their host galaxies. The first unobscured ('Type 1') quasars were discovered over 50 years ago (Schmidt 1963;Hazard et al. 1963), and more than one hundred thousand have now been spectroscopically identified (e.g., Véron-Cetty & Véron 2010; Pâris et al. 2012). For obscured ('Type 2') quasars 1 the situation is not as advanced. Similar to the early Type 1 quasars, Type 2 quasars were ini-tially identified from radio selection (e.g., Minkowski 1960), and over the following decades several hundred powerful 'radio galaxies' (as such radio-selected Type 2 quasars are typically called) were identified (for reviews, see McCarthy 1993; Miley & De Breuck 2008). However, it is only in the past decade that radio-quiet Type 2 quasars have been found in large numbers. Such sources are generally identified on the basis of either their relatively hard X-ray spectral slopes (e.g., Norman et al. 2002;Stern et al. 2002), optical spectral features (e.g., Steidel et al. 2002;Zakamska et al. 2003), or midinfrared (mid-IR) colors (e.g., Lacy et al. 2004;Stern et al. 2005). Importantly, mid-IR color selection of Type 2 quasars using the all-sky Wide-Field Infrared Survey Explorer (WISE; Wright et al. 2010) survey identifies several million Type 2 quasars, roughly down to the bolometric luminosity of the primary Sloan Digital Sky Survey (SDSS; York et al. 2000) Type 1 quasar spectroscopic survey (Stern et al. 2012;Assef et al. 2013;Donoso et al. 2013). The exact nature of Type 2 quasars is still under debate. A simple extension of the orientation-driven unified model of AGN (Antonucci 1993;Urry & Padovani 1995) to high luminosities can account for their existence. However, there is also observational evidence for an evolutionary link to Type 1 quasars (e.g., Sanders et al. 1988;Hopkins et al. 2008). 
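The absorption diagnostics summarized in the abstract above rest on comparing rest-frame X-ray luminosities with more isotropic proxies such as [O III] or mid-IR emission. As a rough illustration of how an observed X-ray flux is converted into such a luminosity, the sketch below uses astropy with an assumed flat cosmology and an assumed power-law photon index; all numerical values and variable names are illustrative and are not taken from the paper.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

# Assumed cosmology and source properties (illustrative values only)
cosmo = FlatLambdaCDM(H0=71, Om0=0.27)
z = 0.41                                     # redshift of a hypothetical quasar
flux_2_10 = 1e-14 * u.erg / u.cm**2 / u.s    # observed 2-10 keV flux (made up)
gamma = 1.8                                  # assumed power-law photon index

d_l = cosmo.luminosity_distance(z).to(u.cm)
k_corr = (1.0 + z) ** (gamma - 2.0)          # k-correction for a power-law spectrum
lum_2_10 = 4.0 * np.pi * d_l**2 * flux_2_10 * k_corr

print(f"L(2-10 keV) ~ {lum_2_10.to(u.erg / u.s):.2e}")
```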
The importance of Type 2 quasars to the cosmic evolution of AGN is further demonstrated by their requirement in models of the cosmic X-ray background (CXB) (e.g., Treister & Urry 2005; Gilli et al. 2007; Treister et al. 2009). However, the observed X-ray properties of Type 2 quasars are poorly constrained at present. Consequently, the column density (N H ) distribution 2 and Compton-thick 3 fraction of quasars are poorly known, which has implications for both AGN and CXB models (e.g., Fabian et al. 2008; Draper & Ballantyne 2010). To date, the largest sample of spectroscopically confirmed (radio-quiet) Type 2 quasars at z ≲ 1 is that of Zakamska et al. (2003) and Reyes et al. (2008). Zakamska et al. (2003) selected 291 Type 2 quasars at redshift 0.2 ≲ z ≲ 0.8 from the SDSS based on their optical properties: high [O III] λ5007 line power and narrow emission lines. Reyes et al. (2008) used the same approach and more recent SDSS data to extend the sample to 887 objects. While X-ray selections of Type 2 quasars at ≲ 10 keV are biased against the most heavily obscured sources (e.g., Maiolino et al. 1998), [O III] emission is mostly produced on ∼ 100 pc scales and is thus relatively unaffected by nuclear obscuration, allowing larger numbers of the heavily obscured, X-ray faint objects to be found. Following up [O III] selected, rather than X-ray selected, objects with X-ray observations thus gives a less biased estimate of the N H distribution of AGN (e.g., Risaliti et al. 1999). The X-ray properties of the Zakamska et al. (2003) and Reyes et al. (2008) Type 2 quasar sample have been studied using Chandra and XMM-Newton observations (Ptak et al. 2006; Vignali et al. 2006, 2010; Jia et al. 2013). Vignali et al. (2006, 2010) measured column densities for a handful of sources through 'direct' means (i.e., using X-ray spectroscopic analysis). The highest column densities measured in this manner were N H ≈ 3 × 10 23 cm −2 . However, distant obscured quasars are X-ray weak and in most cases direct constraints are not feasible. Instead, an 'indirect' approach to estimating column densities can be used, where the observed X-ray emission is compared with a proxy for intrinsic AGN power (e.g., the mid-IR continuum emission from hot dust or high-excitation emission lines; Bassani et al. 1999; Lutz et al. 2004; Heckman et al. 2005; Alexander et al. 2005, 2008; Cappi et al. 2006; Panessa et al. 2006; Meléndez et al. 2008; Gandhi et al. 2009; LaMassa et al. 2009, 2011; Gilli et al. 2010; Goulding et al. 2011). Vignali et al. (2006, 2010) were limited to indirect absorption constraints for the majority of their Type 2 quasar sample, and found in every case that Compton-thick absorption (i.e., N H > 1.5 × 10 24 cm −2 ) is required to explain the X-ray suppression in these sources. To first order, there appears to be a bimodal N H distribution for optically selected Type 2 quasars, with ∼ 40% having N H = 10 22 -3 × 10 23 cm −2 and ∼ 60% being Compton-thick. This is interesting given that a continuous N H distribution is measured for Type 2 Seyferts (e.g., Bassani et al. 1999; Risaliti et al. 1999; LaMassa et al. 2009, 2011). To better constrain the N H distribution of Type 2 quasars, more robust identifications of Compton-thick absorption must be obtained through either: (i) measurement of strong Fe Kα emission, with EW ≥ 1 keV, which results from the Fe Kα line being viewed in reflection against a suppressed continuum (e.g., Ghisellini et al. 1994; Levenson et al.
2002); or (ii) measurement of high column densities through spectroscopic analysis at high energies above the photoelectric absorption cutoff (i.e., above observed-frame 8 keV for z ∼ 0.5 and N H ∼ 10 24 cm −2 ), where X-ray emission is relatively unabsorbed. The recent launch of the Nuclear Spectroscopic Telescope Array (NuSTAR, Harrison et al. 2013) will see a breakthrough in our understanding of heavily obscured AGN and the CXB population in general. NuSTAR is the first orbiting observatory with the ability to focus high energy ( 10 keV) X-rays using grazing incidence optics. It provides a two orders of magnitude improvement in sensitivity and over an order of magnitude improvement in angular resolution over previous hard X-ray observatories. The high energy range at which NuSTAR operates (3-79 keV) means that the intrinsic, unabsorbed emission of AGN is observed for all but the most heavily obscured, Compton-thick objects. At z 1, it is now possible to directly constrain column densities an order of magnitude higher than those achievable with Chandra and XMM-Newton alone (e.g., Luo et al. 2013). In this paper, we present exploratory NuSTAR observations of three optically selected Type 2 quasars at z ≈ 0.4-0.5. All three have been identified as Compton-thick candidates in previous studies (Vignali et al. 2006Jia et al. 2013). We use X-ray data from NuSTAR, Chandra and XMM-Newton, and near-UV to mid-IR data from other observatories to determine the physical properties of the quasars. In particular, we use a combination of direct and indirect methods to constrain the absorbing column densities. The paper is organized as follows: our sample selection is detailed in Section 2; we describe the observations, data reduction and data analysis in Section 3; our main results regarding X-ray absorption constraints are presented in Section 4; we summarize our main conclusions in Section 5. The cosmology adopted throughout this work is (Ω M , Ω Λ , h) = (0.27, 0.73, 0.71). SAMPLE SELECTION First, we selected objects at z ≈ 0.4-0.5 from the Chandra and XMM-Newton studies of SDSS selected Type 2 quasars by Vignali et al. (2006Vignali et al. ( , 2010 and Jia et al. (2013). Although the objects have narrow Hβ line emission, the Hα line lies outside the SDSS spectral range at these redshifts. Therefore, we cannot rule out that these quasars are luminous versions of the Type 1.9 Seyferts that show evidence for a broad Hα component but no broad Hβ component (Osterbrock 1981). Second, we selected quasars with low observed X-ray to [O III] luminosity ratios, L 2−10keV /L [OIII] < 2.5. This threshold corresponds to a two orders of magnitude suppression of the observed X-ray luminosity, assuming the Mulchaey et al. (1994) relation between [O III] and intrinsic 2-10 keV flux (taking into account the variance of the relation), which is consistent with Compton-thick absorption. This is a conservative selection, since the Mulchaey et al. (1994) relation was calibrated for Type 2 Seyferts, and Type 2 quasars typically have larger X-ray to [O III] luminosity ratios (Netzer et al. 2006). Third, we made sub-selections of three quasars which show evidence for extreme obscuration on the basis of different diagnostics: • SDSS J001111.97+005626.3 (z = 0.409, L 2−10keV = 3.1 × 10 42 erg s −1 , L [OIII] = 1.8 × 10 42 erg s −1 ; Reyes et al. 2008;Jia et al. 2013) has a flat X-ray spectral slope at observed-frame 0.3-10 keV (Γ = 0.6 +1.17 −1.15 ; Jia et al. 
2013), which suggests that the X-ray emission is rising steeply towards high energies (> 10 keV). Unlike the other two quasars, there is no mid-IR spectroscopy available. NUSTAR AND MULTIWAVELENGTH DATA In our analysis of the three Type 2 quasars, we used NuSTAR observations in conjunction with lower energy X-ray observations from Chandra and XMM-Newton, and near-UV to mid-IR data primarily from large-area public surveys. Hereafter we refer to the quasars using abbreviated SDSS object names. 3.1. NuSTAR Observations NuSTAR consists of two co-aligned X-ray telescopes (focal length = 10.14 m) which use grazing incidence optics to focus hard X-rays (3-79 keV) onto two focal plane modules (FPMs A and B; Harrison et al. 2013). Each FPM provides a ≈ 12′ × 12′ field of view at 10 keV, and a pixel size of 2.46″. The NuSTAR PSF is characterized by a full-width half maximum (FWHM) of 18″, and a half-power diameter of 58″. The astrometric accuracy for bright X-ray sources is ±8″ (90% confidence; Harrison et al. 2013). The Type 2 quasars, SDSS J0011+0056, SDSS J0056+0032 and SDSS J1157+6003, were observed by NuSTAR with nominal exposure times of 19.6 ks, 23.5 ks and 23.3 ks, respectively. Details of the observations, including net exposure times, are provided in Table 1. We processed the data using the NuSTAR Data Analysis Software (NuSTARDAS) v. 1.3.0. Calibrated and cleaned event files were produced using the NUPIPELINE script and the NuSTAR CALDB 20131007 release with the standard filter flags. Photometry and Source Detection To characterize the high energy X-ray emission and determine whether sources are detected, we performed photometry in the observed-frame 3-24 keV, 3-8 keV, and 8-24 keV bands for both of the NuSTAR FPMs following Alexander et al. (2013). We avoided using photons above 24 keV, where the drop in effective area and the prominent background features (see Figures 2 and 10 of Harrison et al. 2013, respectively) hinder the analysis of faint X-ray sources such as Type 2 quasars. We split the NuSTAR event files into individual band images using DMCOPY, part of the Chandra Interactive Analysis of Observations software (CIAO, v4.4; Fruscione et al. 2006). 4 We extracted the gross source counts (S) from a 45″-radius aperture centered on the SDSS position. For a source at the NuSTAR aim point, and for the energy range (3-24 keV) and spectral slopes (Γ = 0.6-1.8) used in this study, this aperture encloses ≈ 65% of the full PSF energy. We extracted the background counts (B) from an annulus with an inner radius 90″ from the source and an outer radius 150″ from the source, which allowed the local background to be sampled while minimising contamination from the source. To obtain the background counts in the source extraction region (B src ), we multiplied B by the area scaling factor between the source and background regions (A S /A B ). Net source counts were calculated as S − B src , and the corresponding 68.3% confidence level uncertainties were taken as √(S + B(A S /A B )²). For non-detections, we calculated 99.7% confidence level upper limits using the Bayesian method of Kraft et al. (1991). The NuSTAR photometry is given in Table 2. To test whether the quasars are detected in the individual NuSTAR band images, we looked for significant source signals at their SDSS positions. We assumed binomial statistics and calculated false probabilities, or 'no-source' probabilities (P), using the following equation: P = Σ_{X=S}^{T} [T!/(X!(T − X)!)] p^X (1 − p)^(T−X), (1) where T = S + B and p = 1/(1 + B/B src ).
P is the probability that, assuming there is no source at the SDSS position, the measured gross counts in the source aperture (S) are purely due to a background fluctuation (Weisskopf et al. 2007). Given that the three Type 2 quasars are faint at 3-8 keV (see Table 2 for Chandra and XMM-Newton fluxes and upper limits), and likely have flat X-ray spectra with emission rising steeply to higher energies, NuSTAR is most likely to detect the sources above 8 keV (observed-frame). At these energies Chandra and XMM-Newton have little to no sensitivity. In Figure 1, we show the S and B src values measured with NuSTAR for the 8-24 keV band (filled symbols), and the no-source probabilities to which they correspond (dashed lines). 5 For the purposes of this figure, Poisson statistics have been assumed; for our sources, B is large and the Poisson integral thus provides a good approximation of Equation 1 (Weisskopf et al. 2007). Taking binomial no-source probabilities greater than 1% to indicate non-detections, neither SDSS J0056+0032 nor SDSS J1157+6003 are detected in either FPM. SDSS J0011+0056, on the other hand, is detected in FPMA with a binomial no-source probability of 0.093%. 6 The NuSTAR image corresponding to this detection is shown in Figure 2. The source is not detected in FPMB, which has higher background noise relative to FPMA for this observation; indeed the net source counts for FPMA are consistent with the upper limit for FPMB (see Table 2). SDSS J0011+0056 is also weakly detected in the 3-24 keV band for FPMA, with a binomial no-source probability of 0.58%. Aside from this, none of the quasars are detected in the 3-8 keV and 3-24 keV bands.
Table 1 (caption, partial). (3) and (4): NuSTAR observation ID and start date. (5): Net on-axis NuSTAR exposure time; this value applies to both FPMA and FPMB. (6): Lower energy X-ray observatory data used (Chandra or XMM-Newton). (7), (8) and (9): Chandra or XMM-Newton observation ID, observation start date, and net on-axis exposure time, corrected for flaring and bad events.
Fig. 1 (caption, partial). Gross source counts (S) and expected background counts in the source aperture (B src ) at observed-frame 8-24 keV for SDSS J0011+0056, SDSS J0056+0032 and SDSS J1157+6003 (circles, squares and diamonds, respectively). Background counts were measured using two approaches: direct measurement from the NuSTAR images (filled symbols), and from model background maps (empty symbols). The A and B labels correspond to FPMA and FPMB, respectively. The dashed lines indicate Poisson no-source probabilities. There is one significant detection: SDSS J0011+0056 is detected with FPMA.
The no-source probability is sensitive to the background region sampled. To partially address this we also measured the background from model background maps produced using NUSKYBGD (Wik et al., in prep.), summing counts within the 45″-radius source aperture. These measurements are shown as empty symbols in Figure 1. SDSS J0011+0056 is still detected in FPMA using this approach, with a no-source probability of 0.033% at 8-24 keV. Flux Calculation For each NuSTAR energy band we determined the conversion factor between net count rate and source flux using XSPEC v12.8.1j (Arnaud 1996), taking into account the Response Matrix File (RMF) and Ancillary Response File (ARF) for each FPM. We assumed a power-law model with Γ = 1.8, consistent with that found for AGN at observed-frame 3-24 keV. We corrected fluxes to the 100% encircled-energy fraction of the PSF. The NuSTAR fluxes are given in Table 2.
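The aperture photometry and detection test described above (Section 3.1.1) reduce to a short numerical routine. The Python sketch below is one illustration of that calculation: it computes the net counts, their uncertainty, and the binomial no-source probability of Equation 1. The count values and aperture sizes fed in at the bottom are placeholders for illustration, not the measurements reported in Table 2.

```python
# Sketch of the aperture-photometry and detection test described above.
# The specific count values below are illustrative placeholders, not the
# measurements reported in Table 2.
import numpy as np
from scipy.stats import binom

def nustar_photometry(S, B, area_src, area_bkg):
    """Net counts, 1-sigma uncertainty, and binomial no-source probability.

    S        : gross counts in the source aperture
    B        : counts in the background annulus
    area_src : area of the source aperture
    area_bkg : area of the background annulus
    """
    scale = area_src / area_bkg          # A_S / A_B
    B_src = B * scale                    # expected background in the source aperture
    net = S - B_src
    net_err = np.sqrt(S + B * scale**2)  # Poisson errors propagated through S - B*scale

    # Binomial no-source probability (Equation 1):
    # P = sum_{X=S}^{T} T!/(X!(T-X)!) p^X (1-p)^(T-X),
    # with T = S + B and p = 1/(1 + B/B_src).
    T = S + B
    p = 1.0 / (1.0 + B / B_src)
    P_false = binom.sf(S - 1, T, p)      # P(X >= S) under the no-source hypothesis
    return net, net_err, P_false

# Placeholder example: a 45" source aperture and a 90"-150" background annulus.
r_src, r_in, r_out = 45.0, 90.0, 150.0
area_src = np.pi * r_src**2
area_bkg = np.pi * (r_out**2 - r_in**2)
print(nustar_photometry(S=90, B=400, area_src=area_src, area_bkg=area_bkg))
```

A no-source probability below 1% from this routine would correspond to a detection under the threshold adopted above.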
For the NuSTAR-detected quasar, SDSS J0011+0056, we measure an observed-frame 8-24 keV flux of 1.32 × 10 −13 erg s −1 cm −2 . This value is consistent with extrapolations from the XMM-Newton 0.5-10 keV count rate given the photon index constraints of Jia et al. (2013), Γ = 0.6 +1.17 −1.15 , and assuming a simple unabsorbed power-law model. Additionally, as we later show in Section 4.2, our X-ray flux measurement for SDSS J0011+0056 is consistent with that expected from its 6 µm luminosity, which is assumed to result from the reprocessing of AGN emission by obscuring dust. Lower Energy X-ray Data For SDSS J0011+0056 we used the archival XMM-Newton EPIC observation, first published in Jia et al. (2013). We analyzed the Pipeline Processing System (PPS) data products for this observation using the Science Analysis Software 7 (SAS v.12.0.1). The MOS1 and MOS2 data were coadded with the SAS task EPICSPECCOMBINE. The PN data were excluded, since SDSS J0011+0056 is close to a chip gap. The source counts were extracted from a 15″-radius aperture and the background counts were extracted using an 80″-radius source-free aperture, selected to sample the local background while avoiding chip gaps and nearby serendipitous sources. We used XSPEC to convert from count rate to flux, assuming a power-law model with Γ = 1.8 and using the XMM-Newton RMF and ARF. Throughout this work, we neglect the cross-calibration constants between MOS and NuSTAR as the current best estimates are ∼ 7 ± 5% (Madsen et al., in prep.), and a change on this scale does not affect our results. For SDSS J0056+0032 and SDSS J1157+6003 we used the archival Chandra observations, first published in Vignali et al. (2006, 2010). We reprocessed the data using CHANDRA REPRO, 8 a CIAO pipeline, to create event files. The source counts were extracted from a 3″-radius aperture, and the background counts were extracted from an annulus with an inner radius 10″ from the source and an outer radius 30″ from the source. As SDSS J0056+0032 and SDSS J1157+6003 are non-detections at observed-frame 3-8 keV, we calculated 99.7% confidence level upper limits for the source counts using the Bayesian method of Kraft et al. (1991). To calculate fluxes, we converted from Chandra count rates with the HEASARC tool WebPIMMs 9 (v4.6b) assuming a power-law model with Γ = 1.8, and corrected to the 100% encircled-energy fraction of the PSF. As the Type 2 quasars are faint at X-ray wavelengths, we are unable to fit the spectra accurately. For instance, SDSS J0011+0056 is detected with XMM-Newton, but using the combined MOS1+MOS2 data we only extract 5.6 and 20.6 net source counts in the observed-frame 0.5-3 keV and 3-8 keV bands, respectively. We list the Chandra and XMM-Newton 3-8 keV fluxes and upper limits in Table 2.
7 http://xmm.esa.int/sas/ 8 http://cxc.harvard.edu/ciao/ahelp/chandra_repro.html 9 http://heasarc.gsfc.nasa.gov/Tools/w3pimms.html
Table 3 (caption, partial). … (Zakamska et al. 2004); this data point was not used in the SED modeling. Best-fit parameters (corrected for dust reddening) from the SED decomposition described in Section 3.3: â is the fractional contribution of the AGN to the 0.1-30 µm emission; L 6µm is the rest-frame 6 µm luminosity (νLν) of the AGN in units of 10 44 erg s −1 . The uncertainties are standard deviations, derived from the Monte Carlo re-sampling of the photometric data.
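For the undetected sources, the 99.7% confidence level upper limits quoted above rest on a Bayesian treatment of Poisson counts. The sketch below follows the same general formalism (Poisson likelihood with a uniform prior on the non-negative source counts, in the spirit of Kraft et al. 1991); it is a simplified illustration rather than the exact prescription used here, and the input counts are placeholders.

```python
# Sketch of a Bayesian upper limit on source counts in the spirit of
# Kraft, Burrows & Nousek (1991): Poisson likelihood, uniform prior on the
# non-negative source counts, known mean background. The input counts are
# placeholders, not the values in Table 2.
import numpy as np
from scipy.special import gammaincc
from scipy.optimize import brentq

def bayesian_upper_limit(n_obs, b_mean, cl=0.997):
    """Upper limit s_ul such that P(source counts <= s_ul | n_obs, b_mean) = cl.

    n_obs  : total counts observed in the source aperture (integer)
    b_mean : expected background counts in the source aperture
    cl     : credible level (0.997 ~ '3 sigma')
    """
    # Posterior CDF of the source counts, written with the regularized
    # upper incomplete gamma function Q(a, x) = gammaincc(a, x).
    norm = gammaincc(n_obs + 1, b_mean)

    def cdf(s):
        return (norm - gammaincc(n_obs + 1, s + b_mean)) / norm

    # The CDF rises from 0 at s = 0 towards 1; bracket generously and solve.
    hi = b_mean + 10.0 * np.sqrt(b_mean + n_obs + 1.0) + 10.0
    return brentq(lambda s: cdf(s) - cl, 0.0, hi)

# Placeholder example: 4 counts observed with 2.3 expected background counts.
print(bayesian_upper_limit(n_obs=4, b_mean=2.3))
```

The resulting count limit would then be converted to a flux limit with the same count-rate-to-flux factors described above.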
Near-UV to Mid-IR Data and SED Decomposition To investigate the multiwavelength properties of the three Type 2 quasars, in particular the mid-IR emission from the AGN, we collected photometric data at 0.3-30 µm (i.e., at near-UV through mid-IR wavelengths). We used imaging data from public large-area surveys, primarily the Sloan Digital Sky Survey (SDSS; York et al. 2000), the UKIRT Infrared Deep Sky Survey (UKIDSS; Lawrence et al. 2007), and the WISE all-sky survey (Wright et al. 2010). Additionally, for SDSS J0056+0032 and SDSS J1157+6003, we used Spitzer photometry from the Spitzer Enhanced Imaging Products Source List. 10 The photometric dataset, not corrected for Galactic extinction, is provided in Table 3. We note that since the observations are not contemporaneous, AGN variability may affect the SED analysis at longer wavelengths, where the AGN is bright with respect to the host galaxy. We used the near-UV through mid-IR photometric data to produce broad-band spectral energy distributions (SEDs) for our sample. We modeled the SEDs using the Assef et al. (2010) 0.03-30 µm empirical low-resolution AGN and galaxy templates. Each SED was modeled as a best-fit combination of an elliptical, a spiral and an irregular galaxy component, plus an AGN. We refer the reader to Assef et al. (2008, 2010, 2013) for further details. In Fig. 3 we present the SEDs and best-fitting model solutions. For SDSS J1157+6003 we also show the IRAS 60 µm flux measured by Zakamska et al. (2004, green data point in Fig. 3), which lies beyond the wavelength range of the galaxy templates and was therefore excluded from the SED modeling. The data point is consistent with a simple extrapolation of the best-fitting model from shorter wavelengths. Zakamska et al. (2004) also detect SDSS J0056+0032 at 60 µm, but at a low significance level (80% confidence). In Table 3 we provide the best-fitting parameters â (the fractional contribution from the AGN component to the 0.1-30 µm emission after correction for dust reddening; Assef et al. 2010) and L 6µm (the luminosity of the AGN component at rest-frame 6 µm after correction for dust reddening; νL ν ). The uncertainties on â and L 6µm are standard deviations, derived from the Monte Carlo re-sampling of the data according to the photometric uncertainties. Both parameters are well constrained. 11 Since our SED modeling uses a single AGN template, it does not account for the fact that AGN show a range of heated dust emission relative to the bolometric emission of the accretion disk. For instance, assuming the distribution of quasar covering factors found by Roseboom et al. (2013) would introduce an additional uncertainty to the 6 µm luminosities of ≈ ±0.5L 6µm . Our three Type 2 quasars have high â values, which indicates that they are AGN-dominated at 0.1-30 µm. For SDSS J0056+0032 and SDSS J1157+6003 this is in agreement with the Spitzer-IRS spectroscopy of Zakamska et al. (2008), which shows the sources to be AGN-dominated at mid-IR wavelengths (for SDSS J0011+0056 there is no mid-IR spectroscopy available). RESULTS The three Type 2 quasars in this work bear the signatures of heavily obscured, Compton-thick AGN based on multiwavelength diagnostics (see Section 2 of this work; Zakamska et al. 2008; Vignali et al. 2010; Jia et al. 2013). Here we present the results of our analysis, which is aimed at assessing the prevalence of extreme absorption in these systems.
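Before turning to the X-ray constraints, it may help to make the SED decomposition of Section 3.3 concrete. The fit described there is essentially a non-negative linear combination of galaxy and AGN templates matched to broad-band photometry, with Monte Carlo resampling of the photometry providing the uncertainties. The Python sketch below illustrates that kind of procedure; the template fluxes and photometry are invented placeholders, not the Assef et al. (2010) templates or our measurements.

```python
# Minimal sketch of an SED decomposition of the kind described above:
# a non-negative combination of galaxy templates plus an AGN template is
# fitted to broad-band photometry, and uncertainties are estimated by
# Monte Carlo resampling. The template arrays here are placeholders, not
# the Assef et al. (2010) templates.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

def decompose_sed(fluxes, errors, templates):
    """Fit fluxes (n_bands,) as a non-negative sum of templates (n_templates, n_bands)."""
    A = (templates / errors).T          # weight each band by its uncertainty
    b = fluxes / errors
    coeffs, _ = nnls(A, b)
    return coeffs

def agn_fraction(coeffs, templates):
    """Fractional contribution of the AGN (assumed to be the last template)."""
    contrib = coeffs[:, None] * templates
    total = contrib.sum()
    return contrib[-1].sum() / total if total > 0 else 0.0

# Placeholder photometry (e.g., SDSS + UKIDSS + WISE bands) and templates:
# rows = [elliptical, spiral, irregular, AGN], columns = bands.
templates = np.array([[1.0, 0.8, 0.5, 0.2, 0.1],
                      [0.6, 0.7, 0.6, 0.3, 0.2],
                      [0.3, 0.4, 0.4, 0.3, 0.2],
                      [0.2, 0.3, 0.6, 1.2, 2.0]])
fluxes = np.array([0.9, 1.0, 1.1, 1.4, 2.1])
errors = 0.05 * fluxes

best = decompose_sed(fluxes, errors, templates)
a_hat = agn_fraction(best, templates)

# Monte Carlo resampling of the photometry for the uncertainty on a_hat.
samples = [agn_fraction(decompose_sed(rng.normal(fluxes, errors), errors, templates),
                        templates)
           for _ in range(500)]
print(a_hat, np.std(samples))
```

In this toy version the AGN fraction is measured over the fitted bands only; in the actual analysis â is defined over the full 0.1-30 µm range and L 6µm is read off the reddening-corrected AGN component.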
X-rays provide a direct measure of AGN emission that has been subject to circumnuclear absorption. As such, the characterisation of X-ray spectra is necessary to obtain reliable estimates of absorbing column densities (N H ). 12 For SDSS J0011+0056 we detect X-rays over the observed-frame 3-24 keV energy range, and for SDSS J0056+0032 and SDSS J1157+6003 we place upper limits on the 3-24 keV emission (see Table 2). As the quasars are at best weak detections at 3-24 keV, detailed modeling of their X-ray spectra is unfeasible. For SDSS J0011+0056 we characterize the observed-frame 3-24 keV X-ray spectrum using the ratio of hard (8-24 keV) to soft (3-8 keV) emission, which provides a direct absorption constraint (see Section 4.1). For the remaining two quasars we are limited to indirect absorption constraints from the comparison of the observed X-ray emission with the intrinsic X-ray emission implied by infrared measurements (see Section 4.2). Direct (X-ray) Absorption Constraints SDSS J0011+0056 is detected with NuSTAR in the 8-24 keV band, but not in the 3-8 keV band. We measure a 99.7% confidence level lower limit for the NuSTAR X-ray band ratio (i.e., the ratio of 8-24 keV counts to 3-8 keV counts), of > 1.0. In Figure 4 we show the NuSTAR band ratio against redshift for SDSS J0011+0056 and the first 10 sources detected in the NuSTAR extragalactic survey ; the SDSS J0011+0056 band ratio is amongst the most extreme. We compare the band ratio with predictions from a simple absorbed power-law (ZWABS·POW) model and the MYTORUS model (Murphy & Yaqoob 2009), both of which are implemented in XSPEC. MYTORUS is a self-consistent physical model that is valid for the energy range 0.5-500 keV, and for column densities of N H = 10 22 -10 25 cm −2 . It is more suitable than the ZWABS·POW model for column densities of N H 5 × 10 23 cm −2 , where a careful treatment of scattering and reflection is needed (for instance, see Figure 5). In the MYTORUS model, an obscuring torus reprocesses Xrays from a central source, and the resulting X-ray spectrum has both transmitted and scattered components. In the current implementation of MYTORUS, the half-opening angle of the obscuring medium is fixed to 60 • (i.e., a covering factor of 0.5), a value inferred from the obscured AGN fraction of Seyfert galaxies. We note that a larger half-opening angle could be more appropriate in this study of Type 2 quasars, since the obscured AGN fraction is observed to decrease with luminosity (e.g., Ueda et al. 2003;Lusso et al. 2013). We assume a specific MYTORUS model with an intrinsic photon index of Γ = 1.8 (typical value for AGN at observed-frame 3-24 keV; Alexander et al. 2013) and an inclination angle of θ obs = 70 • , referred to as Model A hereafter. Varying θ obs between 65 • and 90 • , where 90 • corresponds to an edge-on view through the equatorial plane of the torus, makes a negligible difference to the MYTORUS band ratio tracks in Figure 4. We avoid using θ obs values close to 60 • , below which the line-of-sight X-ray emission does not intercept the torus and the MYTORUS model therefore describes an unobscured AGN. As shown in Figure 4, the NuSTAR band ratio lower limit for SDSS J0011+0056 corresponds to an absorbing column density of N H 2.5 × 10 23 cm −2 . This implies heavy, but not necessarily Compton-thick, absorption. 
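Mapping an observed band ratio onto a column density, as done above with the MYTORUS and ZWABS·POW tracks, is essentially an interpolation along a pre-computed model grid. The sketch below illustrates the idea; the grid of model band ratios is a made-up placeholder, whereas in practice it would be generated with XSPEC for the adopted model, redshift and photon index.

```python
# Sketch of how an observed band ratio is mapped onto N_H using a grid of
# model predictions. The model grid below is a placeholder; in practice it
# would be computed with XSPEC (e.g., MYTORUS or zwabs*pow at z = 0.409,
# Gamma = 1.8), as in Figure 4.
import numpy as np

# Placeholder grid: log10(N_H / cm^-2) versus predicted 8-24 keV / 3-8 keV ratio.
log_nh_grid = np.array([22.0, 22.5, 23.0, 23.5, 24.0, 24.5])
model_ratio = np.array([0.45, 0.50, 0.65, 1.00, 1.80, 3.20])  # monotonic with N_H

def nh_from_band_ratio(observed_ratio):
    """Interpolate the model grid to estimate log10(N_H) for a band ratio."""
    return np.interp(observed_ratio, model_ratio, log_nh_grid)

# A lower limit on the band ratio translates into a lower limit on N_H.
ratio_lower_limit = 1.0
print("log N_H >", nh_from_band_ratio(ratio_lower_limit))
```

Because the model ratio increases monotonically with column density at these redshifts, a lower limit on the band ratio maps directly onto a lower limit on N_H.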
Since XMM-Newton is more sensitive than NuSTAR at < 8 keV, we also measure an X-ray band ratio for SDSS J0011+0056 using the XMM-Newton 3-8 keV data and NuSTAR 8-24 keV data, which gives a NuSTAR/XMM-Newton band ratio of 1.2 ± 0.6 (68.3% confidence level). One limitation of the measurement is that we are unable to assess whether the X-ray emission of SDSS J0011+0056 has varied significantly in the ∼ 6.5 years between the XMM-Newton and NuSTAR observations; if the XMM-Newton count rate is relatively low, we overestimate the band ratio, and vice versa. In Figure 5, we compare the measured NuSTAR/XMM-Newton band ratio with predictions from the MYTORUS and ZWABS·POW models as a function of column density. We fixed the model redshifts to that of SDSS J0011+0056 (z = 0.409), used a range of intrinsic photon indices corresponding to those observed for unobscured AGN (1.7 < Γ < 2.3; e.g., Mateos et al. 2010; Scott et al. 2011), and used a range of inclination angles in the MYTORUS model (65° < θ obs < 90°). The resulting tracks in Figure 5 suggest that SDSS J0011+0056 is absorbed by N H ≳ 5 × 10 23 cm −2 , which is consistent with the NuSTAR band ratio analysis (Figure 4). Assuming Model A (Γ = 1.8 and θ obs = 70°), the observed NuSTAR/XMM-Newton band ratio for SDSS J0011+0056 implies a column density of N H = (8.1 +2.9 −3.4 ) × 10 23 cm −2 (i.e. heavy, but not clearly Compton-thick, absorption is required to produce the observed 3-24 keV X-ray spectrum). This result is consistent with column density estimates from indirect methods, as shown in Section 4.2. For comparison, the highest column densities directly constrained by Vignali et al. (2006, 2010) in their < 10 keV analysis of SDSS-selected Type 2 quasars are N H ≈ 3 × 10 23 cm −2 .
Fig. 3 (caption, partial). … Assef et al. (2010). The photometric data (black data points) and best-fitting parameters are given in Table 3. The IRAS 60 µm flux for SDSS J1157+6003 (green data point) was not used in the SED decomposition.
Fig. 4 — NuSTAR X-ray band ratio (8-24 keV to 3-8 keV counts ratio) against redshift for SDSS J0011+0056 (black circle), and the NuSTAR-detected sources in Alexander et al. (2013) (grey squares). The dashed and dotted lines show band ratio predictions from MYTORUS and simple ZWABS·POW models, respectively, for a variety of column densities, and assuming a spectral slope of Γ = 1.8. Varying θ obs makes a negligible difference to the MYTORUS tracks. Based on the 99.7% lower limit for the NuSTAR band ratio, SDSS J0011+0056 is consistent with being heavily obscured.
Fig. 5 — NuSTAR/XMM-Newton X-ray band ratio (NuSTAR 8-24 keV to XMM-Newton 3-8 keV count-rate ratio) against line-of-sight X-ray absorbing column density (N H ). The grey shaded area shows the 68.3% confidence level region for the observed band ratio of SDSS J0011+0056. The hashed regions show the range of band ratios predicted with MYTORUS (blue) and a simple ZWABS·POW model (red) for z = 0.409, and for a range of intrinsic photon indices (1.7 < Γ < 2.3). The MYTORUS region was computed for a range of inclination angles (65° < θ obs < 90°). According to these models, SDSS J0011+0056 is absorbed by N H ≳ 5 × 10 23 cm −2 . We also show band ratio predictions for a specific MYTORUS model with Γ = 1.8 (Model A).
The above N H constraint for SDSS J0011+0056 must be treated with a degree of caution, since it depends on the assumed X-ray spectral model. Here we assess the impact on our result of two spectral complexities, both of which are important in the case of Type 2 quasars. First, a soft 'scattered' power law component is commonly observed for obscured
AGN, which may be either nuclear emission scattered by hot gas (e.g., Turner et al. 1997), or 'leakage' of nuclear emission due to partial covering (e.g., Vignali et al. 1998; Corral et al. 2011). Adding a scattered component which is 2% of the primary transmitted power law (a typical X-ray scattering fraction for Type 2 Seyferts; e.g., Turner et al. 1997) to Model A, we obtain a consistent result: N H > 4.9 × 10 23 cm −2 (68.3% confidence level lower limit). Second, the absorbing medium may have a complex geometry (e.g., a clumpy torus) that requires the equatorial and line-of-sight column densities of the MYTORUS model (N H,eq and N H , respectively) to be treated independently. Decoupling these two parameters in Model A and setting N H,eq to the maximum possible value of 10 25 cm −2 yields a consistent result: N H = (7.7 +2.8 −3.4 ) × 10 23 cm −2 . Last, we emphasize that although MYTORUS is a relatively complex model, the N H constraints do not differ significantly from those using a simple ZWABS·POW model in the Compton-thin regime (see Figure 5). We conclude that the inferred N H for SDSS J0011+0056 does not change significantly with the assumed spectral model. Indirect Absorption Constraints The X-ray emission in heavily obscured AGN is subject to significant absorption along the line of sight. The mid-IR emission, on the other hand, has been reprocessed by the dust obscuring the AGN and is less sensitive to extinction. The mid-IR luminosity therefore provides an estimate of the intrinsic AGN power. As such, the presence of absorption in an AGN can be inferred from the observed X-ray to mid-IR luminosity ratio (e.g., Lutz et al. 2004; Alexander et al. 2008; LaMassa et al. 2009; Goulding et al. 2011; LaMassa et al. 2011). We note that the mid-IR emission is also significantly absorbed for ≈ 50% of Compton-thick AGN (e.g., Bauer et al. 2010; Goulding et al. 2012). Indeed, SDSS J0056+0032 has significant Si-absorption at 9.7 µm, in contrast to SDSS J1157+6003 (see Section 2). To account for this, we have corrected our mid-IR luminosities for dust reddening (see Section 3.3). In Figure 6 we compare the rest-frame X-ray luminosities (L X ) of our three Type 2 quasars with the rest-frame 6 µm luminosities (L 6µm ), exploring both the low energy (2-10 keV) and high energy (10-40 keV) X-ray regimes. For SDSS J0011+0056, L 2−10keV was obtained through photometry in the rest-frame 2-10 keV band using XMM-Newton data (see Section 3.2). For SDSS J0056+0032 and SDSS J1157+6003, L 2−10keV was obtained through photometry in the observed-frame 0.5-8 keV band using Chandra data (see Section 3.2), and an extrapolation to the rest-frame 2-10 keV band assuming a power-law model with Γ = 1.8. The L 10−40keV values were obtained through a photometric analysis in the rest-frame 10-40 keV band using NuSTAR data (see Section 3.1). The 6 µm luminosities are from SED fitting (Section 3.3) and relate specifically to the emission from the AGN. In the rest-frame 2-10 keV band, the Type 2 quasars fall below the intrinsic X-ray-mid-IR luminosity relation found for AGN in the local universe (Lutz et al. 2004); see Figure 6a. For comparison, we also show the non-beamed sources detected in the NuSTAR extragalactic survey, which lie within the scatter of the Lutz et al. (2004) relation.
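For reference, the luminosities plotted in Figure 6 follow from the observed fluxes via the luminosity distance for the adopted cosmology, with a K-correction for the assumed power-law slope. The sketch below shows that conversion and a comparison against a local X-ray/mid-IR relation; the relation coefficients and the 6 µm luminosity used in the example are placeholders, not the Lutz et al. (2004) calibration or our Table 3 values, whereas the input flux and redshift are those of SDSS J0011+0056.

```python
# Sketch of the flux-to-luminosity conversion behind Figure 6, using the
# cosmology adopted in this work. The example 6 micron luminosity and the
# coefficients of the X-ray/mid-IR relation are placeholders, not the
# Lutz et al. (2004) fit or the measured Table 3 values.
import numpy as np
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=71.0, Om0=0.27)   # (Omega_M, Omega_L, h) = (0.27, 0.73, 0.71)

def rest_frame_luminosity(flux_cgs, z, gamma=1.8):
    """Rest-frame band luminosity (erg/s) from an observed band flux
    (erg/cm^2/s), K-corrected assuming a power law with photon index gamma."""
    d_l = cosmo.luminosity_distance(z).to(u.cm).value
    k_corr = (1.0 + z) ** (gamma - 2.0)
    return 4.0 * np.pi * d_l**2 * flux_cgs * k_corr

def suppression_factor(l_x_observed, l_6um, slope=1.0, norm=-0.3):
    """Ratio of the 'intrinsic' L_X implied by a placeholder linear relation,
    log L_X = slope * log L_6um + norm, to the observed L_X."""
    l_x_intrinsic = 10.0 ** (slope * np.log10(l_6um) + norm)
    return l_x_intrinsic / l_x_observed

z = 0.409
f_8_24 = 1.32e-13        # observed 8-24 keV flux of SDSS J0011+0056 (erg/cm^2/s)
l_6um = 2.5e44           # placeholder rest-frame 6 micron AGN luminosity (erg/s)
l_x = rest_frame_luminosity(f_8_24, z)
print(f"L_X ~ {l_x:.2e} erg/s, suppression ~ {suppression_factor(l_x, l_6um):.1f}x")
```

The inferred suppression factor is then compared with the attenuation expected for a given N_H (the dash-dotted and dashed tracks in Figure 6) to obtain the indirect column density constraints discussed below.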
The 2-10 keV luminosity suppression of the three Type 2 quasars is expected given our selection and has previously been demonstrated for SDSS J0056+0032 and SDSS J1157+6003 (Vignali et al. 2006, 2010). Assuming the suppression of the X-ray emission is due to absorption, as opposed to intrinsic X-ray weakness, we estimate the column densities of these systems by comparing with the X-ray to mid-IR luminosity ratios for AGN absorbed by N H = 10 24 cm −2 and N H = 5 × 10 24 cm −2 (dash-dotted and dashed lines in Figure 6a, respectively). On the basis of this analysis, the 2-10 keV luminosities of SDSS J0056+0032 and SDSS J1157+6003 are consistent with being absorbed by a factor of 300, and therefore lie well within the Compton-thick region with N H ≳ 5 × 10 24 cm −2 . The X-ray emission from SDSS J0011+0056, on the other hand, is suppressed by a factor of ≈ 7, but is still consistent with being Compton-thick or near Compton-thick (N H ≈ 10 24 cm −2 ). Since our 2-10 keV luminosities were calculated assuming a Γ = 1.8 power-law, which is probably not consistent with heavy absorption at z ∼ 0.5, we repeated the flux calculations in Section 3.2 assuming Γ = 0.6 (the spectral slope of SDSS J0011+0056 as measured by Jia et al. (2013); see Section 2). This results in L 2−10keV values which are higher by a factor of ≈ 1.9; not enough to significantly change the conclusions drawn from Figure 6a. In the rest-frame 10-40 keV band, the X-ray emission is only strongly suppressed for column densities of N H ≳ 5 × 10 24 cm −2 , and therefore NuSTAR observes the intrinsic X-ray emission for all but the most heavily obscured AGN; see Figure 6b. For comparison, Matsuta et al. (2012) studied Swift/BAT-detected AGN and found that for 14-195 keV, only ≈ 60% of Compton-thick objects have significant X-ray suppression with respect to the intrinsic X-ray to mid-IR luminosity ratio. The results in Figure 6b suggest that the X-ray emission from SDSS J0011+0056 is not significantly suppressed at 10-40 keV, and is absorbed by N H ≲ 10 24 cm −2 . This is consistent with the X-ray band ratio analysis in Section 4.1. SDSS J0056+0032 is consistent with being Compton-thick, with N H ≳ 10 24 cm −2 . SDSS J1157+6003 is the strongest candidate for being Compton-thick based on this analysis. Its 10-40 keV luminosity is consistent with being absorbed by a factor of 10, despite the high X-ray energies being probed, which again suggests an extreme column density of N H ≳ 5 × 10 24 cm −2 . Assuming Γ = 0.6, rather than Γ = 1.8, for the NuSTAR count rate to flux conversion (Section 3.1.2) results in L 10−40keV values which are higher by a factor of ≈ 1.4; again, not enough to significantly change the conclusions drawn from Figure 6b. As an independent test, we repeated our indirect analyses using [O III] luminosity as a measure of intrinsic AGN power (i.e., using L X /L [OIII] ). This yielded very similar results; NuSTAR observes the intrinsic X-ray emission of SDSS J0011+0056, while SDSS J0056+0032 and SDSS J1157+6003 are consistent with being heavily Compton-thick (N H ≳ 5 × 10 24 cm −2 ). However, since our sample was originally selected on the basis of high [O III] luminosity (Zakamska et al. 2003; Reyes et al. 2008), we consider the L X /L 6µm results to be more reliable. Nevertheless, the L X /L 6µm ratio alone is not a robust indicator of Compton-thick absorption, even if the 6 µm emission accurately reflects the intrinsic power of the AGN. First, some quasars can be intrinsically X-ray weak (e.g., Gallagher et al. 2001; Wu et al. 2011; Luo et al.
2013; Teng et al. 2013, ApJ, submitted). Second, inferred column densities depend on the assumed X-ray spectral model (e.g., Yaqoob & Murphy 2011; Georgantopoulos et al. 2011a). For instance, adding an additional soft scattered component, with a scattering fraction of 2%, to the MYTORUS model predicts an L 2−10keV /L 6µm ratio for N H = 5 × 10 24 cm −2 which is a factor of three higher than that shown in Figure 6b. However, this is not enough to change our broad conclusions regarding the column densities of SDSS J0056+0032 and SDSS J1157+6003. Ultimately, deeper X-ray observations, with simultaneous coverage at low and high energies, are required to directly constrain N H and provide more robust evidence for or against the presence of Compton-thick absorption in these Type 2 quasars. SUMMARY AND FUTURE WORK We have presented the first sensitive high energy (> 10 keV) analysis of optically selected Type 2 quasars. The sample consists of three objects that show evidence for extreme obscuration on the basis of their low X-ray to [O III] luminosity ratios and other multiwavelength diagnostics (see Section 2). Our main results are as follows.
Fig. 6 — Rest-frame X-ray luminosity against rest-frame 6 µm luminosity for: (a) 2-10 keV luminosities calculated using XMM-Newton or Chandra data; and (b) 10-40 keV luminosities calculated using NuSTAR data. The X-ray luminosities are not corrected for absorption. SDSS J0011+0056, SDSS J0056+0032 and SDSS J1157+6003 are shown as white, grey and black circles, respectively. We compare with sources detected as part of the NuSTAR extragalactic survey (open squares; Alexander et al. 2013). We also compare with an intrinsic relation for 2-10 keV, calibrated using local AGN (dotted line, with a shaded region indicating the scatter; Lutz et al. 2004). This relation has been extrapolated to the 10-40 keV band assuming Γ = 1.8, and to relations for AGN absorbed by N H = 10 24 cm −2 (dash-dotted line) and N H = 5 × 10 24 cm −2 (dashed line) assuming a MYTORUS model with Γ = 1.8 and θ obs = 70°. If we assume that low X-ray luminosities are due to absorption, sources that lie below the N H = 10 24 cm −2 tracks may be Compton-thick.
• For SDSS J0011+0056, we characterize the 3-24 keV spectrum using the X-ray band ratio and find evidence for near Compton-thick absorption with N H ≳ 5 × 10 23 cm −2 ; see Section 4.1. This is consistent with the column densities inferred from the 2-10 keV to mid-IR ratio, the 10-40 keV to mid-IR ratio, and the X-ray to [O III] ratios; see Section 4.2. • For SDSS J0056+0032 and SDSS J1157+6003, we find evidence for a significant suppression of the rest-frame 10-40 keV luminosity with respect to the mid-IR luminosity. If due to absorption, this result implies that these Type 2 quasars are extreme, Compton-thick systems with N H ≳ 10 24 cm −2 ; see Section 4.2. The characterisation of distant heavily obscured AGN is clearly an extremely challenging pursuit. Nevertheless, as we have demonstrated, the sensitive high energy observations of NuSTAR provide a significant improvement compared to Chandra or XMM-Newton observations alone; for quasars at z ∼ 0.5, high column densities of N H ≳ 5 × 10 23 cm −2 can now be directly constrained. Based on the results obtained in this exploratory study, we are now extending the analysis of optically selected Type 2 quasars to a larger sample which is currently being observed by NuSTAR. Furthermore, NuSTAR is undertaking deep surveys in the ECDFS (Mullaney et al., in prep.) and COSMOS (Civano et al., in prep.) fields, along with a large-area serendipitous survey, that are likely to uncover a number of heavily obscured quasars.
These upcoming studies will provide a leap forward in our understanding of the column density distribution of distant luminous AGN. This research made use of the NuSTAR Data Analysis Software (NuSTARDAS) jointly developed by the ASI Science Data Center (ASDC, Italy) and the California Institute of Technology (USA).
2014-02-11T21:15:55.000Z
2014-02-11T00:00:00.000
{ "year": 2014, "sha1": "4b6306f186dbf403cc9c8236c870fb8bcdc0ab78", "oa_license": null, "oa_url": "https://backend.orbit.dtu.dk/ws/files/90445581/0004_637X_785_1_17.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "c2e23f71801ea337e12e67f6c926cd88ee555df2", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
244464593
pes2o/s2orc
v3-fos-license
Undifferentiated laryngeal carcinoma with hyaline bodies in a cat Background Primary laryngeal neoplasms are rare in cats, with lymphoma and squamous cell carcinoma being the most commonly diagnosed tumour types. These tumours are usually highly aggressive, difficult to treat, and have a poor prognosis. Here an undifferentiated laryngeal carcinoma with hyaline bodies in a cat is reported. Case presentation A 13-year-old cat was presented for progressive respiratory signs. Diagnostic procedures revealed a partially obstructive laryngeal mass. Cytology was compatible with a poorly differentiated malignant tumour, with neoplastic cells frequently containing large intracytoplasmic hyaline bodies. After 1 month the patient was euthanised due to a worsening clinical condition and submitted for post-mortem examination, which confirmed the presence of two laryngeal masses. Histopathology confirmed the presence of an undifferentiated neoplasm with marked features of malignancy. Strong immunolabelling for pancytokeratin led to a diagnosis of undifferentiated carcinoma, however, histochemical and immunohistochemical investigations could not elucidate the origin of the large intracytoplasmic hyaline bodies observed in tumour cells, which appeared as non-membrane bound deposits of electron-dense material on transmission electron microscopy. Conclusion This is the first report of primary undifferentiated laryngeal carcinoma in a cat. Our case confirms the clinical features and the short survival that have been reported in other studies describing feline laryngeal tumours. Moreover, for the first time in feline literature, we describe the presence of intracytoplasmic hyaline bodies in neoplastic cells that were compatible with the so-called hyaline granules reported in different human cancers and also in the dog. Background Laryngeal tumours are rare in companion animals. In a 10-year survey conducted on biopsy and necropsy specimens, laryngeal tumours accounted for 0.2% of canine cases and 0.14% of feline cases, respectively [1]. The most commonly diagnosed laryngeal neoplasms in cats are lymphoma (which represents up to 50% of feline primary laryngeal tumours) and squamous cell carcinoma, with adenocarcinoma and undifferentiated round cell tumours being less represented [2][3][4][5][6][7]. Affected cats are usually presented with severe respiratory signs such as dyspnoea, stridor, and coughing. Regardless of the tumour type, primary laryngeal neoplasms are usually highly aggressive and poorly responsive to therapy. Treatment options for laryngeal lymphoma include chemotherapy, radiation therapy and tracheostomy, or combinations of these, while carcinomas have been treated with permanent tracheostomy or palliative treatment such as prednisolone or COX2 inhibitors [2-5, 7, 8]. In most studies, however, cats are usually euthanised at diagnosis due to the severity of symptoms, the invasiveness of some treatment Torrigiani et al. Acta Veterinaria Scandinavica (2021) 63:45 procedures (i.e., tracheotomy, permanent tracheostomy), and the poor prognosis [2][3][4]. Reported survival times for feline laryngeal neoplasm are usually low and range from a median of 1 day in untreated patients to medians of 134.5-150 days in patients with lymphoma treated with multimodal therapy [2][3][4]. Here we describe the first reported case of primary undifferentiated carcinoma of the larynx in a cat and characterize intracytoplasmic hyaline bodies observed within tumour cells. 
Case presentation A 13-year-old spayed female domestic shorthair cat was presented to the Veterinary Teaching Hospital of the University of Padua with a recent history of weight loss, progressive dyspnoea, respiratory crises and multiple episodes of regurgitation in the previous week. According to the owners, the cat also experienced difficulty in eating and drinking. On physical examination, the patient had decreased muscular masses with a reduced body condition score (BCS 4/9); mucous membranes appeared slightly pale, with a capillary refill time of 2 s. The patient exhibited marked tachypnoea (65 breaths/min), with inspiratory dyspnoea and bilateral reinforced respiratory sounds with stridors and wheezes on thoracic auscultation, and slight bradycardia (140 bpm). Diagnostic workup included haematology, biochemistry and urinalysis, which were within normal limits, and serum protein electrophoresis, which revealed a slight decrease in the albumin fraction and an increase in the α2- and β1-globulin fractions. Head, neck and thoracic X-rays performed under sedation revealed a 2.6 × 2.2 cm soft tissue mass that displaced the larynx dorsally. Moreover, a mild pulmonary interstitial pattern was observed bilaterally. The laryngeal mass was inspected under sedation by oral cavity examination and sampled for cytology through a trans-oral fine needle aspiration. Cytology revealed a highly pleomorphic cell population. Cells were polygonal or elongated in shape, organized in small groups or individually, with abundant, lightly basophilic cytoplasm that often showed one large, intensely eosinophilic, perinuclear body. Nuclei were round to irregular in shape with granular chromatin and prominent nucleoli. Anisocytosis and anisokaryosis were marked. Additionally, macrokaryosis and multinucleated cells as well as cellular cannibalism were frequently observed (Fig. 1). Based on the cytological findings, a diagnosis of a malignant, poorly differentiated neoplasm was made. Based on cytomorphology, the differential diagnoses included a poorly differentiated carcinoma and a poorly differentiated soft tissue sarcoma. Following the cytology results, a contrast-enhanced CT scan of the head, thorax and abdomen was performed for staging purposes and for surgical planning. CT findings showed an ill-defined, moderately contrast enhancing (pre-contrast HU: 36, post-contrast HU: 96) mass surrounding and largely invading the larynx. Both retropharyngeal lymph nodes appeared markedly enlarged and necrotic (Fig. 2). No further tomographic changes in distant organs were detected. The extension of the tumour and its location were not amenable to conservative surgery with complete surgical margins.
Fig. 1 Trans-oral fine needle aspiration cytology: A groups of highly pleomorphic neoplastic cells with marked anisokaryosis and multinucleation, often containing one large, deeply eosinophilic, perinuclear body (asterisk). B Cellular cannibalism: neoplastic cells engulfing cell types of different lineages (i.e. erythrocytes and neutrophils) (arrows). The large perinuclear bodies were consistently observed in cytological samples (asterisk). May-Grünwald-Giemsa stain, ×400.
Due to the poor prognosis, the owners declined further investigations and opted for symptomatic treatment with meloxicam 0.6 mg/kg daily. After a partial remission of clinical signs over a period of 10 days, the patient showed progressive deterioration of clinical condition and severe respiratory signs and was euthanized 1 month after first clinical presentation.
Euthanasia was performed under general anaesthesia using a combination of a curariform-like agent, a narcotic, and a local anaesthetic. On post-mortem examination, two distinct, bilateral laryngeal masses were found. Both tumour masses arose from the mucosa of the cuneiform process of the arytenoid cartilage, and protruded in the laryngeal inlet dramatically reducing the lumen. The larger mass measured 2.8 × 2 cm and extended caudally involving the vocal cords, whilst the smaller was located more cranially involving the aryepiglottic fold. Both masses were well demarcated, exophytic, with firm texture, smooth, with white-tan colour on cut surface (Fig. 3). Mild hyperaemia of the surrounding structures was observed and no other alterations were found on necropsy. Both laryngeal masses were sampled, formalin fixed, routinely processed and haematoxylin and eosin stained for histological examination. All sections were characterised by an infiltrative neoplastic population that expanded the laryngeal submucosa and extended deeper involving the arytenoid cartilage. Neoplastic cells were observed in poorly demarcated, partially encapsulated nodules, in which they were arranged in nests and cords supported by a moderate to abundant fibrovascular stroma. Cells were moderately preserved, oval to elongated in shape, with mostly distinct borders and a variable amount of pale eosinophilic cytoplasm, with scattered basophilic granules and an occasional large (up to 20 μm), brightly eosinophilic, glassy round to oval body, mainly located in the perinuclear area. Nuclei were round to oval, occasionally indented, with coarsely stippled chromatin and one prominent nucleolus. Histological criteria of malignancy, including anisocytosis, anisokaryosis, and macrokaryosis, were marked. Multinucleated cells (up to 6 nuclei per cell), and cellular cannibalism were frequently observed, while mitotic figures were 4 per 10 high power fields (HPFs; diameter of the field of view = 0.55 mm; HPF = 237 mm 2 ; 40× magnification, Nikon Eclipse Ci-L, Nikon Instruments, Japan). Lymphocytic infiltration of the tumour mass and peripheral aggregates of lymphocytes with rare histiocytes were also present (Fig. 4). Based on histology, a diagnosis of undifferentiated malignant laryngeal tumour was made and any possible cellular origin of the tumour between epithelial, mesenchymal, or neuroendocrine was included. In order to further investigate the tumour phenotype, an immunohistochemistry (IHC) panel including pancytokeratin (PanCK), cytokeratin 5/6 (CK 5/6), and cytokeratin 8/18 (CK 8/18) as epithelial markers, p63 and calponin as myoepithelial markers, vimentin as mesenchymal marker, neurofilaments and chromogranin as neuroendocrine markers was performed, while CD3 and CD20 were included to better characterise the lymphocytic infiltrate. All antibodies included in the IHC panel were previously tested in cats (Table 1). Virtually all neoplastic cells showed a strong and diffuse membranous and cytoplasmic immunolabelling for PanCK. In addition, the same cells also showed a diffuse cytoplasmic positivity that ranged from mild to intense for CK 5/6, while mild cytoplasmic positivity for CK 8/18 was observed in scattered neoplastic cells. Occasionally, tumour cells showed a nuclear immunolabelling for p63. Interestingly, aggregates of p63-positive cells were observed specifically adjacent to, and palisading on connective stromal septa. Calponin immunoreactivity was only observed in stromal cells and on vessel walls. 
Vimentin expression was limited to tumour stromal cells, infiltrating inflammatory cells (lymphocytes and histiocytes) and blood vessels. Immunolabelling for neurofilaments was only observed in the peripheral nerves within tumour nodules, whilst chromogranin was negative. Lymphocytic aggregates at the periphery of the tumour had an equal distribution of CD3+ and CD20+ cells, whilst tumour infiltrating lymphocytes were mainly CD3+ (Fig. 4). In the light of the IHC results, the tumour was deemed an undifferentiated laryngeal carcinoma. In order to better clarify the content of the large cytoplasmic vacuoles that were consistently observed in both cytological and histological samples, a periodic acid-Schiff reaction (PAS) was performed. However, both on cytological and histological samples, cytoplasmic vacuoles were PAS-negative. Neoplastic tissue was then excised from paraffin and processed for transmission electron microscopy (TEM) examination. Ultrastructural results confirmed the presence of pleomorphic neoplastic cells with cytoplasmic vacuolations. Moreover, the presence of cytoplasmic, non-membrane bound, electron-dense deposits was also observed (Fig. 5). TEM results therefore confirmed the diagnosis of undifferentiated laryngeal carcinoma, with the presence of cytoplasmic accumulations of electron-dense material that was compatible with the eosinophilic bodies observed both on cytology and histopathology. Discussion and conclusions In this study, we describe an undifferentiated laryngeal carcinoma in a cat characterized by eosinophilic perinuclear bodies compatible with hyaline granules. Laryngeal tumours are extremely rare in cats. Due to their low prevalence, studies describing feline laryngeal tumours are limited to case reports or case series in which neoplasms are often grouped with other laryngeal diseases [2][3][4][5][6]. In studies describing laryngeal disease in cats, the incidence of primary laryngeal neoplasms ranges from 28.5 to 34.7% [2,3]. In a study describing masses located in the larynx and in the trachea, neoplastic diseases accounted for 77.8% [4]. According to the literature, lymphoma is the most commonly diagnosed primary neoplasm of the larynx in cats, with squamous cell carcinoma being the second most common tumour in this location [2][3][4][5][6][7]. Other tumour types such as adenocarcinoma and undifferentiated round cell tumour have only been reported in very few cases [2][3][4]. In our case, cytological findings were strongly suggestive of malignant neoplasia, but, due to the marked cellular pleomorphism, the scarce differentiation, and the unexpected presence of the large eosinophilic perinuclear bodies a definitive diagnosis could not be achieved. Histopathology confirmed the presence of a malignant highly pleomorphic tumour with large eosinophilic perinuclear bodies. Immunohistochemical examination confirmed the diagnosis of carcinoma due to a diffuse and strong expression of PanCK by neoplastic cells, and the lack of positivity for mesenchymal and neuroendocrine markers. Further extension of IHC panel showed diffuse, mild to intense positivity for CK5/6 (basal epithelial and myoepithelial marker), whilst CK8/18 (luminal epithelial marker) was only expressed in scattered neoplastic cells. This allowed for further inference about the nature of the tumour that was deemed an undifferentiated carcinoma with a basal phenotype, confirmed by the p63 positivity of neoplastic cells resting on collagenous septa [9]. 
The striking microscopic feature of this tumour, that was consistently observed in both cytological and histological samples, was the presence of variably sized, well demarcated, intensely eosinophilic, glassy, round, PASnegative cytoplasmic bodies more often found in the perinuclear area. Similar structures have been described in human pathology in several tumour types, as well as non-neoplastic diseases, and are referred to as hyaline granules (HGs) or hyaline bodies [10][11][12][13][14]. Regardless of the tumour of origin, HGs seem to have a heterogeneous origin. Histological, immunohistochemical, and ultrastructural investigations conducted on different tumour types have revealed HGs as accumulations of granular or amorphous secretory material within dilated rough endoplasmic reticulum, Golgi apparatus, or non-membrane bound structures, swollen mitochondria, accumulations of intermediate filaments or of components of the basement membrane [10,12]. In humans, HGs have been reported in 8 to 50% of cases of hepatocellular carcinoma (HCC) and their presence is considered of great diagnostic significance for primary and metastatic lesions, while their prognostic value is still debated. More specifically, a study describing these entities in cytological samples of human HCC found that HGs were associated with the granular cellular type, but failed to demonstrate an association between the presence of HGs and tumour grade [10], whilst other studies reported the association of HGs with more differentiated HCCs [11]. On the contrary, in a recent report, the presence of HGs on histology was associated with shorter overall survival in patients with HCC [12]. HGs have also been extensively described in renal cell carcinomas (RCCs) and oncocytomas in human patients [10,13,14]. HGs have been described in up to 17% of RCCs, can greatly vary in size (reported range 1-30 μm), can be positive or negative to PAS staining, and are more commonly composed of basement membrane material that is secreted and accumulated within rough endoplasmic reticulum or membrane-bound organelles [10,13,14]. More specifically, HGs are more commonly associated with granular and mixed granularclear cell RCCs, while they are considered an exclusion criterion for the diagnosis of chromophobe type of RCC and oncocytoma of the kidney [10]. As for HCC, HGs are found in both primary and metastatic RCC lesions [10]. In our study the large eosinophilic cytoplasmic bodies were observed in both cytological and histological samples. Their microscopic features, as well as their ultrastructural appearance, were compatible with those observed for HGs in human cancer. Unfortunately, immunohistochemical analysis failed to further characterise these cytoplasmic bodies, which were negative for CK8/18. In fact, a subset of HGs in HCC can have the ultrastructure of Mallory bodies and are described as accumulations of intermediate filaments (as well as sequestrosome 1/p62 and ubiquitin) consistently positive for CK8 and CK18 [10][11][12]. Moreover, hyaline bodies in our case were diffusely PAS-negative on both cytological and histological samples. HGs found in RCC are usually PAS-positive but a subset of HGs in renal tumours, as well as in other neoplasms, can be PAS-negative [10,13,14]. In veterinary medicine, hyaline bodies have been reported in a case of HCC in a dog [15]. In their case, Masserdotti et al. 
[15] described these entities as round to polygonal, glassy, eosinophilic intracytoplasmic bodies in cytological and histological specimens. As in our case, a wide IHC panel failed to identify the content of these bodies which were focally PAS positive. Ultrastructural analysis identified these deposits within the rough endoplasmic reticulum or its remnants, therefore strengthening the hypothesis of these bodies being accumulations of proteinaceous material. Based on microscopic and ultrastructural appearance we can also infer that the eosinophilic bodies found in our case are composed of deposits of proteinaceous material possibly aberrantly produced by neoplastic cells. To the best of our knowledge this is the first report of undifferentiated laryngeal carcinoma in a cat. Our case confirms the clinical features and the negative prognosis of feline laryngeal tumours. Moreover, for the first time in feline tumours, we identified structures compatible with HGs on both cytology and histology and we described their microscopic, immunohistochemical, and ultrastructural features. Future investigations are warranted to evaluate the presence of HG-like structures in feline tumours and their diagnostic and prognostic value.
2021-11-22T14:31:27.077Z
2021-11-22T00:00:00.000
{ "year": 2021, "sha1": "7bfcdb508c8441e991382654fd857255882aea97", "oa_license": "CCBY", "oa_url": "https://actavetscand.biomedcentral.com/track/pdf/10.1186/s13028-021-00613-y", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7bfcdb508c8441e991382654fd857255882aea97", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
253031805
pes2o/s2orc
v3-fos-license
Perfluorinated pinacol promotes efficient amidination of 2-aminophenylboronic acid This work reports the use of halogenated alcohols in catalyzing a unique amidination reaction using 2-aminophenylboronic acid. Trials using acetonitrile as the reactant nitrile showed that the amidination efficiency increased from 33% with salicylic acid, to 78% with 2,2,2-trifluoroethanol, and finally to quantitative yields with perfluorinated pinacol. This protecting group proved to be highly efficient for amidination of several different nitrile groups with only mild heating. Introduction Amidines are found in a number of biologically active molecules, but this molecular structure is underused when compared to the structurally analogous amide bonds. This can best be attributed to the lack of mild but effective ways to install this functionality. Many amidination methods have been developed, but they often require specific precursors, limiting the utility of these methods. A review by Granik provides a summary of traditional methodologies [1], many of which invoke activating the nitrile before the introduction of the amine. Several groups have done this using the Pinner reaction, which works by converting the nitrile into a protonated alkyl iminoester intermediate that is more vulnerable to nucleophilic amines [1,2]. These iminoesters have been produced using simple alcohols reacting with gaseous HCl or HBr, methyl sulfofluoridate, boron trifluoride, or methanol at high pressures [3]. An example of this can be seen in Scheme 1. Scheme 1. Typical Pinner reaction showing the progression of nitriles to amidines. This two-step synthesis has been used widely since the 1960s and is robust enough to accept both primary and secondary amines. The main problem with the Pinner system and its immediate derivatives is that the reagents used are quite harsh, potentially causing secondary side reactions. Significant research has gone into developing new methods that either minimise the use of the harsh reagents or eliminate their use altogether by employing powerful catalysts. Another common method is the use of strong Lewis acids as electron-withdrawing groups to catalyse amidine synthesis. In this case, Lewis acids like AlCl3 and FeCl3 act as electron-withdrawing groups when interacting with the nitrile nitrogen; the amine then readily attacks the nitrile in a simple one-step reaction. However, this reaction can require temperatures in excess of 150 °C and, even then, it is sometimes inefficient. Prior to our work, only one group had used organoboron compounds for the synthesis of amidines; this work was reported across two articles in 1987 and 1989 [4,5]. In their first publication, Dorokhov et al. described the synthesis of N-(5-tetrazolyl)amidines from 5-aminotetrazole through a borane-promoted amidination reaction (Scheme 2) [4]. This reaction used an organoborane rather than the Pinner conditions due to the low basicity of the compound and the lower potential for side products. Scheme 2. Amidination by activation using boranes.
In this two-step reaction, the organoborane activates the nitrile through its Lewis acidity and coordinates this activated nitrile with the 5-aminotriazole after extrusion of a propyl group.The organoborane was cleaved by HCl-promoted solvolysis with butanol, and after column chromatography pure product was isolated.Common nitriles were tried including p-toluenenitrile, o-toluenenitrile, phenylnitrile and acetonitrile achieving 67-85% yields.Their next report was similar, this time toward synthesis of N- (1,2,4-triazol-5-yl)amidines. 5 For optimum reaction efficiency, the final borane cleavage should be conducted in a sealed ampule which increased the cleavage from 42 to 92%. 5 This method was able to produce yields of between 54-92% for the same nitriles used in the previous study.Finally, our group has previously investigated the use of boronate protecting groups to promote amidination of 2-aminophenylboronic acid (2-APB). 6An example of this chemistry can be seen below in Scheme 3. Scheme 3. Amidination reaction promoted by salicylic acid protecting group. We reported that boronate esters can facilitate amidination of proximal amines under mild conditions initially discovered by reaction with MeCN as solvent.This reaction was similar to an unexpected amidation in this system involving EtOAc as solvent. 7The presence of the salicylate ester promotes B-N coordination that both enables the reaction and allows crystal formation in many of the products even on 20 mg scale reactions.The chemistry was best suited to the use of aliphatic nitriles, furthermore, reactive functionalities such as bromides can be tolerated.One significant limitation of this method was the inability to readily remove the salicylate ester.Herein, we expand on our prior research to further improve both the efficiency and utility of this amidination reaction. Results and Discussion Previosuly, we demonstrated that the amidination efficiency of 2-APB was primarily altered by the electron withdrawing effect of the bound substituent on the boronate ester.To advance this idea, we attempted to synthesise 1 using halogenated alcohols to increase the Lewis acidity of the resultant boronate ester.This reaction and the tested alcohols with their relative conversion efficiencies can be seen in Table 1.N.D. = not detected Among trifluoro-, trichloro-and tribromoethanol, only TFE showed significant conversion to amidine when used in excess.There are two possible explanations for this change in reaction efficiency.The first is electronic, F (3.98 χp) has a greater Pauling electronegativity, i.e., greater electron withdrawing potential, than Cl (3.16 χp) and Br (2.96 χp) which would decrease the electropositivity of the boron moving from TCE to TBE thereby decreasing the reaction efficiency. 8The second difference is steric, F is smallest halogen with a radius of only 64 pm and it possible that the larger Cl (99 pm) and Br (114 pm) increased steric hindrance limiting the reaction. 9Importantly, it was also possible to readily convert the TFE ester back to the free boronic acid after amidination by simple hydrolysis. 
With the results clearly favoring fluorine-substituted alcohols, additional more complex fluorinated alcohols were tested.These results clearly depict perfluoropinacol [F12-(OH)2] being the most efficient protecting group for promoting amidination.While hexafluoroisopropanol [F6-OH] did produce product, it was limited whereas perfluoro-t-butanol [F9-OH] failed to produce any product.Both F6-OH and F9-OH had low boiling points of just 59 and 45 °C, respectively, limiting the amount the reaction could be heated at standard pressure.The other contributing factor that could cause lower yields is the electronic repulsion of these highly fluorinated compounds if both OH groups of the boronic acid became substituted; whereas the full protected form of F12-(OH)2 likely increases the reactivity of the boron ester due to enhanced inductive effect and ring strain.Distortion of a neutral boron atom out of plane can increase its Lewis acidity. 10 As synthesis of amidine 1 using F12-(OH)2 proved to be very efficient, additional nitriles were explored.One uniform method was developed for this comparative study shown in Scheme 4. Scheme 4. Amidination 2-APB using a perfluoropinacol protecting group. In the F12-(OH)2 trial, the temperature was placed at 60 °C to provide sufficient activation energy and each reaction was performed for 18 h.An excess of the nitrile and F12-(OH)2 was used and where available, the liquid nitrile was used as the solvent to help drive the reaction.In reactions with solid nitriles, 4 equivalents were used; in these cases, toluene was used solvent.To ensure sufficient boronate ester formation, 4 equivalents of the F12-(OH)2 was also used.In reactions where dinitrile compounds were used, an additional variation was performed reacting 2-APB in a 2:1 ratio with the nitrile to attempt to create di-amidine compounds.These variations and their reaction efficiencies (determined by NMR) can be seen in Table 2. This method proved significantly more efficient than our previous protocol, often delivering quantitative yields.Separation of products was simple, as most recrystalised in both the nitrile solvents and toluene.The most efficient was 3-methoxypropionitrile (3) mirroring the previous work 6 with quantitative yields; this nitrile even displayed 50% conversion at room temperature.By removing the ether linkage, 3-hydroxypropionitrile (2) had approximately half the conversion, and this may be owed to the alcohol competing with the F12-(OH)2 for boronic acid binding.Among the other products linear, non-aromatic nitriles appeared to be favored, but dinitrile molecules had lower reactivity.This was likely attributed to the electron withdrawing effect of the second nitrile reducing the nucleophilicity of the other nitrile toward boron. Unlike the NMR data previously obtained for the salicylate derivatives, 6 the perfluoropinacol protecting group caused the two signature N-H peaks to shift from 9.5 and 11 ppm to 7.5-8 and 11.5-12 ppm, respectively.This provided a unique avenue to evaluate which amine was responsible for which of the two proton peaks using 2D NMR.The 3-methoxypropionitrile amidine product was chosen for this analysis due its high purity and clean integration.The proton, HSQC and HMQC NMR data for this compound are shown in Figure 1. 
) and toluene was used as solvent d Reaction: For solid mono-nitriles (1 equiv.)or di-nitriles (0.5 equiv.)toluene was used as solvent The HMQC provides the best evidence as the location of the 7.95 ppm proton, when looking at the peak (δ 7.96, 126.02) it is clear that this proton is two bonds away from an aromatic carbon which could only be the secondary amine.When comparing the molecules shown below, the significant shift from 9.37 to 7.95 ppm indicates that the change in the boronic acid protecting group caused the largest shift observed. One apparent problem that surfaced with this perfluoropinacol method was the boronate deprotection; unlike the other fluorinated compounds, F12-(OH)2 could not be removed through hydrolysis and application of strong vacuum.Like the previous salicylic acid examples, it was resistant to the application of acids and bases owing to strong coordination by the amidine.One final option was explored using transesterification of polymer bound boronic acid shown in Scheme 5. Scheme 5. Deprotection via transesterification of perfluoropinacol boronate.This deprotection was adapted from Pennington et al. who used polystyrene-bound boronate which allowed for the separation of the deprotected product by simple filtration. 11There was a minor loss in product with the deprotection recovering 98% of the expected yield, but the product was pure and there was no trace of F12-(OH)2 detectable by 19 F NMR. Based on this reaction, not only is this a viable amidination method, but the deprotection allows it to be used in the production of free boronic acids for testing as potential carbohydrate sensors, the ultimate goal of our work. 12 Conclusions This work has identified a new method to produce amidines and, through initial optimization, is a viable method for a variety of nitriles.Compared to the traditional Pinner synthesis, the new method described herein is more efficient, likely due to interactions between the boronic acid and the activated nitrile.It is a one-step reaction that can be completed using simple conditions compared to methods which require preparatory steps that use strong acids or metal activation of nitriles. 13,14While a significant amount of optimization has been performed on the amidination, further improvements can be proposed.One option is to test how amidination is affected by dehydration methods to enable boronate ester formation as this was driven only by heating for operational simplicity.[17][18] Table 1 . Amidination reaction trialing a variety of halogenated alcholols (ROH) ability to promote amidination of 2
2022-10-21T15:38:17.190Z
2022-11-21T00:00:00.000
{ "year": 2022, "sha1": "b073bb91fe25c54b98cf14d57658fda625cbda06", "oa_license": "CCBY", "oa_url": "https://www.arkat-usa.org/get-file/77742/", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "74f507641b30ce7f53a8a0d153e313337856d212", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [] }
251995099
pes2o/s2orc
v3-fos-license
Numerical investigation of the effects of environmental conditions, droplet size, and social distance on interpersonal droplet transmission in a deep urban street canyon This study investigated the interpersonal droplet transmission between a healthy and an infected person in a deep and narrow street canyon using Computational Fluid Dynamics (CFD) simulation. The CFD simulations modelled various droplet sizes (Dp), background wind speeds (Uref), relative humidity (RH), and social distances (D) to estimate their effects on interpersonal droplet transmission. The results revealed noticeably opposite effects of these factors. For example, a small background wind moved droplets upward and suspended them in the air for a longer time, while high wind speeds distributed droplets in the street canyon with few of them retained in the air. Relative humidity had a trifling impact on the dispersion of small droplets (10 μm, 25 μm, 50 μm), whereas it significantly modified the dispersion of large droplets, especially at small background wind speeds. Furthermore, small droplets travelled longer distances in dry air and were either deposited on the surrounding buildings' walls or suspended in the air. In contrast, larger droplets in moist air rapidly deposited on the ground or the infected person's body. In dry air, 45% of large droplets were inhaled or suspended in the air, exposing pedestrians to contaminated droplets. Large social distances significantly diluted the small droplets but increased the infection risk from large droplets because of the complex interaction of the ambient airflow and gravity. It is recommended that pedestrians in deep urban street canyons keep social distances of 2 m and 4 m in the Windy condition and the Calm-Wet condition, respectively. Introduction The novel coronavirus pneumonia (Covid-19) has caused more than 5.7 million deaths globally to date and poses a grave danger to human health for the foreseeable future. Unlike other respiratory infectious diseases, Covid-19 is particularly menacing to public health because of its fast transmission route via interpersonal droplets. The research community has so far largely overlooked the infection risk in outdoor spaces, presuming a lower infection risk owing to the high ventilation rates and wind circulation in these spaces. However, emerging evidence confronts this common belief. Motivated by the importance of understanding, and the challenges in modelling, interpersonal droplet transmission in urban street canyons, the current study set its main goal to investigate how airborne transmission triggers outbreaks of respiratory diseases in urban outdoor spaces under the effects of environmental conditions, droplet characteristics, and social distance as a preventive measure. Two wind velocities (1.54 m/s and 6.68 m/s), two relative humidity levels (35% and 95%), and four social distances (0.5 m, 1 m, 2 m, and 4 m) were selected in the current study. Governing equations CFD modelling of interpersonal droplet transmission requires solving the mass continuity, momentum conservation, and energy conservation equations. These governing equations were solved by using the Reynolds-averaged Navier-Stokes (RANS) models. The Euler-Lagrangian approach was adopted to predict the trajectories of the evaporating droplets as dup/dt = FD(u − up) + Fg + Fa (Eq. 1), where up is the particle velocity, u is the wind velocity, FD(u − up) is the drag force, Fg is the gravitational force, and Fa is an additional force term.
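As a rough illustration of the Lagrangian droplet-tracking step described above, the sketch below integrates the droplet equation of motion for a single droplet released into a uniform wind. It is a simplification under assumed values, not the paper's ANSYS Fluent setup: it uses Stokes drag (reasonable for small droplets), constant air properties, and ignores evaporation and the additional force term; the droplet size, wind speed, and material properties are illustrative.

```python
# Minimal Lagrangian droplet-tracking sketch (illustrative only; the paper's
# simulations rely on ANSYS Fluent's discrete phase model, not this code).
# Assumptions: Stokes drag (reasonable for small droplets), constant uniform
# wind, water-like droplet density, no evaporation, no additional forces.
import numpy as np

RHO_AIR = 1.2       # kg/m^3, air density (assumed)
RHO_DROP = 1000.0   # kg/m^3, droplet density (assumed water-like)
MU_AIR = 1.8e-5     # Pa*s, dynamic viscosity of air (assumed)
G = np.array([0.0, 0.0, -9.81])  # gravitational acceleration, m/s^2

def track_droplet(dp, u_wind, x0, dt=1e-3, t_end=2.0):
    """Integrate du_p/dt = (u - u_p)/tau_p + g (1 - rho_air/rho_p) explicitly."""
    tau_p = RHO_DROP * dp ** 2 / (18.0 * MU_AIR)  # Stokes relaxation time (s)
    x = np.array(x0, dtype=float)
    u_p = np.zeros(3)                             # droplet released at rest
    positions = [x.copy()]
    for _ in range(int(t_end / dt)):              # dt must stay well below tau_p
        drag = (u_wind - u_p) / tau_p             # drag acceleration
        grav = G * (1.0 - RHO_AIR / RHO_DROP)     # gravity corrected for buoyancy
        u_p = u_p + dt * (drag + grav)
        x = x + dt * u_p
        positions.append(x.copy())
    return np.array(positions)

# Example: a 50 um droplet released at mouth height into a 1.54 m/s wind.
path = track_droplet(dp=50e-6, u_wind=np.array([1.54, 0.0, 0.0]), x0=[0.0, 0.0, 1.68])
print("final position (m):", path[-1])
```

For smaller droplets the relaxation time shrinks rapidly (it scales with the diameter squared), so the explicit time step would have to be reduced accordingly, or an implicit or analytic update used instead.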
The infection risk of the healthy person by exposure to contaminated droplets in the street canyon is estimated by calculating the viral loads of droplets. The viral load of droplets (VL) was assumed to be proportional to the droplet initial volume, without accounting for the effects of evaporation (Eq. 2): VL = c̄ N (π/6) Dp³, where c̄ is the mean viral load of Covid-19 (3.3 × 10⁶ copies/ml) as detected by To et al. [1] with saliva specimens, N is the number of droplets that the healthy person inhales, and Dp is the droplet initial diameter. Computational domain and solver settings The current study modelled one of the street canyons of an urban street canyon model tested in a boundary layer wind tunnel by Zhang et al. [2]. The wind tunnel test model was fabricated at a length scale of 1:200 and had a total of 25 street canyons. The current study modelled the 12th street canyon from the upstream edge of the wind tunnel model in the CFD simulation because the selected street canyon had a fully developed wind field (Figure 1). The CFD simulations of this study modelled the target street canyon with the full-scale dimensions Hs = 24 m and Bs = Ws = 10 m (Figure 2). The domain contained the street, portions of the upstream and downstream buildings, and the atmospheric boundary layer up to a height of 4Hs. The grid arrangement had a minimum wall-adjacent cell size of 0.1 m. Two persons of similar height (1.68 m), one infected with a respiratory disease (i.e., the infector) and the other healthy, were modelled in an upright position in the middle of the street canyon. Tetrahedral grids were generated inside the street canyon, while hexahedral grids were applied from the building roof to the top boundary. The total cell count of this grid configuration is 1,767,195. All CFD simulations in this study were performed using the ANSYS Fluent commercial software (v19.0) on the Tianhe-2 supercomputer with the support of the National Supercomputer Center in Guangzhou, China. The SIMPLE algorithm was employed for the pressure-velocity coupling. The second-order upwind scheme was applied for discretizing the convection and viscous terms of the governing equations. The diffusion-controlled model was used for modelling the heat and mass transfer of expelled droplets. Convergence of the simulation was assumed when the residuals reached the following minimum values: 10⁻⁸ for the x-, y- and z-momentum, k, and ε equations and 10⁻⁶ for continuity, and displayed no further decrease nor fluctuation with the number of iterations. The upward wind circulation in the weak background wind was governed by the buoyancy-driven wind flow near the human bodies, whereas the strong background wind suppressed it (Figure 3(b)). Effect of wind speed The trajectories of droplets with a 50 μm diameter shown in Figure 3(b) closely followed the wind flow patterns and moved upward and westerly in the weak and strong background winds, respectively, rather than moving downward under gravity. The dominant buoyancy-driven wind in the weak background wind transported droplets upward and away from the breathing zone of the healthy person, while the strong advection at high background wind speeds carried droplets to the listener's face, putting him at risk of breathing contaminated air. Figure 4 shows the droplet dispersion for two droplet sizes (25 μm and 100 μm) at three RH levels (35%, 60%, and 95%) at a background wind speed of 1.54 m/s. Figure 4(a) shows very similar dispersion patterns for the 25 μm droplets in the weak background wind at all RH levels.
Insignificant effects of RH on the dispersion of smallsized droplets attribute to quick evaporation of smallsized droplets. In contrast, the dispersion patterns of large-sized droplets (100μm) in weak background wind showed noticeably variations with RH (Figure 4(b)). At low RH = 35%, large droplets mainly transmitted upward with the updraft and rapidly became smaller in size (diameter ~ 40 -60μm) due to evaporation. Unlikely at low RH, large-sized droplets remained the same size at RH = 95% due to less evaporation at high relative humidity. Almost all large-sized particles at RH = 95% moved downward in between the two persons due to their heavy mass. Figure 5 shows the fate of droplets with four diameters (10μm, 25μm, 50μm, and 100μm) in the street canyon. Small droplets in low background wind speeds such as = 1.54m/s tend to deposit on the walls of the surrounding buildings or suspend in the air for long periods, thus posing a little risk of inhaling them or deposited on the healthy person's body. In contrast, healthy people are at high risk of inhaling small particles, and deposition them on their bodies in high background winds such as = 6.68m/s. Therefore, in case of an outbreak, it is advisable to search for contaminated droplets on the ground, and buildings' walls if the street canyon is exposed to weak background wind. If the background wind is strong, pedestrians should be tested for infection and should search their bodies and clothes to detect evidence of contaminated droplets. Figure 6 shows viral loads of contaminated droplets with four diameters in different environmental conditions and social distances. In addition, a threshold of 300 copies of virions is marked in every subplot as the infectious dose of COVID-19 as suggested by Saikat Basu [3]. The viral load of the droplet with a diameter of 100μm had noticeable variations with RH and social distances in weak background wind speed of 1.54 m/s (Figure 6(a)). In most cases, low RH levels led to higher viral loads (> 1000 copies) despite the difference in the viral load being minimal between RH = 35% and 60%. At 1 m distance from the infector, the viral load decreased 46% at RH = 35% and 17% at RH = 60%, and these reductions further grew into 70% for RH = 35% and 83% for RH = 60% at the 4 m social distance. Interestingly, the viral load was 23% higher at 1m social distance at RH = 60% than that at RH = 35%. In contrast to high viral loads in dry air, moist air (RH = 95%) resulted in the minimum viral loads (~ 0 to 40 copies) and also the minimum variations across all tested social distances. Exposure risk for droplets under four social distances The small-sized droplets (diameters 10μm and 25μm) in strong background wind (6.68 m/s) posed no infection risk to the healthy person at any social distance as the maximum viral loads were always smaller than 300 copies (Figures 6(b) and (c)). In addition, these cases displayed similar variations in viral load with social distances for all RH levels. Indeed, viral load at all RH levels steadily decreased with social distance as such the viral load reduced by 88% as the social distance increased from 0.5 m to 1m and eventually reached 0 at the 4 m social distance. In contrast. Moderate-sized droplets (diameter of 50 μm) at all RH levels posed some infection risks for the healthy person at 0.5 m away from the infector. 
The viral load at RH = 95% was 29% smaller than that at RH = 35% and 60%, this discrepancy diminished at 2m social distance and reached the zero viral load at 4m social distance (Figure 6(d)) for all RH levels. Viral load of large-sized droplets (diameter 100 μm) exhibited diverse variations in with social distance. However, they posed no infection risk to the healthy person as the maximum viral loads were less than 300 copies across all tested cases. Figure 6. Viral load of droplets in the air with RH = 35%, 60%, and 95% inhaled by the healthy person stood 0.5m, 1m, 2m, and 4m from the infected person for the conditions: (a) p = 100μm, = 1.54m/s, (b) p = 10μm, = 6.68m/s, (c) p = 25μm, = 6.68m/s, (d) p = 50μm, = 6.68m/s, I p = 100μm, = 6.68m/s. Concluding remarks This study investigated the effect of wind speed, relative humidity, droplet size, and social distance on interpersonal droplet dispersion in a two-dimensional deep urban street canyon using CFD simulation. The followings are the valuable insights derived from this study to understand interpersonal droplet dispersion in deep, urban street canyons and to stipulate guidelines to preserve public health in cities in the post-pandemic era: 1. Low background wind speeds reduce the interpersonal droplet transmission in street canyons by moving droplets upward with the buoyancy-driven wind flows. Conversely, high background wind speeds pose a greater infection risk for healthy people with the strong advection. The numbers of small to moderate-sized droplets inhaled by a healthy person in high background wind speeds is nearly nine times larger than that in low wind speeds. 2. Relative humidity has insignificant impacts on the dispersion of small-sized droplets because of the rapid evaporation of droplets. However, relative humidity substantially affects the dispersion of largesized droplets at low wind speeds. 3. Small droplets were either deposited on the surrounding buildings' walls or suspended in the air. In contrast, larger droplets in moist air rapidly deposited on the ground or the infected person's body. 4. A social distance of 4m is recommended for deep, urban street canyons filled with low wind speeds to minimize the infection risk by interpersonal droplet transmission. If the prevailing wind speeds are high, 2m social distance is adequate to reduce the infection risk of a person by 94% than he stands 0.5 m near to an infected person.
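To make the viral-load arithmetic concrete, the sketch below evaluates the copies carried by a batch of inhaled droplets and compares them with the 300-copy infectious-dose threshold used in Figure 6. The form of Eq. 2 used here (viral load from the total initial droplet volume) is the reconstruction assumed earlier in the text, and the inhaled droplet counts are invented placeholders rather than simulated values.

```python
# Illustrative check of inhaled viral load against an infectious-dose threshold,
# using the assumed form of Eq. 2 (viral load proportional to the total initial
# droplet volume). The 3.3e6 copies/ml mean viral load and the 300-copy
# threshold are quoted in the paper; the inhaled droplet counts are invented.
import math

C_MEAN = 3.3e6      # mean viral load, copies per ml of saliva (To et al.)
THRESHOLD = 300     # assumed infectious dose, copies (Basu)
ML_PER_M3 = 1.0e6   # 1 m^3 = 1e6 ml

def viral_load(n_inhaled, dp_m):
    """Viral copies carried by n_inhaled droplets of initial diameter dp_m (m)."""
    volume_m3 = n_inhaled * (math.pi / 6.0) * dp_m ** 3   # total initial volume
    return C_MEAN * volume_m3 * ML_PER_M3

for dp_um, n in [(10, 2000), (50, 400), (100, 60)]:       # hypothetical counts
    vl = viral_load(n, dp_um * 1e-6)
    verdict = "above" if vl > THRESHOLD else "below"
    print(f"Dp = {dp_um:3d} um, N = {n:4d}: VL = {vl:9.2f} copies ({verdict} threshold)")
```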
2022-09-02T15:27:40.419Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "73fea20edff805bf79c5614fe969c95e9fe4e3a6", "oa_license": "CCBY", "oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2022/23/e3sconf_roomvent2022_04029.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "ede4bff3a48ef734ec0c644c64159f382636152e", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
257679236
pes2o/s2orc
v3-fos-license
Magnitude of the Potential Screening Gap for Fabry Disease in Manitoba: A Population-Based Retrospective Cohort Study Background: Fabry disease is a rare disorder caused by the deficient activity of α-galactosidase A (GLA) that often leads to organ damage. Fabry disease can be treated with enzyme replacement or pharmacological therapy, but due to its rarity and nonspecific manifestations, it often goes undiagnosed. Mass screening for Fabry disease is impractical; however, a targeted screening program for high-risk individuals may uncover previously unknown cases. Objective: Our objective was to use population-level administrative health databases to identify patients at high risk of Fabry disease. Design: Retrospective cohort study. Setting: Population-level health administrative databases housed at the Manitoba Centre for Health Policy. Patients: All residents of Manitoba, Canada, between 1998 and 2018. Measurements: We ascertained the evidence of GLA testing in a cohort of patients at high risk of Fabry disease. Methods: Individuals without a hospitalization or prescription indicative of Fabry disease were included if they had evidence of 1 of 4 high-risk conditions for Fabry disease: (1) ischemic stroke <45 years of age, (2) idiopathic hypertrophic cardiomyopathy, (3) proteinuric chronic kidney disease or kidney failure of unknown cause, or (4) peripheral neuropathy. Patients were excluded if they had known contributing factors to these high-risk conditions. Those who remained and had no prior GLA testing were assigned a 0% to 4.2% probability of having Fabry disease depending on their high-risk condition and sex. Results: After applying exclusion criteria, 1386 individuals were identified as having at least 1 high-risk clinical condition for Fabry disease in Manitoba. There were 416 GLA tests conducted during the study period, and of those, 22 were conducted in individuals with at least 1 high-risk condition. This leaves a screening gap of 1364 individuals with a high-risk clinical condition for Fabry disease in Manitoba who have not been tested. At the end of the study period, 932 of those individuals were still alive and residing in Manitoba, and if screened today, we expect between 3 and 18 would test positive for Fabry disease. Limitations: The algorithms we used to identify our patients have not been validated elsewhere. Diagnoses of Fabry disease, idiopathic hypertrophic cardiomyopathy, and peripheral neuropathy were only available via hospitalizations and not physician claims. We were only able to capture GLA testing processed through public laboratories. Patients identified to be at high risk of Fabry disease by the algorithm did not undergo GLA testing due to a clinical rationale that we were unable to capture. Conclusions: Administrative health databases may be a useful tool to identify patients at higher risk of Fabry disease or other rare conditions. Further directions include designing a program to screen high-risk individuals for Fabry disease as identified by our administrative data algorithms. Introduction Fabry disease is an X-linked lysosomal storage disorder caused by the deficient activity of α-galactosidase A (GLA). This leads to the accumulation of its substrates, predominately globotriaosylceramide (GL-3) in lysosomes of tissues, resulting in cellular dysfunction that affects multiple organs. 1 Classic Fabry disease is a burdensome and life-limiting disease, shortening life-expectancy in affected individuals by an average of 17 to 20 years. 
2 Affected patients are at high risk of developing peripheral neuropathy, progressive proteinuric chronic kidney disease (CKD), kidney failure, fibrotic cardiac disease, progressive hypertrophic cardiomyopathy, and ischemic stroke. 3 An early diagnosis of Fabry disease is essential so that enzyme replacement or pharmacological chaperone therapy can be initiated, with the aim of limiting potentially irreversible organ damage. Symptoms of Fabry disease are often experienced in early childhood, but diagnosis is frequently delayed until adolescence or early adulthood due to its rarity and the nonspecific nature of its manifestations. It is likely that many individuals with mild or late-onset disease are never tested as their symptoms are attributed to other diagnoses. 4 The condition is diagnosed in men by GLA activity testing in blood leucocytes. In affected women, random X-inactivation may result in the expression of GLA activity in the plasma or leucocytes within the normal range. Hence, genetic testing for the GLA gene is necessary for a diagnosis of Fabry disease in women. 5,6 Furthermore, identification of a GLA variant is insufficient for a diagnosis of Fabry disease in heterozygous individuals in whom a pathogenic GLA variant is also required. 7 The prevalence of Fabry disease is estimated to be between 1 in 40 000 and 1 in 120 000 individuals. 8 These rates reflect clinically driven diagnoses; that is, testing is performed when symptoms are present or when the disease is suspected based on the familial pedigree. However, recent screening studies show that the prevalence of Fabry disease may be markedly higher than that previously estimated. [9][10][11][12][13] Acknowledging that many of these Fabry gene variants are of unclear significance, this discordance between the observed frequency of clinical disease and gene mutations admits the possibility that an uncertain proportion of Fabry disease patients may remain undiagnosed. Several clinical scenarios have been associated with a higher frequency of Fabry disease including ischemic stroke in a young patient, 14,15 idiopathic hypertrophic cardiomyopathy, 16,17 proteinuric CKD or kidney failure of unknown cause, 18,19 and peripheral neuropathy of unknown cause. 20,21 These scenarios are associated with up to a 4% probability of Fabry disease. Therefore, screening for Fabry disease would be reasonable in these high-risk populations; however, it is not commonly performed. As a result, there may exist a large screening gap, representing missed opportunities for diagnosis and treatment. Our objective was to identify patients at high risk of Fabry disease in Manitoba, Canada, using population-level administration data and to estimate the proportion of patients at risk who were actually tested for Fabry disease. Data Sources We performed a retrospective cohort study using linked population-level administrative databases from Manitoba, Canada, provided by the Manitoba Centre for Health Policy (MCHP) at the University of Manitoba. 22 Study Design and Population The study included all patients residing in Manitoba from January l, 1998, to December 31, 2018. We then stratified patients into 2 groups: (1) those who had evidence of existing Fabry disease and (2) those without any evidence of Fabry disease. 
An individual was identified as a Fabry disease case if they had a hospitalization during the study period with 1 of 2 specific International Classification of Diseases (ICD) diagnostic codes as either a primary or underlying diagnosis (ICD-9 code 272.2 or ICD-10 code E75.2) or if they had a prescription for enzyme replacement therapy (agalsidase alfa or agalsidase beta) or pharmacological chaperone therapy (migalastat). Prescriptions were identified via the following drug identification numbers: 02248965, 02248966, 02249057, 02468042. In the group of patients without any hospitalization record for Fabry disease, we created a cohort who had at least one of the following high-risk comorbidities: ischemic stroke <45 years of age, idiopathic hypertrophic cardiomyopathy, kidney failure or proteinuria of unknown cause, and peripheral neuropathy (Appendix 1). We excluded patients with known contributing factors for these high-risk conditions by assessing laboratory tests, prescription drugs, and ICD diagnostic codes from physician billing and hospitalizations (Appendix 2). For example, for ischemic stroke <45 years of age, we first gathered a group of patients with a diagnosis of ischemic stroke and then excluded different groups of patients: age ≥45 years, diabetes, hypertension, autoimmune disease, and cancer, until we had a final list of patients. The known contributing factors used for exclusion criteria were assessed at any time before the first occurrence of a diagnosis of a high-risk condition, and up to 1 year after the first occurrence of a diagnosis of a high-risk condition. Those who remained and did not have evidence of prior GLA testing facilitated through DSM laboratories were assigned a probability of having Fabry disease based on estimates from the literature. We used 2 systematic reviews to estimate the prevalence of Fabry disease in patients with stroke at a young age, idiopathic hypertrophic cardiomyopathy, and kidney failure. 19,25 For stroke at a young age, the probability of Fabry disease was estimated to be from 0.1% to 4.2% in men and 0.1% to 2.1% in women. In individuals with idiopathic hypertrophic cardiomyopathy, the probability of Fabry disease was estimated to range from 0.7% to 2.1% in men and 0.7% to 2.3% in women. And in patients with kidney failure on hemodialysis, the probability of Fabry disease was estimated to range from 0.2% to 0.3% in men and 0% to 0.2% in women. For our probability estimates of peripheral neuropathy of unknown cause, we applied a 0% to 4.2% probability for both men and women based on lower and higher end estimates from 7 studies. 20,21,[26][27][28][29][30] Laboratory Methods For males, the method of GLA testing was to assess GLA activity measurement in serum by fluorometry. In females, GLA testing was done via gene sequencing to identify Fabry disease-causing mutations. In some cases, GLA activity was measured in leukocytes, and GL-3 was measured in urine tests. Outcomes Our primary outcomes were to determine the screening gap for Fabry disease and the potential number of missed Fabry disease cases in Manitoba, Canada. The screening gap was defined as the total number of patients who were considered to be at high risk of the disease subtracting those who have already been tested. In those patients with a screening gap, we assessed whether they had visited a cardiologist, nephrologist, or neurologist through specialist codes provided in the Medical Services and Claims data set. 
The potential number of missed Fabry disease cases was provided as a range and defined as the number of patients who comprised the screening gap multiplied by the lower probability estimate and higher probability estimate for each high-risk condition stratified by sex. For patients with Fabry disease, we reported the clinical incidence during the study period, and the distributions of age (mean and standard deviation) and sex (frequency and percentage). All analyses were performed using SAS 9.4 (SAS Institute, Inc., Cary, North Carolina). Patients With Fabry Disease Throughout the study period, 2 063 280 instances of health insurance coverage were registered in the Manitoba Health Insurance Registry for 1 949 633 individuals: 980 512 (50.29%) men and 969 121 (49.71%) women. During the study period, 4 941 637 hospital admissions were reported in 1 280 066 patients. Among those hospitalizations, 186 were for Fabry disease associated with 17 patients: 10 (58.82%) men and 7 (41.16%) women. The average age of men when Fabry disease was diagnosed was 29 years (±24.50), and the average age of women at the time of diagnosis was 36 years (±24.45). The 17 cases of Fabry disease in 1 949 633 individuals correspond to a clinical incidence of 1 in 114 684 over the study period. There were 0 patients found to have a prescription for enzyme replacement or pharmacological chaperone therapy. Patients at Risk of Fabry Disease We found 60 447 individuals with a history of ischemic stroke, 11 684 individuals with potential idiopathic hypertrophic cardiomyopathy, 72 207 individuals with proteinuria or kidney failure, and 524 individuals with peripheral neuropathy ( Figure 1). Applying our exclusion criteria reduced these numbers to 204, 655, 391, and 138, respectively. After combining all 4 high-risk disease groups and removing duplicates, the total number of individuals with at least 1 high-risk clinical condition for Fabry disease in Manitoba was 1386 (729 men and 657 women). A total of 511 GLA tests were conducted in 416 unique patients (178 women and 238 men). The median age of these patients was 46 years (interquartile range: 18-57). In our Of those 1364 individuals, 932 were still alive and residing in Manitoba (504 men and 428 women) at the end of the study period according to the Manitoba Health Insurance Registry. If those patients were screened, we would estimate a range of 3 to 18 previously undiagnosed cases that would test positive for Fabry disease (Tables 1 and 2). Discussion In this retrospective cohort study of adults in Manitoba, we found an incidence of confirmed Fabry disease that is similar to clinical incidence rates that have been observed in other jurisdictions. 8 Throughout our study period, we found more than 1000 Manitobans at high risk who have not been screened for Fabry disease and had idiopathic hypertrophic cardiomyopathy, an ischemic stroke <45 years of age, kidney failure or proteinuria of unknown cause, or peripheral neuropathy without known contributing factors. This screening gap suggests a possibility, indicated by previous screening studies, that there may be a higher prevalence of Fabry disease in Manitoba than what is generally estimated based on the clinical diagnosis and pedigree analysis. [9][10][11][12][13] We found there may be between 3 and 18 individuals who have undiagnosed Fabry disease who could potentially benefit from enzyme replacement or pharmacological chaperone therapy if identified. 
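The screening-gap arithmetic described in the Outcomes section reduces to multiplying the untested, at-risk count in each condition-by-sex stratum by the corresponding prevalence range. A minimal sketch is shown below; the prevalence ranges are those quoted above, while the per-stratum counts are illustrative placeholders that merely sum to the 932 living untested individuals (the study's true breakdown is in Tables 1 and 2 and yields the reported range of 3 to 18 expected cases).

```python
# Sketch of the screening-gap arithmetic described in the Outcomes section.
# The (low, high) prevalence ranges are those quoted in the text; the untested
# per-stratum counts below are invented placeholders that only sum to the 932
# living, untested individuals (the real breakdown is in Tables 1 and 2).
PREVALENCE = {
    ("stroke_young", "M"): (0.001, 0.042), ("stroke_young", "F"): (0.001, 0.021),
    ("idiopathic_HCM", "M"): (0.007, 0.021), ("idiopathic_HCM", "F"): (0.007, 0.023),
    ("kidney_failure", "M"): (0.002, 0.003), ("kidney_failure", "F"): (0.000, 0.002),
    ("neuropathy", "M"): (0.000, 0.042), ("neuropathy", "F"): (0.000, 0.042),
}

def expected_missed_cases(untested_counts):
    """Return the (low, high) expected number of undiagnosed Fabry cases."""
    low = sum(n * PREVALENCE[stratum][0] for stratum, n in untested_counts.items())
    high = sum(n * PREVALENCE[stratum][1] for stratum, n in untested_counts.items())
    return low, high

# Placeholder example only -- not the study's actual per-stratum breakdown.
example_counts = {
    ("stroke_young", "M"): 75, ("stroke_young", "F"): 58,
    ("idiopathic_HCM", "M"): 255, ("idiopathic_HCM", "F"): 218,
    ("kidney_failure", "M"): 120, ("kidney_failure", "F"): 105,
    ("neuropathy", "M"): 54, ("neuropathy", "F"): 47,
}
low, high = expected_missed_cases(example_counts)
print(f"Screening gap: {sum(example_counts.values())} untested high-risk people")
print(f"Expected undiagnosed cases: {low:.1f} to {high:.1f}")
```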
Because Fabry disease is so rare, a screening program for the general public would not be cost-effective as the number of new cases discovered would be too few relative to the costs of such a program. For a screening program to detect rare diseases to be cost-effective, it must be conducted in specific target population. Previous research has shown that screening for diseases in specific at-risk population could be cost-effective. 31 However, identifying patients who meet such criteria prospectively would be a challenge. Therefore, population-level administrative health databases, such as those held at the MCHP, can be a useful tool to identify these patients, and although the identity of the individuals cannot be known to researchers, there are tools available to have the Ministry of Health contact these patients for screening. Large administrative data sets have been used in the past to identify rare diseases. Smith et al 32 developed an algorithm to find possible cases of muscular dystrophy through a linkage of private and public insurance programs and an allpayer hospital discharge data system in the United States and followed up to determine whether the algorithm correctly identified the possible cases. Of the 537 cases they found, a Individuals with more than 1 high-risk condition were counted more than once. Individuals with more than 1 high-risk condition were counted more than once. 260 were determined to be true cases of muscular dystrophy compared to a nationwide frequency of 5.1 per 100 000. 33 In addition to ICD-9 and ICD-10 codes associated with muscular dystrophy, they also collected demographics, other neurological disorders, setting of care, provider specialties for all physician visits, and prescriptions of corticosteroids, which improved the probability that true cases of muscular dystrophy would be identified. This demonstrates the additional value administrative data can provide to screen for a rare disease and confirms the feasibility of using administrative data for that purpose. Efforts to use large data sets to identify those at risk of Fabry disease are already underway. Jefferies et al 34 have developed an artificial intelligence (AI) tool following the predictive AI modeling methodology 35 to identify individuals with undiagnosed Fabry disease by using 4978 individuals with confirmed Fabry disease as a training data set to create their tool and tested their tool on a data set of 1 000 000 individuals without confirmed Fabry disease that was derived from various electronic medical records with health claims, medication history, and laboratory results. The researchers looked to see if the individuals the tool identified as high risk had a longitudinal medical history of various phenotypic signatures that medical experts would expect to see in patients with Fabry disease. They found that the tool showed very good discrimination with a c-statistic of 0.82 and projected a prevalence of Fabry disease of 1 in 2090 in the top 1% at-risk individuals without a confirmed disease, which is much higher than that in the general public. However, without actually screening these individuals, we cannot know for sure whether that prevalence is accurate, nor can we know which probability level assigned by the tool would be the appropriate cutoff to implement a cost-effective screening program. 
Our approach attempted to narrow the focus by identifying individuals with specific conditions known to be disproportionately associated with Fabry disease and excluded patients with known underlying causes of those conditions. This method may increase the probability that Fabry disease is the cause of those conditions, and therefore, we would expect a much high prevalence of true disease in our high-risk cohort than what was estimated in the top 1% at-risk individuals identified by the AI tool. There have also been cohort studies in Canada with smaller samples sizes that have screened for Fabry disease in high-risk patients. A cohort of 397 patients with proteinuric CKD and at least 1 clinical characteristic for Fabry disease found 7 abnormal GL-3 test results in 4 men and 3 women, but none of those 7 individuals had confirmed Fabry disease after GLA testing. 36 However, in contrast to our analysis, they did not exclude patients with an explanation for proteinuric CKD. Therefore, we may expect to see more than 0 cases in our subgroup of high-risk patients with proteinuric CKD or kidney failure of unknown cause. In a single-center, hospital-based cohort of 100 patients aged 16 to 55 years who had a cryptogenic ischemic stroke, 1 patient was discovered to have Fabry disease due to a GLA mutation and high GL-3 levels, corresponding to a prevalence of 1% (95% CI: <.01%-6%). 37 Meanwhile, in a prospective multicenter Canadian cohort of 365 patients aged 18 to 55 years with a cryptogenic ischemic stroke or transient ischemic attack, 2% of patients screened positive for Fabry disease through GLA testing, but the prevalence was 0.3% when limiting to the p.R118C variant. 38 These ranges of confirmed Fabry disease are within what we have estimated we may find in our cohort of patients with a stroke <45 years of age. This analysis has multiple strengths; as Canada has a universal health system, we were able to establish a true denominator for the incidence of Fabry disease in Manitoba and a comprehensive case definition to guide screening. In addition, we found a much higher percentage of previous GLA testing in our high-risk cohort than in those who did not meet our definition of being at high risk of Fabry disease (1.59% vs 0.02%), which is an indicator that the high-risk criteria chosen for this study are consistent with established clinical practice. However, there were limitations. The algorithms we used to identify our patients have not been validated elsewhere, and due to the specificity of the ICD-9 and ICD-10 codes, our algorithms were sometimes limited to data from reported hospitalizations only as we could not reliably search physician claims for Fabry disease, idiopathic hypertrophic cardiomyopathy, or peripheral neuropathy as these claims only document the first 3 digits of the ICD code for all diagnoses. Additionally, we were only able to capture proteinuria as well as GLA testing for genetic markers of Fabry disease processed through public laboratories. Private GLA testing arranged through manufacturers of enzyme replacement or pharmacological chaperone therapy would not have been captured, and as such, the true screening gap may be smaller than what is reported. Finally, there is a possibility that patients identified to be at high risk of Fabry disease by the algorithm did not undergo GLA testing due to a clinical rationale that we were unable to capture. 
Future aims should test the feasibility and diagnostic yield of a comprehensive screening strategy for patients identified to be at high risk of Fabry disease in Manitoba. Our data show that 92.9% of patients identified to be at high risk of Fabry disease in our study have visited a specialist, and a future provincial screening program may be able to optimize case detection independent of care provider clinical impression, which may be a bias. For those who test positive, disease management should include genetic counseling, consideration of enzyme replacement or pharmacological chaperone therapy, organ-specific therapies (eg, renin-angiotensin-aldosterone system inhibition, cardiovascular risk reduction), and interdisciplinary care. Results from such an initiative will inform future strategies to improve the identification and outcomes of patients with Fabry disease worldwide. Future analyses could leverage these data to develop cost-effectiveness and willingness-to-pay threshold analyses for Fabry disease screen-and-treat strategies. Conclusions Fabry disease is a rare, multifaceted disorder that profoundly impacts the quality of life. Recent analysis and screening programs including this report have suggested that the condition may be underdiagnosed. Administrative health databases may be a useful tool to identify patients at higher risk of Fabry disease or other rare conditions. Further directions include designing a comprehensive screening program to identify previously undiagnosed cases of Fabry disease. Ethics Approval and Consent to Participate Ethics approval for the study was provided by the University of Manitoba Health Research Ethics Board (file #: HS23595). Consent for Publication All authors have reviewed the manuscript and consented to publication. Availability of Data and Materials Data used in this article was derived from administrative health and social data as a secondary use. The data was provided under specific data sharing agreements only for approved use at Manitoba Centre for Health Policy (MCHP). The original source data is not owned by the researchers or MCHP and as such cannot be provided to a public repository. The original data source and approval for use has been noted in the acknowledgments of the article. Where necessary, source data specific to this article or project may be reviewed at MCHP with the consent of the original data providers, along with the required privacy and ethical review bodies. Requests to access to statistical and anonymous aggregate data associated with this paper, along with metadata describing the original source, can be made by contacting the corresponding author.
2023-03-23T15:31:47.798Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "f35522182979759b03aaa8cdfab950bc203cf2a5", "oa_license": "CCBYNC", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "618748bb644146c4ee335e16b18b622b887f6a3d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
15460762
pes2o/s2orc
v3-fos-license
Data from two different culture conditions of Thalassiosira weissflogii diatom and from cleaning procedures for obtaining monodisperse nanostructured biosilica Diatom microalgae produce nanoporous, rigid biosilica outer shells called frustules that exhibit an intricate nanostructured pore pattern. In this paper, two specific Thalassiosira weissflogii culture conditions and size-control procedures during diatom growth are described. Data from bright-field and fluorescence microscopy and evaluations of cell densities and cell parameters (k value and R value) according to the cell culture conditions are listed. Different cleaning procedures for obtaining bare frustules are described. In addition, FTIR and spectrofluorimetric analyses of the cleaned biosilica are shown. The data are related to the research article "Chemically Modified Diatoms Biosilica for Bone Cell Growth with Combined Drug-Delivery and Antioxidant Properties" [1]. Specifications Subject area: Chemistry, Biology, Phycology. More specific subject area: Diatoms, Biomaterials. Type of data: Images, text file, graph, figure. How data was acquired: FT-IR spectra were recorded on a Perkin-Elmer 1710 spectrophotometer using dry KBr pellets; fluorescence images were recorded on an Axiomat microscope (Zeiss, Germany) with fluorescence filter set 15 (exc. 546 nm, em. 590 nm); fluorescence spectra were recorded on a Varian Cary Eclipse spectrofluorimeter; cell densities and cell parameters were determined using a Bürker hemocytometer (Knittel Glass). Data format: Analyzed. Experimental factors: Diatom culturing in f/2 Guillard seawater-enriched medium and cleaning procedures via different acidic-oxidative treatments. Data accessibility: Data are provided with this article. Value of the data: Easily achievable conditions for growing diatom cells and cleaning procedures for biosilica extraction from living cultures will be helpful for researchers without a specific biological background. Evaluation of cell densities and cell parameters according to the two living cell culture conditions is reported for basic biological monitoring. These datasets are useful for obtaining monodisperse biosilica in high yields and could be helpful for the developing science of bionanotechnology. Data The data provided in this article represent the results from two different culture conditions of Thalassiosira weissflogii diatoms. In Fig. 1, all the steps to obtain cleaned biosilica from living cells are summarized. Figs. 2, 3 and 4 present cell density evaluations in the living cultures and the related cell size control according to the different culture conditions. FTIR signals of the extracted biosilica are also listed (Table 1), together with the emission spectra of the bare frustules (Fig. 5). Experimental design, materials and methods We analyzed each step of T. weissflogii growth by bright-field and fluorescence microscopy and evaluated cell density and cell parameters for cell viability estimation. The living diatoms underwent different acidic/oxidative cleaning procedures, and FTIR analyses and UV-vis spectra were performed on the solid biosilica obtained. Diatom culture conditions The diatom T. weissflogii (Culture Collection of Algae and Protozoa, CCAP strain 1085/10) was grown in a sterile f/2 Guillard seawater-enriched medium in PS flasks (25 mL) [1].
Stock solutions were prepared by adding 0.2 g/100 mL of NaCl to seawater (3.8-3.875% salinity) and buffering with NaOH (2 N) to a pH value of 8 [2]. The medium was enriched with Na2SiO3·9H2O, trace metals, and a vitamin mix [3]. The cultures were aerated manually 2 times per day to provide air and to prevent algae precipitation. In the first 4 days of subculture, glucose (0.55 mg L⁻¹) was added to enhance cell viability, and sodium sulfate (4.26 g L⁻¹) to increase photosynthesis yields, as reported in the literature [4]. Moreover, in order to avoid bacterial contamination, a low amount of kanamycin (0.5 mg L⁻¹) was added. Growth was controlled at 18-20 °C under a continuous photon flux density (PFD) provided by cool-white fluorescent tubes. The light source was placed 15 cm away from the cultures. The light/dark cycle was 12 h illumination/12 h darkness, and minimal air change (basal oxygen influx) was guaranteed by ventilation through sterile filters applied onto the tubes. Microscopy of living T. weissflogii diatoms A 10 μL droplet of culture was spotted on a cover slip and a glass slide was placed on top. After removing the surplus medium with cotton, an enamel seal was applied between the cover slip and the glass slide. Living T. weissflogii diatoms appeared as box-like structures in which the green chloroplast mottles were evident and compartmentalized close to the glass-box frustule (bright-field transmission microscopy, Fig. 4). Green mottles were red-fluorescent when observed using a bidimensional fluorescence microscope (in reflection, inset of Fig. 4). Control of the cell density was closely related to the fluorescence and bright-field microscopies: vital cells appeared green (yellowish-white mottles were instead evident in the chlorotic state), with intact box-like structures and high qualitative levels of red emission from the chloroplasts. Evaluation of cell densities and cell parameters according to cell culture conditions We evaluated cell density using standard counting in a Bürker hemocytometer (by monitoring the first 5 days, when cell density reached 5-8 × 10⁵ cells mL⁻¹), taking into account that cells normally were subcultured after 14-15 days of growth. Specifically, we analyzed cell density and cell parameters (k value and R value) according to the growth of diatoms without (control sample [ctrl]) and with the addition of an energizing mixture containing glucose (0.55 mg L⁻¹) and sodium sulfate (4.26 g L⁻¹) [G+SS]. We therefore performed cell density experiments for the [ctrl] sample and the [G+SS] sample, and their results are reported in Fig. 2. We calculated k values (number of generations) for both samples, assuming binary fission in T. weissflogii growth. The k value was obtained from the formula k = (log Nt − log Nt0)/log 2. We analyzed cultures with an Nt0 (starting cell density) of 226 cells/μL over a total time of 120 h. We thus calculated a k′ value for the [ctrl] culture and a k″ value for the [G+SS] culture, considering k′ and k″ as the number of generations in the time t (and considering the respective Nt values as final cell densities, as reported in Fig. 2). These cell parameters confirmed that glucose and sodium sulfate act as energizing nutrients which enhance cell growth (R′ > R). Glucose and sodium sulfate were also considered controllers of over-sizing (i.e., unbalanced growth in size). We monitored the over-size percentage (%) at 3 time points (0, 96 and 120 h, Fig.
3), which is the ratio between the number of cells with valve diameter >11 μm and the total cells, for both the [ctrl] and [G+SS] cultures. Glucose and sodium sulfate allowed us to obtain quasi-monodisperse diatom cultures. Generally, [G+SS] cultures do not exhibit cells with valve diameter <10 μm. Cleaning procedures of diatom cultures a. Cleaning with trifluoroacetic acid (TFA): a 5 mL suspension of cells was collected by centrifugation (900 rpm, 15 min). After removal of the supernatant, 100 μL of Millipore H2O were added to rinse the pellet; the procedure started by adding 3 drops of trifluoroacetic acid (TFA) and 20 μL KMnO4 + 20 μL H2O2 (a very low amount of oxidant, only to spark the cleaning reaction), and the pellet was kept at 90 °C for 5 h; the procedure continued with sonication for 5 s, and the cleaned diatoms were pelleted at 1000 rpm for 30 min. A series of washing steps was then performed (3x Millipore H2O), and the pellet was suspended in 500 μL of pure EtOH. b. Cleaning with hydrochloric acid and methanol (HCl + methanol): according to the literature [5], removal of organic matter was performed by several washes with 50/50 HCl/deionised water, deionised water, and methanol. A pellet of diatoms coming from a 5 mL suspension in culture was suspended in 50/50 HCl/deionised water for 1 min and then centrifuged at 1100 rpm for 5 min. Pellets were suspended in 50/50 HCl/deionised water for 1 min and centrifuged at 1150 rpm for 6 min, then again in 50/50 HCl/deionised water for 1 min and centrifuged at 1150 rpm for 12 min. This cycle of HCl washes was followed by three steps in deionised water. Lastly, samples were suspended in methanol for 2 min and centrifuged at 800 rpm for 10 min. The resulting pellet was dried using a pump and appeared white. Samples were weighed, sealed, and stored at 4 °C. c. Cleaning with sulfuric acid, potassium permanganate, and hydrogen peroxide: cells were previously collected by centrifugation (1000 rpm, 20 min) from a 5 mL suspension of living cultures and rinsed with bidistilled water (total volume of 200 μL), and the organic matter was removed through a mix of acid treatment and oxidation with H2SO4 (5 drops, 98% w/w, plus 1 drop of HCl 37% w/w) and KMnO4 (2 grains from the solid powder) at 80 °C for 30 min; after 2 s of sonication, a further oxidation step with hydrogen peroxide (200 μL, 30% w/w) at 90 °C for 4 h was performed [6]. This treatment was followed by repeated washing steps with bidistilled water and soft centrifugations (1100 rpm, 10 min). This cleaning procedure was reported to be the most efficient for hard organic matter removal from entire frustules, even if the entropic opening of the frustules into valves and girdle bands occurred. Biosilica deposition on glass slides After the cleaning procedure, we deposited the biosilica dispersion on glass. The glass was pre-treated with an H2SO4 (1 mL, 98% w/w)-hydrogen peroxide (2 mL, 30% w/w) mixture at 80 °C for 1 h. After this pre-treatment, the glass slide was dried. A 20 μL whitish dispersion in water was mixed with 20 μL of acetone in an Eppendorf tube. Then 15 μL of the enriched bottom part of the whitish pellet from the Eppendorf tube were put on a cleaned glass slide. A pre-annealing at 60 °C for 10-30 min was useful to dry the sample. If necessary (for multilayer samples), this deposition was repeated together with the pre-annealing. Microscopy monitoring was necessary to check the layer density and layer quality. If the deposited frustules did not appear transparent, a further washing of the pellet with a 1:1 acetone/DMSO solution was sufficient for a successful new deposition.
A final thermal treatment at 120-200°C for 2 h on a heating plate was performed to dry the samples and to bind the silica shells onto the glass slide surface. For the dried-frustule sample preparation, after gentle ethanol rinsing (50 mL on the white spots) and drying, the diatom shells remained intact and attached to the glass slide surface, ready for further investigations.
Time-lapse photoluminescence analyses (λ excitation 385 nm)
Using a UV excitation wavelength (385 nm), we recorded emission spectra of cleaned, dried frustules (layered on glass slides) before the first exposure (t = 0) and 30 min after the first exposure (t = 30 min). The results showed a general quenching of fluorescence above 550 nm (*, Fig. 5), while the blue-green region of the spectra remained stable (**, Fig. 5). This quenching occurred after 30 min (first UV exposure) and was possibly due to photo-degradation of the corresponding fluorophores [7]. The blue-green region (**, Fig. 5) remained stable and was not quenched [8].
Transparency document
The transparency document associated with this article can be found in the online version at http://dx.doi.org/10.1016/j.dib.2016.05.033.
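As a small illustration of the growth-parameter evaluation described above (k as the number of generations under binary fission, R as the corresponding rate, and the over-size percentage of cells with valve diameter >11 μm), the following sketch reproduces the arithmetic. It is a minimal example rather than the authors' original calculation: the starting density of 226 cells/μL is taken from the text, while the final densities and valve-diameter values are placeholders.

```python
import math

def generations(n0, nt):
    """Number of generations k under binary fission: k = (log Nt - log Nt0) / log 2."""
    return (math.log10(nt) - math.log10(n0)) / math.log10(2)

def growth_rate(k, hours):
    """Growth rate R expressed as generations per hour."""
    return k / hours

def oversize_fraction(valve_diameters_um, threshold_um=11.0):
    """Share of cells whose valve diameter exceeds the over-size threshold (>11 um)."""
    oversized = sum(1 for d in valve_diameters_um if d > threshold_um)
    return oversized / len(valve_diameters_um)

# Starting density from the text; the final densities below are placeholders.
n0 = 226          # cells per uL at t = 0
t_hours = 120     # total observation time
nt_ctrl = 600     # hypothetical final density, control culture
nt_gss = 900      # hypothetical final density, glucose + sodium sulfate culture

k_ctrl = generations(n0, nt_ctrl)
k_gss = generations(n0, nt_gss)
print(f"k' (ctrl)  = {k_ctrl:.2f} generations, R' = {growth_rate(k_ctrl, t_hours):.4f} gen/h")
print(f"k'' (G+SS) = {k_gss:.2f} generations, R'' = {growth_rate(k_gss, t_hours):.4f} gen/h")

# Over-size check on a hypothetical list of measured valve diameters (um).
diameters = [10.2, 10.8, 11.4, 10.5, 12.1, 10.9]
print(f"over-size percentage = {100 * oversize_fraction(diameters):.1f}%")
```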
Free Sunday - Loss, Deprivation or Welfare
This paper presents part of the results obtained by a comprehensive statistical analysis of public opinion on the issue of the work-free Sunday, based on a survey undertaken in the Republic of Croatia in October 2017. The research aims at answering the crucial question of whether the free Sunday can be considered only an economic issue or whether it deeply concerns almost all spheres of life. Moreover, the authors want to show and promote the free Sunday as a socio-economic phenomenon which has become a political and ideological issue, a fundamental human right and a true expression of human freedom and welfare. Besides, as a member of the European Sunday Alliance, Croatia is the first EU member state which promotes the free Sunday as one of the measures of active demographic policy. Along with classical statistical processing of the survey results, the methodology of this research has also involved hypothesis testing about differences in proportions as well as post-stratification of the two-step stratified random sample based on gender, age, size of residence, region and education level. More than two-thirds of respondents consider not working on Sundays important or exceptionally important and support the maximum limitation of such work.
… revolution that it has taken from the Gospel (Črpić, Džolan, 2014). These are very powerful ideas, the foundations of the development of our civilization. At the general level, many, if not all, will agree with these great ideas. However, when we ask how to be free today, a series of sub-questions opens up: Where are the modern forms of enslavement? Who are the new slaveholders of our bodies and souls? Whose dignity should we protect, why and how? In some European countries, especially in Croatia, that means looking at the situation of women and men employed in retail stores. The situation in which these people find themselves is closer to a state of slavery than to a state of freedom. In Croatia, people who work in retail stores are forced to work overtime and to work on holidays and Sundays. Formally, everything looks legitimate, but in reality they are not paid for overtime work or for their work on Sundays, or, if they are, they are shamelessly underpaid. The situation has negative consequences not only for their families but also for society as a whole. The damage is even worse since most of these employees are women. Overexploited and more absent than present in their families, women cannot effectively raise and educate their children. In the long run, this most likely generates and spreads violent behavior among children and young adolescents and, consequently, in society as a whole. That is why it concerns all citizens and our common freedom. Moreover, what is happening to employees in the retail trade today could soon happen to everyone else. Big capital knows no borders. In the Croatian language, the day devoted to man and God, the day of the family, the day that enables and promotes freedom, has a very indicative name, "ne-djelati", which means "not to work". When considering the theme of the non-working Sunday, it should be borne in mind that employees working overtime are more likely to get sick, especially from chronic illnesses. As a rule, big capital erodes all public expenditure, and this also applies to the reduction of the level of public health care. Consequently, an increase in social problems can be expected, as well as delinquent behavior among the impoverished.
A particularly vulnerable group is formed by chronically ill middle-aged employees who become technological surplus at their firms. They fall at the expense of the state, and the state should be protected in that sense; or, to put it better, the state should first protect its citizens, because that is its primary task. It is particularly necessary to look at the social significance of Sunday and its impact on family welfare. By joining the European Sunday Alliance, Croatia has enriched the scope of arguments for the work-free Sunday by extending its relevance to the demographic situation and trends. The Croatian Sunday Alliance has been the first to propose the free Sunday as one of the main measures of active demographic policy. The results of many scientific studies confirm the negative influence of overtime work and of work on holidays and Sundays on the stability of marriage and the family. Due to work overload, less time remains for quality marital relationships and the establishment of intimacy between spouses. Considering that partners spend most of their time together during weekends (Lyonette, Clark, 2009), spouses who have to work on weekends have even less time for each other than those who have normal working hours. Research on a sample of married couples in Zagreb, Croatia (Čudina, Obradović, 2006) shows that a partner's absence from home reduces the feeling of intimacy, which leads to loneliness and weakens mutual support. Besides, excessive stress associated with significant disruption of family dynamics and of the relationship between husband and wife can ultimately lead to divorce. An enormous increase in the divorce rate has devastating consequences not only for all members of the respective families but also for society as a whole. Fifty years ago, the main reasons for divorce were mostly behavioral and fairly concrete, such as alcoholism and neglect by a spouse (Chang, 2003). However, in the last two decades the reasons for divorce have become more affective and abstract in nature, such as feeling unloved or incompatible in life values and interests. According to the results of some studies, as the number of hours women spend at their workplaces increases, the probability of divorce increases (Lyonette, Clark, 2009). Since the annual number of divorces in a quarter of the counties of the Republic of Croatia has exceeded the number of new marriages, it is no wonder that the free Sunday is for the first time perceived and promoted in Croatia as one of the main measures of active demographic policy. This paper is organized as follows. After the introduction, the second part of the paper is dedicated to the historical background and legislation in some European countries. The topic of the third part is the case study of Croatia, consisting of a statistical analysis of the public opinion survey results. The public opinion survey on the free Sunday was carried out in October 2017, organized by the Franciscan Institute for the Culture of Peace from Split. The final section contains concluding remarks. Consulted literature is listed at the end of the paper.
Free Sunday Background and Contemporary Legislation
It can hardly be denied that Sunday as "the day of the Lord and as the day of rest" is a Christian institution (Tamarut, 1970). Sunday is celebrated in the church as the day of Christ's resurrection. Throughout the centuries, this fundamental reason for celebration has been sociologically infused with the fundamental Jewish social attitude that man needs rest.
Moreover, the Jewish tradition has always emphasized that this rest belongs to every man, not just to certain groups or individuals. Thus, the Israeli "Shabbat", honouring Saturday as a holy day, "appears already in the oldest parts of the Law (Exodus 20,8; 23,12; 34,21)" (Spicq, Grilot, 1993), and is based on the recognition of the basic need of people and animals for rest. Saturday in Israel was originally a "social-ethical institution, i.e. exclusively a day of rest" (Tamarut, 1970). Much later, the Roman Empire legalized the free day, and instead of the Sabbath, Sunday became the free day, as the day of Christ's resurrection. Namely, the legislative reforms of Emperor Constantine changed attitudes to life considerably. The Edict of Milan, issued by Constantine I and Licinius in 313, stopped the persecution of Christians. Other reforms followed, and Sunday was declared a non-working public holiday by law in 321. Work in the fields was exempted from this rest "because it is not always a good time for them" (Tamarut, 1970). Thus, the kinds of work defined as "farming or hard labour" were, through the later centuries, exempted by this law for practical reasons. Modernization, urbanization, secularization and industrialization brought the question of the free Sunday back to the centre of attention. "Machine" and "profit" caused an increase in working hours, and working conditions became worse and worse. Based on the scientific analysis of that time, the benefit of workers' rest was recognized and the free Sunday was reinstated in legal regulations. Thus, the implementation of the free Sunday in the legislation of Western countries at the end of the 19th and the beginning of the 20th century was not motivated by spirituality or the workers' needs; the benefit of the workers' rest was already recognized "for human-social and economic reasons" (Tamarut, 1970). It is certainly the tradition of thousands of years of practice that influenced the choice of Sunday as the non-working day. Thus, the non-working Sunday was reintroduced in Switzerland in 1877, in Germany in 1891, in France in 1906 and in Italy in 1907. It should be noted that Protestant countries such as England and the United States had not abolished the free Sunday (Sabotič, 2005). Due to the character of this paper, it is possible to provide only a concise description of the Sunday work models that are currently valid in certain European countries. In the EU, about 30% of employees regularly work on Sundays. In Austria this percentage is 16%, tending to grow, and in Germany 23%. According to Eurostat data, Sunday is most often a working day in England and Denmark, while Spain and Italy record the lowest levels of Sunday work. Work on Sunday (in retail trade) is prohibited in Belgium, Denmark, France, Greece, Italy, Norway, Germany, Luxembourg and Austria. However, there are exceptions in all of these countries: work in retail shops in Norway is allowed only three weeks before Christmas, from 14:00 to 20:00; in Greece, there are no work restrictions in smaller places and tourist zones; in the Netherlands, it is allowed to work 12 Sundays per year; in Spain, 8 Sundays per year; in Finland, only from 12:00 to 21:00. In Germany, since 2006, the regulation of working time has been devolved to the federal states. Thus, work on Sunday in Bavaria is prohibited altogether, but in most other federal states it is allowed on only 4-6 Sundays per year. In Berlin, shops were allowed to open on 10 Sundays, from 1 pm to 8 pm.
However, in December 2009 the Constitutional Court of Germany issued a ruling according to which shops may open on Sundays only exceptionally. This ruling emphasized that activities typical of workdays cannot be transferred to Sunday, and that "pure financial interests of the shop owners are not strong enough to open trade on Sunday". Thanks to a strong civic fight for the free Sunday in Austria, the Constitutional Court there issued a decision in 2012 which prohibited work on Sundays. Shops can be open on working days from 6:00 to 21:00 and on Saturdays from 6:00 to 18:00. They are closed on Saturdays after 18:00, on Sundays, and on holidays and feasts. Exceptions are railway stations, airports, ports and public events. Due to the multiple and various negative effects of work on Sundays, numerous initiatives, associations and mass movements for the free Sunday have been established throughout modern Europe in recent years. Among them are the European Sunday Alliance and the European Citizens' Initiative for a work-free Sunday in Europe, which advocate that the non-working Sunday be implemented in European legislation and be valid throughout the EU.
Case Study of Croatia
As an integral part of the EU, Croatia shares its destiny, and all that has been said previously applies to Croatia too. Croatia is a member of all the important European initiatives for the free Sunday and is particularly active in the two mentioned above. The Croatian Sunday Alliance (CSA) was established in 2017. It is an association of trade unions, academic, social and religious institutions, as well as NGOs. As already mentioned, the CSA, as a member of the European Sunday Alliance, has significantly contributed to the promotion of the free Sunday as a measure of active demographic policy. The free Sunday is particularly opposed by the owners of big capital. For decades they have been misrepresenting the realities of work on Sundays, falsely claiming, for example, that it increases economic activity and the employment rate in Croatia. However, no economic theory proves that, and the available indicators refute it. Moreover, data from the Croatian Ministry of Finance Tax Department, Zagreb, are presented in Table 1. The table shows that in all analyzed years buyers in retail stores most often buy on Saturday, followed by Friday, Thursday, Wednesday, Monday and Tuesday; they go shopping on Sunday rather rarely, and on holidays rarely. These buying habits are relatively stable over the entire analyzed period. In 2015 and 2016 purchases on Saturdays were 89% higher than on Sundays, and in 2017 the sales realized on Saturdays exceeded Sunday sales by 84% on average. The difference is even higher when comparing retail turnover on Saturdays and on holidays. The data show that in 2015 the average Saturday turnover was as much as 151% higher than the average turnover on holidays. In 2016 and 2017 the average retail turnover on Saturdays exceeded the turnover realized on holidays by 144%. The CSA has significantly contributed to the promotion of the values of the free Sunday. On its behalf, the Franciscan Institute for the Culture of Peace from Split conducted a survey in October 2017 to find out the prevailing public attitudes to the value of the non-working Sunday, which the authors have used as the case study of Croatia. The public opinion survey was carried out by the specialized agency Ipsos Public Affairs and was conducted by telephone interviews.
Since the views of the Croatian population aged 18 and over were studied, a two-step stratified random sample was used with the following stages: • the place of residence was chosen by random selection within the stratum - the strata are defined by region (6) and size of the place of residence (4 categories); • the household was chosen by random selection of the phone number; • the respondent was chosen by quota. The final realized sample consists of 603 respondents. As far as the methodology is concerned, hypothesis testing about the difference between the proportions of two statistical populations has been applied, at the usual 5% level of significance (a short computational sketch of such a test is given at the end of this subsection). Post-stratification of the two-step stratified random sample has been carried out based on gender, age, size of residence, region and education level. From the results of this comprehensive research, only some basic attitudes and answers to the relevant issues are presented in this paper. The first is presented in Figure 1, with the answers to the question: "How often, if ever, do you go to the following places on Sundays?" The stratified random sample enabled the authors to statistically analyze the answers to the aforementioned question by demographic features. The frequency of going to the respective places has been statistically analyzed with respect to the answer "I rarely or never visit them on Sundays". The analysis has been carried out for all of the places visited on Sundays and the results are very similar for each of them. Therefore, more precise results on visiting shopping centres on Sundays, broken down by demographic categories, are presented below. Statistical analysis has shown that there is no statistically significant difference in the frequency of going to a shopping centre on Sundays between men and women. The answer "I rarely or never go shopping on Sundays" was given by 48% of male and 50% of female respondents. The same answer was given by 47% of the urban and 52% of the rural population, so neither do their responses differ statistically significantly. Analysis of the respondents' profiles according to the level of education reveals that 45% of those with elementary school education, 51% with secondary education and 46% with high school or college education "rarely or never" go shopping on Sundays. Respondents up to 30 years of age visit shopping centres on Sundays more often (41% rarely or never), while those between 45 and 60 years of age do so the least often (59% rarely or never). The answer "rarely or never" was given by 44% of respondents between 30 and 44 years of age, and those over 60 do not differ much (45%). The analysis by particular regions of Croatia shows that 56% of the population of Zagreb and its surroundings go shopping on Sundays "rarely or never"; in Slavonia 41%; in Kordun and Lika 57%; in North Croatia 43%; in Primorje and Istria 46%; and in Dalmatia 49%. More than two-thirds (67.5%) of respondents claim that not working on Sundays within their regular job is important or exceptionally important for them. What they think about work on Sundays in particular can be seen distinctly from Figure 2. The share of different attitudes of respondents, obtained by simulating a Sunday work ban, has been statistically analyzed according to their basic demographic features.
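The group comparisons reported here amount to two-proportion hypothesis tests at the 5% significance level. The sketch below is an illustrative reconstruction of such a test, not the authors' actual computation; the sample counts used are placeholders chosen only to mirror the kind of split described in the text (e.g. 48% of men versus 50% of women giving a particular answer).

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two population proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)          # pooled proportion under H0: p1 == p2
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Placeholder counts: 144 of 300 men vs 152 of 303 women answering
# "I rarely or never go shopping on Sundays".
z, p = two_proportion_z_test(x1=144, n1=300, x2=152, n2=303)
print(f"z = {z:.2f}, p = {p:.3f}, significant at the 5% level: {p < 0.05}")
```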
Although the analysis has covered the answers to all the questions dealing with Sunday shopping in bakeries, small grocery stores, supermarkets, shopping centres, kiosks, gas stations and pharmacies, only a part of these results is shown in Table 2. Table 2 presents attitudes on adjustment to the situation of a Sunday work ban. On average, 61% of respondents in the entire sample answered that "it wouldn't be a problem" to adjust to a situation in which shopping centres do not work on Sundays, and there is no statistically significant difference in their answers by gender. Namely, 60% of male respondents would get used to a work-free Sunday in shopping centres without problems, while women show a somewhat higher percentage (62%). However, the analysis based on age shows statistically significant differences in adjustment to a work-free Sunday in the shopping centre. Adjustment to such a situation would be most difficult for the oldest respondents and easiest for those aged 45 to 60. As many as 74% of this latter group answered that the adjustment would not be a problem, whereas only 44% of respondents above 60 years of age gave the same answer. Considering the type of residence, there is no statistically significant difference in adjustment to a work-free Sunday between the rural (59%) and urban (62%) population. In contrast to this demographic feature, the analysis based on the respondents' level of education shows statistically significant differences. While only 48% of the respondents with elementary school education would not find it problematic to get used to a work-free Sunday in the shopping centre, as many as 68% of respondents with the highest level of education gave such an answer. Differences in answers are also statistically significant according to the respondents' region of residence. Namely, only 46% of respondents from Slavonia answered that they would adjust without problems to a work-free Sunday in the shopping centre, while the same answer was obtained from as many as 70% of respondents from Zagreb and its surroundings. The answers analyzed according to the work status of respondents also show statistically significant differences. While 51% of students would have no problems getting used to a work-free Sunday in the shopping centre, as many as 66% of the employed respondents gave the same answer. Among the respondents who work on Sundays, 70% of those with permanent work contracts would adjust without problems to a work-free Sunday in the shopping centre, while only 56% of those who are sometimes engaged on a part-time basis for Sunday work give the same answer. The share of different attitudes of respondents in the simulated case of a Sunday work ban in pharmacies has also been statistically analyzed according to their basic demographic indicators. Percentages of the answer "it wouldn't be a problem" to adjust to a Sunday work ban in pharmacies are presented in the third column of Table 2. The answers "hard and very hard" regarding adjustment to a work-free Sunday in the shopping centre are illustrated in the fourth column of Table 2 and reveal the same logic as the data presented in the first row of the same table. This answer was given by 6% of all respondents, with equal percentages of men and women. There are no statistically significant differences in the answers when analyzed according to gender, education level or type of residence.
Namely, the percentage of respondents who would find it "hard and very hard" to adjust to a work-free Sunday in the shopping centre remains at the population average of 6% across all stated demographic categories. The respondents' answers analyzed according to work status show statistically significant differences. Students answered in as many as 12% of cases that they would find it "hard and very hard" to adjust to a work-free Sunday in the shopping centre, while unemployed persons gave such an answer in only 2% of cases. The fifth and sixth columns of Table 2 present, by demographic features, the percentage of respondents who would find it "hard and very hard" to adjust to a work-free Sunday in the pharmacy. In order to simulate the effects of the work-free Sunday as a measure of demographic policy, the respondents were asked: "Would you mind if your family member had to work on Sundays?" The answers are illustrated in Figure 3 and have been statistically analyzed according to demographic features in Table 3. The answer "I would not mind if my family member had to work on Sundays" was given by 42% of respondents, more precisely by 46% of men and 39% of women. According to the type of residence, the share of answers to this question does not show a statistically significant difference, since the answers of both the rural and the urban population correspond to the average of the interviewed population. The highest share of this answer is recorded in the youngest age group of respondents (58% in the group aged below 30 years), while it reaches only 32% in the age group 45 to 60. The answers of respondents with different levels of education also show statistically significant differences: 37% of those with elementary school education would not mind if their family member had to work on Sundays, whereas those with the highest level of education would not mind in 56% of cases. Regional differences in this answer are minor. The respondents from the area of Banovina, Lika and Kordun record the lowest percentage of such answers (35%), while Dalmatia shows the highest (45%). The survey shows statistically significant differences in the answers analyzed according to work status: 36% of pensioners would not mind if their family member had to work on Sundays, while among unemployed respondents this answer reached as much as 60%. Considering that the answer "I do not know" to this question (together with those who did not answer at all) was given on average by 2.7% of respondents, the relations of statistically significant differences according to demographic features remain unchanged and are illustrated in the last row of Table 3. Table 4 provides a review of the answers to the question "How important is it to you personally not to work on Sundays?", expressed in percentages and analyzed according to demographic features. For more than two-thirds of respondents (68%), it is important that they personally do not have to work on Sundays. These answers show statistically significant differences between male and female respondents. Among respondents aged below 30, 61% answered that it is important to them that they personally do not have to work on Sundays, and this percentage increases to 73% in the group aged 30 to 40.
The rural population considers it important not to work on Sundays in 63% of cases, and urban respondents in as many as 71%. Regional differences are similar: 62% of the population of North Croatia and of the Croatian Littoral & Istria declare that it is important for them not to work on Sundays, and the same applies to 73% of the Dalmatian population. According to work status, the employed respondents show the highest rate of those for whom it is important that they personally do not have to work on Sundays (74%), compared with only 46% of students. The modalities and frequencies of the answers to the question "How important is it to you personally not to work on Sundays?" are illustrated in the corresponding figure. It is highly indicative for the overall conclusions of this research that as many as 70% of all respondents support the limitation of work on Sundays. Following the stratification methodology, the results of the statistical analysis of the frequency of respondents' answers according to their demographic features are illustrated in Table 5. It is interesting to single out, by demographic features, those groups of respondents who supported the limitation of work on Sundays more often than the average of the sample as a whole. These are women, who support the limitation of Sunday work in 73% of cases. By age, the largest shares of supporters of the Sunday work limitation are in the groups aged 30 to 45 (74%) and 45 to 60 (77%). Respondents with secondary school education also support the Sunday work limitation above average (73%), as does the urban population (71%). The regions where respondents support the Sunday work limitation above average are North Croatia (73%), Slavonia (75%) and the Croatian Littoral & Istria (76%). Analysis of the respondents according to work status shows that the work-free Sunday is most strongly (75%) urged by employed persons.
Conclusion
This paper tries to answer the crucial question of whether the work-free Sunday can be considered only an economic issue or whether it deeply concerns almost all segments of life. Moreover, through the results of the public opinion research, the authors want to show and promote the work-free Sunday as a major socio-economic phenomenon which in the modern world has become a political and ideological issue and a fundamental human right which, inter alia, embodies the true notion of human freedom and welfare. Besides, as a member of the European Sunday Alliance, Croatia is the first EU member state which promotes the free Sunday as one of the measures of active demographic policy. The case study is based on the results of the public opinion research about the work-free Sunday undertaken in October 2017. Along with the classical statistical methods applied in processing the survey results, the methodology of this research has also involved hypothesis testing about the difference between the proportions of two statistical populations, as well as post-stratification of the two-step stratified random sample on the basis of gender, age, size of residence, region and education level. In recent months, after a long silence, the free Sunday has again appeared as a topic of discussion in Croatia. Namely, for decades the owners of big capital have been misrepresenting the realities of work on Sundays, falsely claiming, for example, that it increases economic activity and the employment rate in Croatia.
However, no economic theory proves that, and the available indicators refute it. Moreover, the data of the Croatian Ministry of Finance Tax Department, Zagreb, show by far the lowest fiscal turnover in retail trade on Sundays and holidays. In all analyzed years, buyers in retail trade most often buy on Saturday, followed by Friday, Thursday, Wednesday, Monday and Tuesday, rather rarely on Sunday, and almost never on holidays. These features remain relatively stable over the entire analyzed period. In keeping with the character of this paper, the most important results of the comprehensive statistical analysis have been highlighted. Almost one-third of the respondents visit the bakery at least every second Sunday, and almost one-fourth of them go to small grocery stores and supermarkets. Pensioners show the lowest Sunday shopping rate; by contrast, young respondents, in particular students, practice Sunday shopping more often. Almost all the respondents agree that it is important for the family to be together on Sundays and think that Sunday work is bad, with numerous and long-lasting adverse consequences in all segments of life. In general, the respondents in all subsets defined by demographic features largely agree with the statements directed against Sunday work and do not agree with those which justify it. If the issue of the free Sunday is raised as a dominant political question, the respondents' answers may serve as a general conclusion based on the results of the public opinion research. A share of 47% "fully support the maximum limitation of Sunday work". If we add to this the 24% of those who "mostly support it", the final result reveals a picture very different from the conclusions often imposed by the media: as many as 70% of respondents support the limitation of Sunday work. In 27% of the cases the respondents do not support the limitation of Sunday work, and 2.7% of them have not declared their attitude. This work is only a part of ongoing research dealing with the free Sunday phenomenon, and the authors have presented the beginnings of an extensive statistical analysis of public opinion. Therefore, the results of further research by the same authors, promoting the values of the non-working Sunday as a basis of well-being, can be expected soon.
Lymphocyte subsets in peripheral blood of patients with moderate-to-severe versus mild plaque psoriasis
In several studies peripheral blood T-cells have been quantified, yet few data are available on lymphocyte subsets in moderate-to-severe psoriasis (in terms of extent and activity of lesions) versus mild psoriasis. The objective is to compare lymphocyte subsets in peripheral blood of patients with moderate-to-severe disease (PASI score ≥12) to patients with mild disease (PASI score <12) and to healthy subjects. By means of flow cytometry, lymphocytes in peripheral blood of 27 patients with psoriasis and 10 healthy controls were characterized. The absolute number of total lymphocytes was markedly decreased in patients with moderate-to-severe psoriasis as compared to patients with mild disease and normal subjects. Cell counts of all analysed subsets were found to be increased in more severe psoriasis, except for CD8+CD45RO+ cells. The under-representation of CD8+CD45RO+ cells is compatible with the dynamics of acquired immunity, which requires a time lag after the relapse of the lesions to differentiate from CD45RA+ naive cells.
Introduction
Psoriasis is a common inflammatory skin disease characterized by hyperproliferation of keratinocytes. It is a well established fact that T-cells play an important role in the pathogenesis of psoriasis [6,9]. Indeed, treatments such as anti-CD4 [5,15,19], anti-CD11 [5,13] and anti-CD25 [13], targeting specific T-cells, have been shown to be effective in psoriasis. Several T-cell subsets seem to play a primary role. Major classes are CD4+, CD8+, CD45RO+ and CD45RA+ T-cells. Recently, NK-T cells have been suggested to play an additional role in the regulation of immunity, through the release of cytokines [3,14,16]. NK-T cells are cells bearing both T-cell receptors and natural killer-cell specific receptors such as CD161. It has been shown that circulating NK-T cells are significantly reduced in some autoimmune diseases [8,17]. The large body of research on the role of T-cells in psoriasis has been focused on lesional T-cells. This has resulted in the hypothesis that there is a final common pathway responsible for the development of chronic plaque psoriasis, which may involve specific antigen recognition by T-cells that results in stimulation of keratinocyte proliferation. In psoriasis, CD4+ cells seem to be important mainly in the early phase of plaque development. These cells are found predominantly in the upper dermis, whereas CD8+ cells have proven to be relevant during chronic phases and are found predominantly in the epidermis [21], although others have found an earlier involvement of CD8+ T-cells in the development of psoriatic plaques [22]. A large number of activated T-cells have been shown in clinically involved skin of psoriatic patients, but uninvolved skin also contains a significant number of T-cells. In healthy skin, however, T-cells can hardly be found. It has been suggested that circulating T-cells are activated and subsequently recruited from the circulation during the development of psoriatic plaques [2,10]. These studies indicated that the total amount of T-cells in patients is comparable to or slightly increased over that found in normal subjects. No relevant differences have been shown with respect to total T-cell counts and T-cell subsets, including CD4+, CD8+, CD45RO+ and CD45RA+ cell counts.
Only a few studies comprise a comparison between mild and more severe forms of psoriasis, using clinical severity indicators such as the Psoriasis Area and Severity Index (PASI score) [4,18]. The present study compares specific circulating lymphocyte subsets in patients with mild psoriasis, patients with moderate-to-severe psoriasis and normal subjects.
Patients and controls
Fifteen patients with moderate-to-severe psoriasis vulgaris (12 male and 3 female, aged 19-66, mean age 46.2 years) and 12 patients with mild psoriasis (7 male and 5 female, aged 34-72, mean age 52.8 years) from our outpatient department participated in this study. Mean PASI scores in the two groups were 20.97 ± 2.55 (mean ± SEM) and 6.11 ± 1.27, respectively. Patients were classified into one of the two groups based on their PASI score. We considered all patients with a PASI score <12 as having "mild", and all patients with a PASI >12 as having "moderate-to-severe" psoriasis [4,18]. Patients had been free of any systemic therapy for at least 4 weeks and had not used any topical therapy in the last 2 weeks. Peripheral blood was obtained from all subjects with their written informed consent. Control samples were collected from 10 healthy volunteers without any history or signs of skin disease (4 male and 6 female, aged 24-49, mean age 33.8 years).
Preparation of PBMCs
For each patient, the exact amount of blood withdrawn was determined by measuring the height of the column of blood in the tube, which was subsequently converted to the corresponding volume in microliters. PBMCs were isolated from heparinized blood by density centrifugation on polyester gel (Becton Dickinson Vacutainer™ CPT™, Franklin Lakes, NJ, USA). After filtering through a 70 μm cell strainer, cells were washed twice. For flow cytometry, single-cell suspensions (concentration 5 × 10⁵ cells ml⁻¹) were stained in 1% fetal calf serum in phosphate-buffered saline (PBS) at concentrations recommended by the manufacturer. Additionally, 450 μl of propidium iodide (PI) in PBS was added to each sample in order to exclude non-viable cells from analysis.
Flow cytometric analysis
Cells were analyzed with an EPICS Elite flow cytometer (Coulter, Luton, UK), using the forward scatter as a discriminator. Lymphocytes were identified by gating on CD45 and on side and forward scatter properties. All samples were processed within 18 h of phlebotomy.
Determination of absolute numbers of lymphocytes
Enumeration of positive cells was performed by adding Flow-Count beads (Beckman Coulter, Fullerton, CA, USA) to the cell suspension of PBMCs. An automatic stop after a defined number of beads was programmed on the flow cytometer. Absolute counts of lymphocyte subsets per blood sample were calculated by determining the ratio of the cell population to the counted beads and then multiplying this ratio by the number of beads in the tube. By dividing this count by the number of microliters of blood in the tube, an absolute lymphocyte count in cells μl⁻¹ of blood withdrawn was obtained. Analysis was performed with Verity software.
Statistical analysis
Data entry and analysis were performed using Statistica 6.0 software. Means and standard deviations were calculated for each parameter and were tested with one-way ANOVA. Differences were considered statistically significant at p < 0.05.
Results
In total, 27 patients with psoriasis and 10 normal subjects without signs or symptoms of skin disease were included in the present study.
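To make the bead-based absolute-count calculation described in the Methods above concrete, the following sketch reproduces the arithmetic: the ratio of gated cells to counted beads is scaled by the known number of beads in the tube and then divided by the blood volume. It is an illustrative reconstruction, not the authors' acquisition software, and all numeric values are placeholders.

```python
def absolute_count(cells_gated, beads_counted, beads_in_tube, blood_volume_ul):
    """Absolute lymphocyte (subset) count in cells per microliter of blood withdrawn.

    cells_gated / beads_counted estimates the cell-to-bead ratio during acquisition;
    scaling by the total beads added to the tube gives the total cells in the tube,
    and dividing by the blood volume expresses the result per microliter of blood.
    """
    total_cells = (cells_gated / beads_counted) * beads_in_tube
    return total_cells / blood_volume_ul

# Placeholder example: 12,000 gated lymphocytes against 5,000 counted beads,
# 50,000 beads added to the tube, 120 uL of blood withdrawn.
print(f"{absolute_count(12_000, 5_000, 50_000, 120):.0f} cells/uL")
```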
Table 1 summarizes the demographic details and psoriasis-related characteristics of the patients with moderate-to-severe psoriasis, the patients with mild psoriasis and the normal subjects. Four out of 12 patients with mild psoriasis had shown a minimal increase in the extent of lesions; in the others the skin abnormalities were stable. Of the 15 patients with moderate-to-severe psoriasis, 12 had an increase in the extent of lesions during the previous 4 weeks, whereas the other three had been stable.
Total cell counts
The total number of gated cells (per microliter of blood withdrawn) in normal subjects was 2,220 ± 301 cells μl⁻¹ (mean ± SEM). The cell count in mild psoriasis was 1,923 ± 230 cells μl⁻¹. In contrast, patients with moderate-to-severe psoriasis had cell counts of 498 ± 95 cells μl⁻¹. No statistically significant difference was observed between normal subjects and patients with mild psoriasis. However, a highly significant difference (p < 0.0001) was observed between mild and severe psoriasis. The CD4+/CD8+ ratios in all three groups were in the same range, meaning no shift occurred in the distribution of these subgroups of peripheral blood lymphocytes. The total number of lymphocytes and the CD4+ and CD8+ counts of the patients with moderate-to-severe disease were also routinely measured at the GLP-certified Central Haematology Laboratory of our hospital, where the whole-blood lysing method is the standard. Comparison of the two methods reveals an acceptable agreement in results, with counts lower by a maximum of 10%.
Lymphocyte subsets
Lymphocyte subsets as a percentage of total gated cells are summarized in Fig. 1. CD4+, CD8+, CD45RA+ and CD45RO+ cells are increased in patients with severe psoriasis as compared to patients with mild psoriasis and normal subjects (p < 0.0001 in all cases). No significant differences were observed between patients with mild psoriasis and normal subjects.
Lymphocytes expressing NK cell receptors
CD94+ and CD161+ cells, expressed as a percentage of total cells, were increased in patients with moderate-to-severe psoriasis as compared to patients with mild psoriasis and normal subjects (p < 0.05 and p < 0.001, respectively). Figure 4 summarizes these observations. Again, no statistically significant differences were observed between normal subjects and patients with a mild form of psoriasis. With respect to double-stained cells, we observed an increase in both CD8+CD94+ cells and CD8+CD161+ cells (markers for NK-T subpopulations), expressed as a percentage of the total number of cells, in patients with a more extensive form of psoriasis as compared to patients with mild psoriasis and normal subjects (p < 0.008 and p < 0.009, respectively; results shown in Fig. 5).
Fig. 4: CD94 and CD161 subpopulations, expressed as a percentage of total lymphocyte counts. Comparison between normal subjects, patients with mild and patients with moderate-to-severe psoriasis (*: p < 0.05 and p < 0.001, respectively).
Analysis of circulating lymphocytes did not reveal a difference between patients with spreading lesions and those with stable plaque psoriasis.
Discussion
The results of this study show marked differences in absolute cell counts between the three patient groups. Our main goal was to investigate whether there is a difference in the occurrence of biomarkers on circulating lymphocytes in mild psoriasis versus more severe forms of the disease.
One of the criteria to qualify for treatment with major systemic therapies such as a biological is often a minimum PASI score of 12 [4,18]. We were interested in whether this cut-off point also implies a differentiation between mild and moderate/severe psoriasis in terms of T-cell and NK-T-cell subsets. The results of the present study have shown us that there are indeed marked differences between mild psoriasis and the more severe forms of disease defined by the cut-off point PASI 12. A decrease of total lymphocyte counts, as a percentage of PBMC, and an increase in the percentage of CD4+CD45RO+ cells in patients with moderate-to-severe psoriasis has been reported before [10]. These patients (32 subjects) had an average PASI of 25.7 (10-48), and all had had an exacerbation following infection. Patients had a mixed morphology, from guttata psoriasis to chronic plaque psoriasis. In the same study, however, a decrease in the amount of CD4+ cells was described, which is in contrast with the findings in our study. Other authors have reported no significant difference in total T-cells as a percentage of PBMC counts between patients with psoriasis and normal subjects [3]. These patients (14 subjects) had an average PASI of 23.5 ± 10.6. However, no information was given as to whether these patients had relapsing psoriasis. In the present study, the patients with severe psoriasis had an extension of lesions during the wash-out period (average increase in PASI: 4.9). Therefore, the severity in our patients comprises not only the extent of lesions but also the activity of lesions with progression, and therefore approaches the characteristics of the patients reported by Lecewicz-Torún et al. [10], who also reported a decrease of T-cells in peripheral blood in psoriasis (expressed as a percentage of total PBMC). The decrease of lymphocyte counts, expressed as a percentage, in the peripheral blood of patients with moderate-to-severe psoriasis is the reverse of the substantial accumulation of lymphocytes in psoriatic skin. The most likely explanation for this paradox is an increased influx of cells into the skin of patients with severe disease. Further studies, in which simultaneous measurements of lesional and circulating T-cells will be performed, need to be carried out to support this hypothesis. A striking observation is the increased percentage of both CD4+ cells and CD8+ cells in patients with moderate-to-severe psoriasis as compared to mild psoriasis and normal subjects, without a change in the CD4/CD8 ratio. So far, reports on the CD4+/CD8+ ratio in psoriasis have been at variance [10,11]. With respect to the double-labeled lymphocytes, both the percentage of CD4+CD45RO+ cells and the percentage of CD4+CD45RA+ cells are increased in patients with moderate-to-severe psoriasis, in contrast to the absolute decrease of total lymphocytes. The relative count of both subsets is increased, whereas their ratio shows a tendency to decrease (p = 0.0509). Thus, with increasing severity of disease, the increase in CD4+CD45RA+ cells seems to exceed the increase in CD4+CD45RO+ cells. An explanation for this phenomenon is the preferential migration of CD4+CD45RO+ cells into lesional skin in extensive disease, resulting in a relative decrease of this subset of CD4+ lymphocytes in peripheral blood. This hypothesis supports earlier studies in which CD4+CD45RO+ cells have been shown to be the major players in psoriatic plaques. Indeed, CD4+ cells appear in a very early phase of the psoriatic process.
In a previous study, CD4+ cells have been detected in the distant uninvolved skin of patients with active psoriasis [22]. The percentage of CD8+CD45RO+ cells in patients with moderate-to-severe psoriasis approached the percentages in patients with mild psoriasis and normal subjects, in contrast with the CD8+CD45RA+ cells, which did show a significant increase in peripheral blood. The impressive under-representation of CD8+CD45RO+ lymphocytes as compared to CD8+CD45RA+ lymphocytes in moderate-to-severe psoriasis suggests a preferential influx of CD8+CD45RO+ cells into the skin. Both CD8+ and CD45RO+ cells are abundantly present in psoriatic lesional skin. Previously, we have shown that in the outer margin of the spreading psoriatic plaque the numbers of CD8+ cells and CD45RO+ cells were increased [22]. Therefore, the relative under-representation in peripheral blood of CD45RO+ cells, in particular the CD8+CD45RO+ cells, most likely reflects the influx of these cells into the skin in patients with moderate-to-severe psoriasis in an active phase of disease. However, further studies are needed to support this hypothesis. Lymphocyte counts, preferably T-cells, in skin as well as in the peripheral blood should be monitored over a longer period of time, e.g., during exacerbation and during disease-free intervals. The CD8+CD94+ cells and CD8+CD161+ cells show a marked increase in patients with moderate-to-severe psoriasis as compared to mild psoriasis and normal subjects. The increases of these lymphocytes expressing NK receptors in peripheral blood are accompanied by increases of these cells in the psoriatic lesion. The results of the present study suggest that in the peripheral blood of patients with moderate-to-severe psoriasis, in contrast to the vast presence of T-cells in skin lesions, the absolute lymphocyte counts are profoundly decreased, without any change of the CD4/CD8 ratio. The relative under-representation of CD45RO+ cells, as compared to CD45RA+ cells, in peripheral blood most likely results from a massive influx of memory effector T-cells into the psoriatic lesions in an active phase of disease. Both in blood and in skin of patients with moderate-to-severe psoriasis, the percentages of CD94+ and CD161+ cells are increased. These results are compatible with the functions of these cells. CD45RO+ cells differentiate from CD45RA+ cells and represent acquired immunity, whereas NK-T cells are able to act without prior sensitization, representing innate immunity [3]. In further research, more selective lymphocyte markers are required in order to investigate the role of peripheral blood T-cells in the pathogenesis of psoriasis, and correlation studies between peripheral blood T-cells and T-cells in skin are required in order to understand the dynamics of compartmentalization of T-cell subsets belonging to either the acquired or the innate immune system in psoriasis.
Open Access
This article is distributed under the terms of the Creative Commons Attribution Noncommercial License, which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.